Dec 09 12:06:26 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 09 12:06:26 crc restorecon[4696]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:26 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 
12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 12:06:27 crc 
restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 12:06:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 09 12:06:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 09 12:06:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 12:06:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 
12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
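The entries above close the run for pod 5225d0e4-402f-4861-b410-819f433b1803; the run for pod 1d611f23-29be-4491-8495-bee1670e935f begins below. Every path under /var/lib/kubelet/pods carries a per-pod SELinux MCS category pair (s0:c7,c13 here), assigned by the container runtime when the pod sandbox is created so that one pod's containers cannot read another pod's files; restorecon treats those pod-scoped labels as an admin customization it must not undo, which is what each "not reset as customized by admin" line records. A minimal sketch for summarizing such runs, assuming only the journal format visible above (the script and its field choices are illustrative, not part of any Red Hat tooling):

import re
import sys
from collections import Counter

# Matches one restorecon skip entry as it appears in this journal:
#   ... restorecon[PID]: /var/lib/kubelet/pods/<uid>/<path> not reset
#   as customized by admin to <selinux context>
ENTRY = re.compile(
    r"restorecon\[\d+\]: (/var/lib/kubelet/pods/([^/\s]+)\S*) "
    r"not reset as customized by admin to (\S+)"
)

def summarize(stream):
    """Count skipped paths per (pod UID, SELinux context)."""
    counts = Counter()
    for line in stream:
        for _path, pod, context in ENTRY.findall(line):
            counts[(pod, context)] += 1
    return counts

if __name__ == "__main__":
    # e.g. journalctl -b -t restorecon | python3 restorecon_summary.py
    # (the script name is hypothetical)
    for (pod, context), n in sorted(summarize(sys.stdin).items()):
        print(f"{pod}  {context}  {n} paths")

One category pair per pod is the expected shape of the output; a pod directory showing several different pairs (as with the static pods further down in this log) typically reflects files left behind by successive pod incarnations, each of which received its own pair.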
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:06:27 crc restorecon[4696]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 12:06:27 crc restorecon[4696]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 12:06:27 crc restorecon[4696]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 09 12:06:27 crc kubenswrapper[4970]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 09 12:06:27 crc kubenswrapper[4970]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 09 12:06:27 crc kubenswrapper[4970]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 09 12:06:27 crc kubenswrapper[4970]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
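With relabeling finished, kubenswrapper (the CRC wrapper around the kubelet binary) starts and immediately logs the deprecation warnings above and the two that follow: each named flag still works, but is meant to live in the KubeletConfiguration file passed via --config. A hedged sketch of the correspondence, assuming the kubelet.config.k8s.io/v1beta1 field names from the upstream kubelet config reference (the mapping table is my reading of these warnings plus that reference, not something the kubelet emits):

import re
import sys

# Assumed flag -> KubeletConfiguration field mapping; the entries follow
# the guidance printed in the warnings above plus the upstream config
# reference, and are illustrative only.
CONFIG_FIELD = {
    "--container-runtime-endpoint": "containerRuntimeEndpoint",
    "--volume-plugin-dir": "volumePluginDir",
    "--register-with-taints": "registerWithTaints",
    "--system-reserved": "systemReserved",
    "--minimum-container-ttl-duration": "evictionHard / evictionSoft",
    "--pod-infra-container-image": "(no field; the CRI runtime supplies the sandbox image)",
}

DEPRECATED = re.compile(r"Flag (--[\w-]+) has been deprecated")

def report(stream):
    """Print each deprecated flag seen in the journal, once, with the
    config-file field that replaces it."""
    seen = set()
    for line in stream:
        for flag in DEPRECATED.findall(line):
            if flag not in seen:
                seen.add(flag)
                print(f"{flag:35} -> {CONFIG_FIELD.get(flag, 'see kubelet config reference')}")

if __name__ == "__main__":
    report(sys.stdin)

Feeding it output like journalctl -b -u kubelet would list the six flags warned about in this boot exactly once each.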
Dec 09 12:06:27 crc kubenswrapper[4970]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 09 12:06:27 crc kubenswrapper[4970]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.659025 4970 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662053 4970 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662077 4970 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662085 4970 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662574 4970 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662592 4970 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662599 4970 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662605 4970 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662611 4970 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662617 4970 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662623 4970 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662628 4970 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662633 4970 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662638 4970 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662643 4970 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662648 4970 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662654 4970 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662658 4970 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662663 4970 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662668 4970 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662673 4970 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
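The W-level run above continues below: feature_gate.go warns once per gate name it does not recognize. These names (AutomatedEtcdBackup, ExternalOIDC, NewOLM, and so on) look like OpenShift-level feature gates that the node's rendered kubelet configuration carries wholesale in its featureGates map; the kubelet knows only the upstream Kubernetes gates, so it warns about and ignores the rest, which appears to be routine on OpenShift/CRC nodes rather than a fault. A small sketch to collapse the noise into a unique list, assuming the message format shown (the script is illustrative):

import re
import sys

UNRECOGNIZED = re.compile(r"unrecognized feature gate: (\w+)")

def unknown_gates(stream):
    """Collect unique unrecognized gate names in first-seen order."""
    seen = []
    for line in stream:
        for name in UNRECOGNIZED.findall(line):
            if name not in seen:
                seen.append(name)
    return seen

if __name__ == "__main__":
    names = unknown_gates(sys.stdin)
    print(f"{len(names)} unrecognized gates:")
    for name in names:
        print(" ", name)

Comparing that list across boots is a quick way to spot a genuinely new or misspelled gate hiding in the expected OpenShift noise.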
12:06:27.662678 4970 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662682 4970 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662687 4970 feature_gate.go:330] unrecognized feature gate: Example Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662692 4970 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662697 4970 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662701 4970 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662706 4970 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662710 4970 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662715 4970 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662720 4970 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662725 4970 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662731 4970 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662737 4970 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662742 4970 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662748 4970 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662753 4970 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662759 4970 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662764 4970 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662769 4970 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662774 4970 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662779 4970 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662783 4970 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662788 4970 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662793 4970 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662797 4970 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662803 4970 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 
09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662807 4970 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662812 4970 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662817 4970 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662821 4970 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662827 4970 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662832 4970 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662837 4970 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662843 4970 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662849 4970 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662855 4970 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662860 4970 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662865 4970 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662872 4970 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662880 4970 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
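Each kubenswrapper entry carries a standard klog header, e.g. W1209 12:06:27.662053 4970 feature_gate.go:330]: one severity letter (I/W/E/F), the month and day, a wall-clock time with microseconds, the emitting process ID (4970, matching the kubenswrapper[4970] syslog tag), and the source file and line. A small sketch that decodes it:

    import re

    # Decode a klog header: severity, mmdd, hh:mm:ss.micros, pid, file:line].
    klog = re.compile(
        r"(?P<sev>[IWEF])(?P<mmdd>\d{4})\s+"
        r"(?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+"
        r"(?P<pid>\d+)\s+(?P<src>[\w./-]+:\d+)\]"
    )
    m = klog.search("W1209 12:06:27.662053 4970 feature_gate.go:330]")
    print(m.groupdict())
    # {'sev': 'W', 'mmdd': '1209', 'time': '12:06:27.662053',
    #  'pid': '4970', 'src': 'feature_gate.go:330'}

The feature_gate.go:330 lines in this run are therefore warnings, not errors: startup continues, and the unknown gate names are simply not applied.
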
Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662887 4970 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662892 4970 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662899 4970 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662905 4970 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662909 4970 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662914 4970 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662919 4970 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662924 4970 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662929 4970 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662934 4970 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.662939 4970 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663028 4970 flags.go:64] FLAG: --address="0.0.0.0" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663038 4970 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663047 4970 flags.go:64] FLAG: --anonymous-auth="true" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663054 4970 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663062 4970 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663093 4970 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663105 4970 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663114 4970 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663122 4970 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663129 4970 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663138 4970 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663146 4970 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663153 4970 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663160 4970 flags.go:64] FLAG: --cgroup-root="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663165 4970 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663171 4970 flags.go:64] FLAG: --client-ca-file="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663176 4970 flags.go:64] FLAG: --cloud-config="" 
Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663182 4970 flags.go:64] FLAG: --cloud-provider="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663187 4970 flags.go:64] FLAG: --cluster-dns="[]" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663196 4970 flags.go:64] FLAG: --cluster-domain="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663202 4970 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663208 4970 flags.go:64] FLAG: --config-dir="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663213 4970 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663219 4970 flags.go:64] FLAG: --container-log-max-files="5" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663226 4970 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663232 4970 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663237 4970 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663263 4970 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663269 4970 flags.go:64] FLAG: --contention-profiling="false" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663274 4970 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663280 4970 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663286 4970 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663291 4970 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663299 4970 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663305 4970 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663310 4970 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663316 4970 flags.go:64] FLAG: --enable-load-reader="false" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663321 4970 flags.go:64] FLAG: --enable-server="true" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663327 4970 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663333 4970 flags.go:64] FLAG: --event-burst="100" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663339 4970 flags.go:64] FLAG: --event-qps="50" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663344 4970 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663350 4970 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663355 4970 flags.go:64] FLAG: --eviction-hard="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663362 4970 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663368 4970 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663373 4970 flags.go:64] FLAG: 
--eviction-pressure-transition-period="5m0s" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663379 4970 flags.go:64] FLAG: --eviction-soft="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663385 4970 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663391 4970 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663396 4970 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663402 4970 flags.go:64] FLAG: --experimental-mounter-path="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663408 4970 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663414 4970 flags.go:64] FLAG: --fail-swap-on="true" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663420 4970 flags.go:64] FLAG: --feature-gates="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663426 4970 flags.go:64] FLAG: --file-check-frequency="20s" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663433 4970 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663439 4970 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663445 4970 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663451 4970 flags.go:64] FLAG: --healthz-port="10248" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663457 4970 flags.go:64] FLAG: --help="false" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663462 4970 flags.go:64] FLAG: --hostname-override="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663468 4970 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663474 4970 flags.go:64] FLAG: --http-check-frequency="20s" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663479 4970 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663484 4970 flags.go:64] FLAG: --image-credential-provider-config="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663490 4970 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663495 4970 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663501 4970 flags.go:64] FLAG: --image-service-endpoint="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663507 4970 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663512 4970 flags.go:64] FLAG: --kube-api-burst="100" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663518 4970 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663523 4970 flags.go:64] FLAG: --kube-api-qps="50" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663529 4970 flags.go:64] FLAG: --kube-reserved="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663535 4970 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663540 4970 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663545 4970 flags.go:64] FLAG: 
--kubelet-cgroups="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663551 4970 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663556 4970 flags.go:64] FLAG: --lock-file="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663562 4970 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663567 4970 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663576 4970 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663585 4970 flags.go:64] FLAG: --log-json-split-stream="false" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663590 4970 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663596 4970 flags.go:64] FLAG: --log-text-split-stream="false" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663602 4970 flags.go:64] FLAG: --logging-format="text" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663607 4970 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663613 4970 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663618 4970 flags.go:64] FLAG: --manifest-url="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663624 4970 flags.go:64] FLAG: --manifest-url-header="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663635 4970 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663641 4970 flags.go:64] FLAG: --max-open-files="1000000" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663648 4970 flags.go:64] FLAG: --max-pods="110" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663654 4970 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663659 4970 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663665 4970 flags.go:64] FLAG: --memory-manager-policy="None" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663670 4970 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663676 4970 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663682 4970 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663687 4970 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663699 4970 flags.go:64] FLAG: --node-status-max-images="50" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663704 4970 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663710 4970 flags.go:64] FLAG: --oom-score-adj="-999" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663716 4970 flags.go:64] FLAG: --pod-cidr="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663721 4970 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Dec 09 12:06:27 crc 
kubenswrapper[4970]: I1209 12:06:27.663730 4970 flags.go:64] FLAG: --pod-manifest-path="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663735 4970 flags.go:64] FLAG: --pod-max-pids="-1" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663741 4970 flags.go:64] FLAG: --pods-per-core="0" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663747 4970 flags.go:64] FLAG: --port="10250" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663752 4970 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663758 4970 flags.go:64] FLAG: --provider-id="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663763 4970 flags.go:64] FLAG: --qos-reserved="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663769 4970 flags.go:64] FLAG: --read-only-port="10255" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663777 4970 flags.go:64] FLAG: --register-node="true" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663783 4970 flags.go:64] FLAG: --register-schedulable="true" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663788 4970 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663797 4970 flags.go:64] FLAG: --registry-burst="10" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663803 4970 flags.go:64] FLAG: --registry-qps="5" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663809 4970 flags.go:64] FLAG: --reserved-cpus="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663814 4970 flags.go:64] FLAG: --reserved-memory="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663821 4970 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663827 4970 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663832 4970 flags.go:64] FLAG: --rotate-certificates="false" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663838 4970 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663844 4970 flags.go:64] FLAG: --runonce="false" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663849 4970 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663855 4970 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663861 4970 flags.go:64] FLAG: --seccomp-default="false" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663867 4970 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663873 4970 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663879 4970 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663884 4970 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663890 4970 flags.go:64] FLAG: --storage-driver-password="root" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663895 4970 flags.go:64] FLAG: --storage-driver-secure="false" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663901 4970 flags.go:64] FLAG: --storage-driver-table="stats" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663907 4970 flags.go:64] FLAG: 
--storage-driver-user="root" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663912 4970 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663918 4970 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663924 4970 flags.go:64] FLAG: --system-cgroups="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663929 4970 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663938 4970 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663944 4970 flags.go:64] FLAG: --tls-cert-file="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663949 4970 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663956 4970 flags.go:64] FLAG: --tls-min-version="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663962 4970 flags.go:64] FLAG: --tls-private-key-file="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663969 4970 flags.go:64] FLAG: --topology-manager-policy="none" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663975 4970 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663980 4970 flags.go:64] FLAG: --topology-manager-scope="container" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663986 4970 flags.go:64] FLAG: --v="2" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.663998 4970 flags.go:64] FLAG: --version="false" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.664005 4970 flags.go:64] FLAG: --vmodule="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.664012 4970 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.664018 4970 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664193 4970 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664199 4970 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664205 4970 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664210 4970 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664216 4970 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664221 4970 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664226 4970 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664231 4970 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664236 4970 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664241 4970 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664264 4970 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 09 12:06:27 crc 
kubenswrapper[4970]: W1209 12:06:27.664269 4970 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664274 4970 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664279 4970 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664285 4970 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664290 4970 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664295 4970 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664300 4970 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664305 4970 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664310 4970 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664315 4970 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664321 4970 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664328 4970 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664333 4970 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664341 4970 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664347 4970 feature_gate.go:330] unrecognized feature gate: Example Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664352 4970 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664356 4970 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664363 4970 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664368 4970 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664373 4970 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664378 4970 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664382 4970 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664387 4970 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664394 4970 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
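The flags.go:64 FLAG: dump above is the kubelet echoing its entire effective command line at startup (verbose output; the dump itself shows --v="2"), which makes it convenient for diffing configuration between nodes. A sketch that folds the dump into a dict, again assuming a saved copy in kubelet.log:

    import re

    # Fold the 'FLAG: --name="value"' dump into a dict for lookup/diffing.
    text = open("kubelet.log").read()
    flags = dict(re.findall(r'FLAG: (--[\w-]+)="(.*?)"', text))
    print(flags["--node-ip"])   # 192.168.126.11
    print(flags["--max-pods"])  # 110
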
Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664400 4970 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664405 4970 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664411 4970 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664417 4970 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664422 4970 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664427 4970 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664432 4970 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664437 4970 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664442 4970 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664447 4970 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664452 4970 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664457 4970 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664462 4970 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664467 4970 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664471 4970 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664476 4970 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664483 4970 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664489 4970 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664496 4970 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664502 4970 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664508 4970 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664515 4970 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664521 4970 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664525 4970 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664530 4970 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664537 4970 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664542 4970 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664547 4970 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664552 4970 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664557 4970 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664562 4970 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664567 4970 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664572 4970 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664577 4970 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664582 4970 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.664587 4970 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.664601 4970 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.677116 4970 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.677172 4970 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677277 4970 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677289 4970 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677295 4970 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677301 4970 feature_gate.go:330] unrecognized feature gate: 
PinnedImages Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677307 4970 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677313 4970 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677319 4970 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677324 4970 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677329 4970 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677335 4970 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677341 4970 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677346 4970 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677351 4970 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677356 4970 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677361 4970 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677402 4970 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677409 4970 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677414 4970 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677419 4970 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677423 4970 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677428 4970 feature_gate.go:330] unrecognized feature gate: Example Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677432 4970 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677435 4970 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677439 4970 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677443 4970 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677447 4970 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677451 4970 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677458 4970 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677462 4970 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 09 12:06:27 crc 
kubenswrapper[4970]: W1209 12:06:27.677466 4970 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677470 4970 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677474 4970 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677477 4970 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677483 4970 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677491 4970 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677495 4970 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677501 4970 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677505 4970 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677510 4970 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677514 4970 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677517 4970 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677521 4970 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677527 4970 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677533 4970 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677537 4970 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677541 4970 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677545 4970 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677549 4970 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677553 4970 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677557 4970 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677560 4970 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677564 4970 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677569 4970 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677574 4970 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677578 4970 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677583 4970 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677589 4970 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677594 4970 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677598 4970 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677604 4970 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677610 4970 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677614 4970 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677619 4970 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677623 4970 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677627 4970 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677631 4970 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677635 4970 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677640 4970 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677645 4970 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677649 4970 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677654 4970 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.677662 4970 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677824 4970 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677834 4970 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677841 4970 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677846 4970 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677850 4970 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677854 4970 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677859 4970 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677885 4970 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677890 4970 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677895 4970 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677899 4970 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677905 4970 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677910 4970 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677914 4970 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677918 4970 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677922 4970 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677927 4970 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677931 4970 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677936 4970 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
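Three kinds of feature-gate messages interleave throughout this startup: "unrecognized feature gate: X" for names handed down from the cluster's gate list (evidently OpenShift-side gates this kubelet build does not know, so it warns and ignores them), "Setting GA/deprecated feature gate Y=true" for names it does know but that are already graduated or deprecated, and the recurring "feature gates: {map[...]}" summary of what actually took effect. A sketch that parses such a summary into a dict; the string below is an abridged copy of the summary entries in this log:

    import re

    # Parse the effective-gate summary into {name: bool}. The line is an
    # abridged copy of the "feature gates: {map[...]}" entries above.
    line = ("feature gates: {map[CloudDualStackNodeIPs:true "
            "DisableKubeletCloudCredentialProviders:true KMSv1:true "
            "NodeSwap:false ValidatingAdmissionPolicy:true]}")
    gates = {k: v == "true" for k, v in re.findall(r"(\w+):(true|false)", line)}
    print(gates["KMSv1"], gates["NodeSwap"])  # True False
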
Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677942 4970 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677947 4970 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677952 4970 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677956 4970 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677961 4970 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677966 4970 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677970 4970 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677974 4970 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677978 4970 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677982 4970 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677986 4970 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677990 4970 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677994 4970 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.677998 4970 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.678002 4970 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.678006 4970 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.678010 4970 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.678014 4970 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.678018 4970 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.678022 4970 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.678026 4970 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.678030 4970 feature_gate.go:330] unrecognized feature gate: Example Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.678035 4970 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.678040 4970 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.678044 4970 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.678048 4970 feature_gate.go:330] unrecognized feature 
gate: ClusterAPIInstall Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.678052 4970 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.678056 4970 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.678059 4970 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.678064 4970 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.678068 4970 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.678071 4970 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.678075 4970 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.678079 4970 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.678084 4970 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.678088 4970 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.678092 4970 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.678096 4970 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.678101 4970 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.678104 4970 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.678108 4970 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.678113 4970 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.678117 4970 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.678121 4970 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.678125 4970 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.678130 4970 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.678135 4970 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
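A little further down, once the gate warnings end, the certificate manager reports a certificate expiring 2026-02-24 05:52:08 UTC, a rotation deadline of 2026-01-15 03:22:24 UTC (a jittered point ahead of expiry chosen by the kubelet), and "Waiting 879h15m57...s". That wait is just the deadline minus the log's own timestamp; a worked check:

    from datetime import datetime, timezone

    # Verify the "Waiting 879h15m57..." entry below: rotation deadline minus
    # the current log time (both copied from the journal, fractions dropped).
    now = datetime(2025, 12, 9, 12, 6, 27, tzinfo=timezone.utc)
    deadline = datetime(2026, 1, 15, 3, 22, 24, tzinfo=timezone.utc)
    print(deadline - now)  # 36 days, 15:15:57 -> 36*24+15 = 879h, 15m 57s
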
Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.678140 4970 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.678145 4970 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.678149 4970 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.678153 4970 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.678158 4970 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.678165 4970 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.678826 4970 server.go:940] "Client rotation is on, will bootstrap in background" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.681514 4970 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.681634 4970 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.683106 4970 server.go:997] "Starting client certificate rotation" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.683137 4970 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.683350 4970 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-15 03:22:24.743369675 +0000 UTC Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.683426 4970 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 879h15m57.059946769s for next certificate rotation Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.690318 4970 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.692379 4970 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.702366 4970 log.go:25] "Validated CRI v1 runtime API" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.726794 4970 log.go:25] "Validated CRI v1 image API" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.728586 4970 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.730848 4970 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-09-12-02-00-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.730881 4970 fs.go:134] Filesystem 
partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.743025 4970 manager.go:217] Machine: {Timestamp:2025-12-09 12:06:27.742100409 +0000 UTC m=+0.302581480 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:44b8bf7c-aaf1-4d6f-acd6-22019bebcd7e BootID:e6d3f717-4497-45c3-b697-e6823c6fdf80 Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:cc:03:9c Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:cc:03:9c Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:a3:70:cc Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:2b:94:5e Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:89:47:93 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:47:f1:b8 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:aa:a3:1d:b2:22:03 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:d6:c3:e0:ba:e9:b7 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] 
UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.743529 4970 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
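
[Annotation] The fs.go:133/134 maps (UUID-to-device and mountpoint-to-partition) and the huge manager.go:217 Machine record above are cAdvisor's one-shot hardware inventory: filesystems, disks, NICs, and per-core cache topology. A rough sketch of how a mountpoint-to-fsType map like the one logged can be assembled from /proc/mounts; the real cAdvisor fs package also consults /proc/self/mountinfo and sysfs, so this is only the idea:

    // mounts.go - build a mountpoint -> fsType map from /proc/mounts.
    // Field layout per line: device mountpoint fstype options dump pass.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/proc/mounts")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        partitions := map[string]string{}
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) < 3 {
                continue
            }
            dev, mountpoint, fsType := fields[0], fields[1], fields[2]
            // Keep real block devices plus tmpfs, roughly matching the
            // kinds of entries visible in the fs.go:134 map above.
            if strings.HasPrefix(dev, "/dev/") || fsType == "tmpfs" {
                partitions[mountpoint] = fsType
            }
        }
        fmt.Println(partitions)
    }
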
Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.743644 4970 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.746733 4970 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.747149 4970 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.747231 4970 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.747912 4970 topology_manager.go:138] "Creating topology manager with none policy" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.747928 4970 container_manager_linux.go:303] "Creating device plugin manager" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.748115 4970 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.748361 4970 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.748686 4970 state_mem.go:36] "Initialized new in-memory state store" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.748773 4970 server.go:1245] "Using root directory" path="/var/lib/kubelet" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.749478 4970 kubelet.go:418] "Attempting to sync node with API server" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.749503 4970 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" 
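
[Annotation] The container_manager_linux.go:272 record above dumps the effective node config as JSON; its HardEvictionThresholds are the upstream defaults (imagefs.available<15%, imagefs.inodesFree<5%, memory.available<100Mi, nodefs.available<10%, nodefs.inodesFree<5%). Each threshold carries either an absolute Quantity or a Percentage of capacity. A toy sketch of that quantity-vs-percentage comparison; the type and field names here are invented for illustration, not the kubelet's eviction types:

    // eviction.go - evaluate a hard eviction threshold against observed stats.
    package main

    import "fmt"

    type threshold struct {
        signal     string
        quantity   int64   // bytes; 0 means "use percentage"
        percentage float64 // fraction of capacity, e.g. 0.10
    }

    // exceeded reports whether available has dropped below the threshold,
    // which is the condition that triggers hard eviction.
    func exceeded(t threshold, available, capacity int64) bool {
        limit := t.quantity
        if limit == 0 {
            limit = int64(t.percentage * float64(capacity))
        }
        return available < limit
    }

    func main() {
        memory := threshold{signal: "memory.available", quantity: 100 << 20} // 100Mi
        nodefs := threshold{signal: "nodefs.available", percentage: 0.10}

        fmt.Println(exceeded(memory, 64<<20, 32<<30)) // true: 64Mi < 100Mi
        fmt.Println(exceeded(nodefs, 20<<30, 80<<30)) // false: 20Gi > 8Gi (10% of 80Gi)
    }
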
Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.749529 4970 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.749542 4970 kubelet.go:324] "Adding apiserver pod source" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.749552 4970 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.751948 4970 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.752568 4970 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.753095 4970 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.245:6443: connect: connection refused Dec 09 12:06:27 crc kubenswrapper[4970]: E1209 12:06:27.753404 4970 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.245:6443: connect: connection refused" logger="UnhandledError" Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.753366 4970 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.245:6443: connect: connection refused Dec 09 12:06:27 crc kubenswrapper[4970]: E1209 12:06:27.753518 4970 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.245:6443: connect: connection refused" logger="UnhandledError" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.754287 4970 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.754925 4970 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.754958 4970 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.754971 4970 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.754984 4970 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.755001 4970 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.755010 4970 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.755022 4970 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.755037 4970 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/downward-api" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.755049 4970 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.755059 4970 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.755080 4970 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.755089 4970 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.755119 4970 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.755674 4970 server.go:1280] "Started kubelet" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.756066 4970 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.756199 4970 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.756152 4970 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.245:6443: connect: connection refused Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.756857 4970 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.757737 4970 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.757778 4970 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 09 12:06:27 crc kubenswrapper[4970]: E1209 12:06:27.758007 4970 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 12:06:27 crc systemd[1]: Started Kubernetes Kubelet. 
Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.758855 4970 volume_manager.go:287] "The desired_state_of_world populator starts" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.759068 4970 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.759125 4970 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.758739 4970 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 05:48:12.441621337 +0000 UTC Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.759338 4970 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 905h41m44.682291875s for next certificate rotation Dec 09 12:06:27 crc kubenswrapper[4970]: E1209 12:06:27.760312 4970 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.245:6443: connect: connection refused" interval="200ms" Dec 09 12:06:27 crc kubenswrapper[4970]: E1209 12:06:27.760110 4970 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.245:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187f8a9d7091e4f8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 12:06:27.755631864 +0000 UTC m=+0.316112935,LastTimestamp:2025-12-09 12:06:27.755631864 +0000 UTC m=+0.316112935,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.760894 4970 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.245:6443: connect: connection refused Dec 09 12:06:27 crc kubenswrapper[4970]: E1209 12:06:27.761335 4970 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.245:6443: connect: connection refused" logger="UnhandledError" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.765352 4970 server.go:460] "Adding debug handlers to kubelet server" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.765355 4970 factory.go:55] Registering systemd factory Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.768141 4970 factory.go:221] Registration of the systemd container factory successfully Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.768916 4970 factory.go:153] Registering CRI-O factory Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.769024 4970 factory.go:221] Registration of the crio container factory successfully Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.769846 4970 factory.go:219] Registration of the containerd container factory failed: unable to create containerd 
client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.770533 4970 factory.go:103] Registering Raw factory Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.770556 4970 manager.go:1196] Started watching for new ooms in manager Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.771382 4970 manager.go:319] Starting recovery of all containers Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.779496 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.779624 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.779648 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.779669 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.779688 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.779704 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.779721 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.779741 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.779763 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.779780 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.779797 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.779815 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.779834 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.779864 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.779880 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.779897 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.779912 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.779928 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.779946 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.779962 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.779977 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.779995 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780039 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780057 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780080 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780097 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780115 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780133 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780152 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780176 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780197 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780262 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" 
volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780282 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780298 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780339 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780358 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780374 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780390 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780405 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780422 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780439 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780456 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780472 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780490 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780507 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780526 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780559 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780576 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780594 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780610 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780625 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780644 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780666 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780685 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780702 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780720 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780736 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780753 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780770 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780788 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780803 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780820 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780863 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780878 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780898 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780913 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780927 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780946 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780961 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780977 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.780994 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.781009 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.781023 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.781039 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.781054 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.781069 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.781083 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.781097 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.781115 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.781130 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.781146 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.781163 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.781186 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.781202 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.781220 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.781237 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.781277 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.781293 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.781309 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.781326 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.781343 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.781358 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.781373 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.781388 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.781406 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.781463 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.781479 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.781494 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" 
seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.781511 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.783306 4970 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.783430 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.783536 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.783614 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.784169 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.784301 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.784415 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.784503 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.784564 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.784630 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.784689 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.784789 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.784871 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.784935 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.784999 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.785072 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.785139 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.785198 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.785280 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.785355 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.785441 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.785524 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.785603 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.785701 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.785784 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.785844 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.785901 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.785961 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.786022 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.786083 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.786152 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.786225 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.786298 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.786356 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.786430 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.786491 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.786610 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.786677 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.786738 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.787017 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.787095 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.787159 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.787231 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.787342 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.787426 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.787502 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.787578 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.787637 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.787693 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.787764 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.787825 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.787881 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.787935 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.787989 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.788056 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.788123 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.788188 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.788265 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.788334 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.788398 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.788470 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.788530 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.788590 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.789656 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.789756 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.789838 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.790008 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.790072 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.790194 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.790343 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.790419 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.790485 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.790546 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.790601 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.790665 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.790724 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.790793 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.790854 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.790909 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.790975 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.791039 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.791097 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.791161 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.791219 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.791294 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.791352 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.791417 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.791487 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.791550 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.791608 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.791664 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.791720 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.791788 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.791845 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.791907 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.791965 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.792020 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.792086 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.792229 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.792307 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.792364 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.792437 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.792507 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.792575 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.792645 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.792703 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.792766 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.792835 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.792897 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.792953 4970 reconstruct.go:97] "Volume reconstruction finished" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.793030 4970 reconciler.go:26] "Reconciler: start to sync state" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.793107 4970 manager.go:324] Recovery completed Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.805651 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.808496 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.808592 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.808625 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.809069 4970 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.810957 4970 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.810976 4970 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.810996 4970 state_mem.go:36] "Initialized new in-memory state store" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.811208 4970 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.811268 4970 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.811294 4970 kubelet.go:2335] "Starting kubelet main sync loop" Dec 09 12:06:27 crc kubenswrapper[4970]: E1209 12:06:27.811347 4970 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 09 12:06:27 crc kubenswrapper[4970]: W1209 12:06:27.813346 4970 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.245:6443: connect: connection refused Dec 09 12:06:27 crc kubenswrapper[4970]: E1209 12:06:27.813459 4970 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.245:6443: connect: connection refused" logger="UnhandledError" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.819462 4970 policy_none.go:49] "None policy: Start" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.820289 4970 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.820312 4970 state_mem.go:35] "Initializing new in-memory state store" Dec 09 12:06:27 crc kubenswrapper[4970]: E1209 12:06:27.858969 4970 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 12:06:27 
Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.888189 4970 manager.go:334] "Starting Device Plugin manager"
Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.888445 4970 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.888459 4970 server.go:79] "Starting device plugin registration server"
Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.888933 4970 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.888950 4970 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.889128 4970 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.889270 4970 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.889282 4970 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 09 12:06:27 crc kubenswrapper[4970]: E1209 12:06:27.899520 4970 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.911832 4970 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"]
Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.911906 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.912971 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.913041 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.913051 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.913237 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
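[The "SyncLoop ADD" source="file" entry above shows the five control-plane pods arriving from on-disk static manifests rather than from the API server, which is still refusing connections at this point in the log. A sketch for pulling that pod list out of the same assumed kubelet.log export; the regex is an assumption based on the entry format above.]

# extract_static_pods.py -- sketch; parses the pods=[...] list from the
# "SyncLoop ADD" source="file" entry in the assumed kubelet.log export.
import re

ADD = re.compile(r'"SyncLoop ADD" source="file" pods=\[(?P<pods>[^\]]*)\]')

with open("kubelet.log", encoding="utf-8") as log:
    for line in log:
        match = ADD.search(line)
        if match:
            # Entries look like "namespace/pod-name"; strip the quotes and print.
            for pod in match.group("pods").split(","):
                print(pod.strip().strip('"'))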
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.913614 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.914436 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.914477 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.914494 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.914577 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.914622 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.914635 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.914846 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.915027 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.915093 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.916075 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.916105 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.916119 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.916275 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.916343 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.916380 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.916546 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.916573 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.916586 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.916982 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.917014 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.917025 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.917198 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.917325 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.917385 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.917659 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.917720 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.917731 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.918262 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.918290 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.918301 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.918377 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.918392 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.918400 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.918638 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.918690 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.919453 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.919495 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.919513 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:27 crc kubenswrapper[4970]: E1209 12:06:27.961316 4970 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.245:6443: connect: connection refused" interval="400ms" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.990210 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.991436 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.991469 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.991480 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.991503 4970 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 09 12:06:27 crc kubenswrapper[4970]: E1209 12:06:27.992020 4970 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.245:6443: connect: connection refused" node="crc" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.994540 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.994590 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.994643 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.994664 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.994730 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.994796 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.994838 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.994891 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.994924 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.994949 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.994975 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.994999 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.995031 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.995056 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 12:06:27 crc kubenswrapper[4970]: I1209 12:06:27.995077 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.095980 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.096065 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.096087 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.096109 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.096132 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.096155 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.096292 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.096335 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.096342 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.096320 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.096438 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.096421 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.096508 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.096552 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.096480 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.096636 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.096602 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 12:06:28 crc kubenswrapper[4970]: 
Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.096648 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.096712 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.096805 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.096832 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.096815 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.096881 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.096898 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.096913 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.096928 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.097037 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.097047 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.097404 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.192739 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.195358 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.195399 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.195412 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.195437 4970 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 09 12:06:28 crc kubenswrapper[4970]: E1209 12:06:28.195911 4970 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.245:6443: connect: connection refused" node="crc" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.243095 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 12:06:28 crc kubenswrapper[4970]: W1209 12:06:28.265750 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-0f3399d8588808beac1de571b5abcbf8ff7980137250db268227dfc3050860f9 WatchSource:0}: Error finding container 0f3399d8588808beac1de571b5abcbf8ff7980137250db268227dfc3050860f9: Status 404 returned error can't find the container with id 0f3399d8588808beac1de571b5abcbf8ff7980137250db268227dfc3050860f9 Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.266290 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 12:06:28 crc kubenswrapper[4970]: W1209 12:06:28.284456 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-dcd94913e3312e7905d7a9604fc78c6f1bcf99541b8c588ebcdbc699a6326599 WatchSource:0}: Error finding container dcd94913e3312e7905d7a9604fc78c6f1bcf99541b8c588ebcdbc699a6326599: Status 404 returned error can't find the container with id dcd94913e3312e7905d7a9604fc78c6f1bcf99541b8c588ebcdbc699a6326599 Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.285110 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.299666 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.304662 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:06:28 crc kubenswrapper[4970]: W1209 12:06:28.317109 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-89a323c19b5d7038f18b711aab0fdc3cbe4cb1d71007922be96448e638a16da8 WatchSource:0}: Error finding container 89a323c19b5d7038f18b711aab0fdc3cbe4cb1d71007922be96448e638a16da8: Status 404 returned error can't find the container with id 89a323c19b5d7038f18b711aab0fdc3cbe4cb1d71007922be96448e638a16da8 Dec 09 12:06:28 crc kubenswrapper[4970]: W1209 12:06:28.324686 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-6ff2e0bc1046ee223d9075a1cbcc63680e037960369cd99ade9246066fb60dfb WatchSource:0}: Error finding container 6ff2e0bc1046ee223d9075a1cbcc63680e037960369cd99ade9246066fb60dfb: Status 404 returned error can't find the container with id 6ff2e0bc1046ee223d9075a1cbcc63680e037960369cd99ade9246066fb60dfb Dec 09 12:06:28 crc kubenswrapper[4970]: E1209 12:06:28.362403 4970 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.245:6443: connect: connection refused" interval="800ms" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.596767 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.597987 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.598029 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.598044 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.598075 4970 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 09 12:06:28 crc kubenswrapper[4970]: E1209 12:06:28.598588 4970 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.245:6443: connect: connection refused" node="crc" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.758009 4970 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.245:6443: connect: connection refused Dec 09 12:06:28 crc kubenswrapper[4970]: W1209 12:06:28.788860 4970 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.245:6443: connect: connection refused Dec 09 12:06:28 crc kubenswrapper[4970]: E1209 12:06:28.788969 4970 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.245:6443: connect: connection refused" logger="UnhandledError" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.819116 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7"} Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.819315 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0f3399d8588808beac1de571b5abcbf8ff7980137250db268227dfc3050860f9"} Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.820787 4970 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e" exitCode=0 Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.820867 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e"} Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.820907 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6ff2e0bc1046ee223d9075a1cbcc63680e037960369cd99ade9246066fb60dfb"} Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.821007 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.822498 4970 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a" exitCode=0 Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.822589 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a"} Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.822613 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"89a323c19b5d7038f18b711aab0fdc3cbe4cb1d71007922be96448e638a16da8"} Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.822733 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.823576 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.823606 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.823616 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.824217 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.824283 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.824300 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.825688 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.826504 4970 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="223c6d6e4fd5c360440eb4aad1917d2247ddca60018d1874bc150bd1e19d4861" exitCode=0 Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.826560 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"223c6d6e4fd5c360440eb4aad1917d2247ddca60018d1874bc150bd1e19d4861"} Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.826588 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"e6fa0547444bfd22b100f8e7b6b6efecfd683a5519c495596da800ad2c4f8e6c"} Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.826656 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.826688 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.826705 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.826716 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.827793 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.827815 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.827829 4970 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.829422 4970 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="d3b52637f1f6d30bd6990b616149e829f5bce7926ee8ef9ae796ec3c52dd67e4" exitCode=0 Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.829466 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"d3b52637f1f6d30bd6990b616149e829f5bce7926ee8ef9ae796ec3c52dd67e4"} Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.829761 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"dcd94913e3312e7905d7a9604fc78c6f1bcf99541b8c588ebcdbc699a6326599"} Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.829944 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.831078 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.831271 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:28 crc kubenswrapper[4970]: I1209 12:06:28.831375 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:29 crc kubenswrapper[4970]: E1209 12:06:29.163008 4970 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.245:6443: connect: connection refused" interval="1.6s" Dec 09 12:06:29 crc kubenswrapper[4970]: W1209 12:06:29.166876 4970 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.245:6443: connect: connection refused Dec 09 12:06:29 crc kubenswrapper[4970]: E1209 12:06:29.166989 4970 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.245:6443: connect: connection refused" logger="UnhandledError" Dec 09 12:06:29 crc kubenswrapper[4970]: W1209 12:06:29.176344 4970 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.245:6443: connect: connection refused Dec 09 12:06:29 crc kubenswrapper[4970]: E1209 12:06:29.176416 4970 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.245:6443: connect: connection refused" logger="UnhandledError" Dec 09 12:06:29 crc kubenswrapper[4970]: W1209 12:06:29.248609 4970 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.245:6443: connect: connection refused Dec 09 12:06:29 crc kubenswrapper[4970]: E1209 12:06:29.248680 4970 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.245:6443: connect: connection refused" logger="UnhandledError" Dec 09 12:06:29 crc kubenswrapper[4970]: I1209 12:06:29.399200 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:06:29 crc kubenswrapper[4970]: I1209 12:06:29.400504 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:29 crc kubenswrapper[4970]: I1209 12:06:29.400556 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:29 crc kubenswrapper[4970]: I1209 12:06:29.400571 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:29 crc kubenswrapper[4970]: I1209 12:06:29.400609 4970 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 09 12:06:29 crc kubenswrapper[4970]: E1209 12:06:29.401121 4970 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.245:6443: connect: connection refused" node="crc" Dec 09 12:06:29 crc kubenswrapper[4970]: I1209 12:06:29.839405 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039"} Dec 09 12:06:29 crc kubenswrapper[4970]: I1209 12:06:29.839455 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7"} Dec 09 12:06:29 crc kubenswrapper[4970]: I1209 12:06:29.839468 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379"} Dec 09 12:06:29 crc kubenswrapper[4970]: I1209 12:06:29.839478 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d"} Dec 09 12:06:29 crc kubenswrapper[4970]: I1209 12:06:29.841459 4970 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed" exitCode=0 Dec 09 12:06:29 crc kubenswrapper[4970]: I1209 12:06:29.841522 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed"} 
Dec 09 12:06:29 crc kubenswrapper[4970]: I1209 12:06:29.841631 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 12:06:29 crc kubenswrapper[4970]: I1209 12:06:29.842467 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:06:29 crc kubenswrapper[4970]: I1209 12:06:29.842494 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:06:29 crc kubenswrapper[4970]: I1209 12:06:29.842505 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:06:29 crc kubenswrapper[4970]: I1209 12:06:29.843974 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"2e32673ab8362205fab5b36abdefae2517d55a8220e99bb8f2efe3ceb8cad0ba"}
Dec 09 12:06:29 crc kubenswrapper[4970]: I1209 12:06:29.844041 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 12:06:29 crc kubenswrapper[4970]: I1209 12:06:29.844731 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:06:29 crc kubenswrapper[4970]: I1209 12:06:29.844751 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:06:29 crc kubenswrapper[4970]: I1209 12:06:29.844762 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:06:29 crc kubenswrapper[4970]: I1209 12:06:29.847826 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1931f2ab695ac019886e253d6ff1ea32af43da13f3b9946b9087b2386ea05361"}
Dec 09 12:06:29 crc kubenswrapper[4970]: I1209 12:06:29.847878 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"56d776ac4b03c9fe2c77bb3194ce3e407b542dfa90fd7202cd6f406af4103b9d"}
Dec 09 12:06:29 crc kubenswrapper[4970]: I1209 12:06:29.847892 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4dd30e293f9fd27aef372f8f4cba8a786a532a57b089862054f0371fff1cb3b0"}
Dec 09 12:06:29 crc kubenswrapper[4970]: I1209 12:06:29.847962 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 12:06:29 crc kubenswrapper[4970]: I1209 12:06:29.848963 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:06:29 crc kubenswrapper[4970]: I1209 12:06:29.848987 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:06:29 crc kubenswrapper[4970]: I1209 12:06:29.848997 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:06:29 crc kubenswrapper[4970]: I1209 12:06:29.851184 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a"}
Dec 09 12:06:29 crc kubenswrapper[4970]: I1209 12:06:29.851237 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e"}
Dec 09 12:06:29 crc kubenswrapper[4970]: I1209 12:06:29.851277 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce"}
Dec 09 12:06:29 crc kubenswrapper[4970]: I1209 12:06:29.851374 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 12:06:29 crc kubenswrapper[4970]: I1209 12:06:29.852225 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:06:29 crc kubenswrapper[4970]: I1209 12:06:29.852267 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:06:29 crc kubenswrapper[4970]: I1209 12:06:29.852276 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:06:30 crc kubenswrapper[4970]: I1209 12:06:30.856337 4970 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a" exitCode=0
Dec 09 12:06:30 crc kubenswrapper[4970]: I1209 12:06:30.856450 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a"}
Dec 09 12:06:30 crc kubenswrapper[4970]: I1209 12:06:30.856882 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 12:06:30 crc kubenswrapper[4970]: I1209 12:06:30.858295 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:06:30 crc kubenswrapper[4970]: I1209 12:06:30.858331 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:06:30 crc kubenswrapper[4970]: I1209 12:06:30.858346 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:06:30 crc kubenswrapper[4970]: I1209 12:06:30.863052 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc"}
Dec 09 12:06:30 crc kubenswrapper[4970]: I1209 12:06:30.863152 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 12:06:30 crc kubenswrapper[4970]: I1209 12:06:30.863149 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 12:06:30 crc kubenswrapper[4970]: I1209 12:06:30.863214 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 09 12:06:30 crc kubenswrapper[4970]: I1209 12:06:30.863547 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 12:06:30 crc kubenswrapper[4970]: I1209 12:06:30.865157 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:06:30 crc kubenswrapper[4970]: I1209 12:06:30.865272 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:06:30 crc kubenswrapper[4970]: I1209 12:06:30.865302 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:06:30 crc kubenswrapper[4970]: I1209 12:06:30.864958 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:06:30 crc kubenswrapper[4970]: I1209 12:06:30.866706 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:06:30 crc kubenswrapper[4970]: I1209 12:06:30.866731 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:06:30 crc kubenswrapper[4970]: I1209 12:06:30.867676 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:06:30 crc kubenswrapper[4970]: I1209 12:06:30.867737 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:06:30 crc kubenswrapper[4970]: I1209 12:06:30.867762 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:06:31 crc kubenswrapper[4970]: I1209 12:06:31.001842 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 12:06:31 crc kubenswrapper[4970]: I1209 12:06:31.003197 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:06:31 crc kubenswrapper[4970]: I1209 12:06:31.003291 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:06:31 crc kubenswrapper[4970]: I1209 12:06:31.003314 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:06:31 crc kubenswrapper[4970]: I1209 12:06:31.003351 4970 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 09 12:06:31 crc kubenswrapper[4970]: I1209 12:06:31.814826 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 09 12:06:31 crc kubenswrapper[4970]: I1209 12:06:31.819891 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 09 12:06:31 crc kubenswrapper[4970]: I1209 12:06:31.873273 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f83e4a0f5cdc0dbab781645e3af71029f89d6e26f881957510b76c4023d49d93"}
Dec 09 12:06:31 crc kubenswrapper[4970]: I1209 12:06:31.873330 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"968ef513b108b5e6469f2b11033f3c2af6276809b44f88da6dd279ef5c917566"}
Dec 09 12:06:31 crc kubenswrapper[4970]: I1209 12:06:31.873346 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fe038bd99b053264014f6bdcf6e552df19aeaf0eefb1cad95087668edc14e197"}
Dec 09 12:06:31 crc kubenswrapper[4970]: I1209 12:06:31.873360 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9b06c1523a6e3fc383e18aabc13379866e891da9e5f59ccd8098a530e206c780"}
Dec 09 12:06:31 crc kubenswrapper[4970]: I1209 12:06:31.873376 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 12:06:31 crc kubenswrapper[4970]: I1209 12:06:31.873408 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 12:06:31 crc kubenswrapper[4970]: I1209 12:06:31.873442 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 12:06:31 crc kubenswrapper[4970]: I1209 12:06:31.873486 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 12:06:31 crc kubenswrapper[4970]: I1209 12:06:31.874772 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:06:31 crc kubenswrapper[4970]: I1209 12:06:31.874806 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:06:31 crc kubenswrapper[4970]: I1209 12:06:31.874807 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:06:31 crc kubenswrapper[4970]: I1209 12:06:31.874819 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:06:31 crc kubenswrapper[4970]: I1209 12:06:31.874822 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:06:31 crc kubenswrapper[4970]: I1209 12:06:31.874836 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:06:31 crc kubenswrapper[4970]: I1209 12:06:31.874848 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:06:31 crc kubenswrapper[4970]: I1209 12:06:31.874861 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:06:31 crc kubenswrapper[4970]: I1209 12:06:31.874850 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:06:32 crc kubenswrapper[4970]: I1209 12:06:32.087350 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 09 12:06:32 crc kubenswrapper[4970]: I1209 12:06:32.683377 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 12:06:32 crc kubenswrapper[4970]: I1209 12:06:32.879574 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9a0e892d8f1c25b644b4ec6779f1c1fbf9605d352003feb593ddf48f0b2fe7d1"}
Dec 09 12:06:32 crc kubenswrapper[4970]: I1209 12:06:32.879637 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 12:06:32 crc kubenswrapper[4970]: I1209 12:06:32.879687 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 12:06:32 crc kubenswrapper[4970]: I1209 12:06:32.879755 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 12:06:32 crc kubenswrapper[4970]: I1209 12:06:32.880822 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:06:32 crc kubenswrapper[4970]: I1209 12:06:32.880835 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:06:32 crc kubenswrapper[4970]: I1209 12:06:32.880870 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:06:32 crc kubenswrapper[4970]: I1209 12:06:32.880883 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:06:32 crc kubenswrapper[4970]: I1209 12:06:32.880852 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:06:32 crc kubenswrapper[4970]: I1209 12:06:32.880956 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:06:32 crc kubenswrapper[4970]: I1209 12:06:32.880972 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:06:32 crc kubenswrapper[4970]: I1209 12:06:32.881001 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:06:32 crc kubenswrapper[4970]: I1209 12:06:32.881018 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:06:33 crc kubenswrapper[4970]: I1209 12:06:33.206598 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 12:06:33 crc kubenswrapper[4970]: I1209 12:06:33.885428 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 12:06:33 crc kubenswrapper[4970]: I1209 12:06:33.886115 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 12:06:33 crc kubenswrapper[4970]: I1209 12:06:33.886683 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 12:06:33 crc kubenswrapper[4970]: I1209 12:06:33.887143 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:06:33 crc kubenswrapper[4970]: I1209 12:06:33.887149 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:06:33 crc kubenswrapper[4970]: I1209 12:06:33.887200 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:06:33 crc kubenswrapper[4970]: I1209 12:06:33.887181 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:06:33 crc kubenswrapper[4970]: I1209 12:06:33.888037 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:06:33 crc kubenswrapper[4970]: I1209 12:06:33.888135 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:06:33 crc kubenswrapper[4970]: I1209 12:06:33.888153 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:06:33 crc kubenswrapper[4970]: I1209 12:06:33.888163 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:06:33 crc kubenswrapper[4970]: I1209 12:06:33.888058 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:06:34 crc kubenswrapper[4970]: I1209 12:06:34.262669 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 09 12:06:34 crc kubenswrapper[4970]: I1209 12:06:34.887215 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 12:06:34 crc kubenswrapper[4970]: I1209 12:06:34.887268 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 12:06:34 crc kubenswrapper[4970]: I1209 12:06:34.888118 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:06:34 crc kubenswrapper[4970]: I1209 12:06:34.888143 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:06:34 crc kubenswrapper[4970]: I1209 12:06:34.888152 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:06:34 crc kubenswrapper[4970]: I1209 12:06:34.888242 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:06:34 crc kubenswrapper[4970]: I1209 12:06:34.888292 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:06:34 crc kubenswrapper[4970]: I1209 12:06:34.888302 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:06:35 crc kubenswrapper[4970]: I1209 12:06:35.270174 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Dec 09 12:06:35 crc kubenswrapper[4970]: I1209 12:06:35.270487 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 12:06:35 crc kubenswrapper[4970]: I1209 12:06:35.271607 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:06:35 crc kubenswrapper[4970]: I1209 12:06:35.271644 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:06:35 crc kubenswrapper[4970]: I1209 12:06:35.271656 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:06:37 crc kubenswrapper[4970]: I1209 12:06:37.263120 4970 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 09 12:06:37 crc kubenswrapper[4970]: I1209 12:06:37.263214 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 09 12:06:37 crc kubenswrapper[4970]: E1209 12:06:37.900277 4970 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 09 12:06:38 crc kubenswrapper[4970]: I1209 12:06:38.481567 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Dec 09 12:06:38 crc kubenswrapper[4970]: I1209 12:06:38.481867 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 12:06:38 crc kubenswrapper[4970]: I1209 12:06:38.483496 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:06:38 crc kubenswrapper[4970]: I1209 12:06:38.483555 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:06:38 crc kubenswrapper[4970]: I1209 12:06:38.483588 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:06:39 crc kubenswrapper[4970]: I1209 12:06:39.168060 4970 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Dec 09 12:06:39 crc kubenswrapper[4970]: I1209 12:06:39.168135 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Dec 09 12:06:39 crc kubenswrapper[4970]: I1209 12:06:39.647009 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 09 12:06:39 crc kubenswrapper[4970]: I1209 12:06:39.647195 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 12:06:39 crc kubenswrapper[4970]: I1209 12:06:39.648283 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:06:39 crc kubenswrapper[4970]: I1209 12:06:39.648336 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:06:39 crc kubenswrapper[4970]: I1209 12:06:39.648348 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:06:39 crc kubenswrapper[4970]: I1209 12:06:39.652639 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 09 12:06:39 crc kubenswrapper[4970]: I1209 12:06:39.758028 4970 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Dec 09 12:06:39 crc kubenswrapper[4970]: I1209 12:06:39.898733 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 12:06:39 crc kubenswrapper[4970]: I1209 12:06:39.899874 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:06:39 crc kubenswrapper[4970]: I1209 12:06:39.899916 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:06:39 crc kubenswrapper[4970]: I1209 12:06:39.899926 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:06:40 crc kubenswrapper[4970]: E1209 12:06:40.228237 4970 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.187f8a9d7091e4f8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 12:06:27.755631864 +0000 UTC m=+0.316112935,LastTimestamp:2025-12-09 12:06:27.755631864 +0000 UTC m=+0.316112935,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 09 12:06:40 crc kubenswrapper[4970]: W1209 12:06:40.605062 4970 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Dec 09 12:06:40 crc kubenswrapper[4970]: I1209 12:06:40.605173 4970 trace.go:236] Trace[1142679600]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Dec-2025 12:06:30.603) (total time: 10001ms):
Dec 09 12:06:40 crc kubenswrapper[4970]: Trace[1142679600]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (12:06:40.605)
Dec 09 12:06:40 crc kubenswrapper[4970]: Trace[1142679600]: [10.001603254s] [10.001603254s] END
Dec 09 12:06:40 crc kubenswrapper[4970]: E1209 12:06:40.605196 4970 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Dec 09 12:06:40 crc kubenswrapper[4970]: E1209 12:06:40.764881 4970 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s"
Dec 09 12:06:40 crc kubenswrapper[4970]: W1209 12:06:40.948189 4970 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Dec 09 12:06:40 crc kubenswrapper[4970]: I1209 12:06:40.948299 4970 trace.go:236] Trace[562573103]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Dec-2025 12:06:30.946) (total time: 10001ms):
Dec 09 12:06:40 crc kubenswrapper[4970]: Trace[562573103]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (12:06:40.948)
Dec 09 12:06:40 crc kubenswrapper[4970]: Trace[562573103]: [10.001737864s] [10.001737864s] END
Dec 09 12:06:40 crc kubenswrapper[4970]: E1209 12:06:40.948323 4970 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Dec 09 12:06:40 crc kubenswrapper[4970]: W1209 12:06:40.972189 4970 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout
Dec 09 12:06:40 crc kubenswrapper[4970]: I1209 12:06:40.972320 4970 trace.go:236] Trace[1860405570]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Dec-2025 12:06:30.970) (total time: 10001ms):
Dec 09 12:06:40 crc kubenswrapper[4970]: Trace[1860405570]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (12:06:40.972)
Dec 09 12:06:40 crc kubenswrapper[4970]: Trace[1860405570]: [10.001355579s] [10.001355579s] END
Dec 09 12:06:40 crc kubenswrapper[4970]: E1209 12:06:40.972350 4970 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Dec 09 12:06:41 crc kubenswrapper[4970]: E1209 12:06:41.004874 4970 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc"
Dec 09 12:06:41 crc kubenswrapper[4970]: I1209 12:06:41.790427 4970 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Dec 09 12:06:41 crc kubenswrapper[4970]: I1209 12:06:41.790522 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Dec 09 12:06:41 crc kubenswrapper[4970]: I1209 12:06:41.801776 4970 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Dec 09 12:06:41 crc kubenswrapper[4970]: I1209 12:06:41.801836 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Dec 09 12:06:43 crc kubenswrapper[4970]: I1209 12:06:43.215599 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 12:06:43 crc kubenswrapper[4970]: I1209 12:06:43.215830 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 12:06:43 crc kubenswrapper[4970]: I1209 12:06:43.217520 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:06:43 crc kubenswrapper[4970]: I1209 12:06:43.217570 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:06:43 crc kubenswrapper[4970]: I1209 12:06:43.217589 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:06:43 crc kubenswrapper[4970]: I1209 12:06:43.221880 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 09 12:06:43 crc kubenswrapper[4970]: I1209 12:06:43.909445 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 12:06:43 crc kubenswrapper[4970]: I1209 12:06:43.911022 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:06:43 crc kubenswrapper[4970]: I1209 12:06:43.911144 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:06:43 crc kubenswrapper[4970]: I1209 12:06:43.911174 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:06:44 crc kubenswrapper[4970]: I1209 12:06:44.205511 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 09 12:06:44 crc kubenswrapper[4970]: I1209 12:06:44.206668 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:06:44 crc kubenswrapper[4970]: I1209 12:06:44.206733 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:06:44 crc kubenswrapper[4970]: I1209 12:06:44.206750 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:06:44 crc kubenswrapper[4970]: I1209 12:06:44.206787 4970 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 09 12:06:44 crc kubenswrapper[4970]: E1209 12:06:44.211332 4970 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Dec 09 12:06:45 crc kubenswrapper[4970]: I1209 12:06:45.107687 4970 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Dec 09 12:06:45 crc kubenswrapper[4970]: I1209 12:06:45.758803 4970 apiserver.go:52] "Watching apiserver"
Dec 09 12:06:45 crc kubenswrapper[4970]: I1209 12:06:45.762809 4970 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Dec 09 12:06:45 crc kubenswrapper[4970]: I1209 12:06:45.763194 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"]
Dec 09 12:06:45 crc kubenswrapper[4970]: I1209 12:06:45.763763 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 09 12:06:45 crc kubenswrapper[4970]: I1209 12:06:45.763823 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 12:06:45 crc kubenswrapper[4970]: I1209 12:06:45.764156 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 12:06:45 crc kubenswrapper[4970]: E1209 12:06:45.764294 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 12:06:45 crc kubenswrapper[4970]: I1209 12:06:45.764329 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 09 12:06:45 crc kubenswrapper[4970]: E1209 12:06:45.764366 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 12:06:45 crc kubenswrapper[4970]: I1209 12:06:45.764431 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 09 12:06:45 crc kubenswrapper[4970]: I1209 12:06:45.764964 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 12:06:45 crc kubenswrapper[4970]: E1209 12:06:45.765074 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 12:06:45 crc kubenswrapper[4970]: I1209 12:06:45.768334 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Dec 09 12:06:45 crc kubenswrapper[4970]: I1209 12:06:45.768445 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Dec 09 12:06:45 crc kubenswrapper[4970]: I1209 12:06:45.768507 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Dec 09 12:06:45 crc kubenswrapper[4970]: I1209 12:06:45.768716 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Dec 09 12:06:45 crc kubenswrapper[4970]: I1209 12:06:45.768772 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Dec 09 12:06:45 crc kubenswrapper[4970]: I1209 12:06:45.768779 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Dec 09 12:06:45 crc kubenswrapper[4970]: I1209 12:06:45.768865 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Dec 09 12:06:45 crc kubenswrapper[4970]: I1209 12:06:45.768881 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Dec 09 12:06:45 crc kubenswrapper[4970]: I1209 12:06:45.769101 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Dec 09 12:06:45 crc kubenswrapper[4970]: I1209 12:06:45.785195 4970 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Dec 09 12:06:45 crc kubenswrapper[4970]: I1209 12:06:45.809376 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 09 12:06:45 crc kubenswrapper[4970]: I1209 12:06:45.827978 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 12:06:45 crc kubenswrapper[4970]: I1209 12:06:45.845376 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 12:06:45 crc kubenswrapper[4970]: I1209 12:06:45.860962 4970 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 09 12:06:45 crc kubenswrapper[4970]: I1209 12:06:45.861530 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 12:06:45 crc kubenswrapper[4970]: I1209 12:06:45.875562 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 12:06:45 crc kubenswrapper[4970]: I1209 12:06:45.886446 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 12:06:45 crc kubenswrapper[4970]: I1209 12:06:45.895495 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.353397 4970 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.803364 4970 trace.go:236] Trace[1680247394]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Dec-2025 12:06:32.104) (total time: 14698ms): Dec 09 12:06:46 crc kubenswrapper[4970]: Trace[1680247394]: ---"Objects listed" error: 14698ms (12:06:46.803) Dec 09 12:06:46 crc kubenswrapper[4970]: Trace[1680247394]: [14.698842837s] [14.698842837s] END Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.803406 4970 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.811784 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:46 crc kubenswrapper[4970]: E1209 12:06:46.812024 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.815903 4970 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.848857 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.851329 4970 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:55314->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.851402 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:55314->192.168.126.11:17697: read: connection reset by peer" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.851418 4970 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:55308->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.851511 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:55308->192.168.126.11:17697: read: connection reset by peer" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.851983 4970 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.852050 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.856965 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.864586 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.870446 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.878215 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.889660 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.900619 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.911216 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.916117 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.916145 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.916162 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.916177 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.916192 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.916206 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.916239 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 
09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.916268 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.916284 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.916297 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.916311 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.916325 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.916341 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.916354 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.916369 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.916384 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.916398 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod 
\"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.916415 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.916447 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.916461 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.916491 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.916506 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.916521 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.916536 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.916563 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.916577 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.916592 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.916618 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.916634 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.916649 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.916696 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.916692 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.916712 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.916852 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.916889 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.916926 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.916961 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.916992 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.917022 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.917053 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.917084 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.917115 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") 
pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.917144 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.917218 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.917260 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.917286 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.917329 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.917373 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.917487 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.917526 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.917652 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.917674 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.917716 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.917727 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.917765 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.917761 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.917913 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.917919 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.917998 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.918069 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.918192 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.918217 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.918120 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.918343 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.918414 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.918428 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.918478 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.918492 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.918563 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.918597 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: E1209 12:06:46.918726 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:06:47.41871144 +0000 UTC m=+19.979192491 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.918871 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.918898 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.918898 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.918981 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.919059 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.919086 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.919084 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.919108 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.919084 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.919200 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.919224 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.919285 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.919471 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.919531 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.919542 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.919658 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.919705 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.919730 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.919770 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.920304 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.921108 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.921650 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.922143 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.922628 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.922662 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.922694 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.921571 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.921701 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.922757 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.922038 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.922544 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.922858 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.922919 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.923011 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.923044 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.923472 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.924549 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.924548 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.924626 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.924632 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.924897 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.925268 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.926575 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.926635 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.926672 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.926704 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.926729 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.926755 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.926784 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.926820 
4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.926842 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.926869 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.926897 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.926920 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.926948 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.926978 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.927003 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.927029 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.927056 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.927085 4970 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.927108 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.927135 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.927162 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.927184 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.927211 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.927239 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.927278 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.927303 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.927330 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 
12:06:46.927357 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.927379 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.927407 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.927434 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.927459 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.927489 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.927519 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.927546 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.927573 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.927600 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: 
\"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.927632 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.927656 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.927695 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.927728 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.927751 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.927783 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.927812 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.927841 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.927866 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.927895 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod 
\"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.927923 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.927948 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.927977 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.928006 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.928029 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.928057 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.928084 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.928111 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.928135 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.928163 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod 
\"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.928191 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.928215 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.928439 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.928474 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.928691 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.928720 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.928748 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.928775 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.928813 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.928852 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod 
\"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.928882 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.928906 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.928950 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.928984 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.929011 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.929042 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.929073 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.929096 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.929125 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.929172 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: 
\"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.929203 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.929228 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.929270 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.929302 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.929328 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.929358 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.929387 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.929418 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.929444 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.929475 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod 
\"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.929504 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.929531 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.929560 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.929588 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.929612 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.929643 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.929895 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.929962 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.929995 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.930036 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.930063 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.930094 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.930124 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.930151 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.930173 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.930195 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.930218 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.930237 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.930281 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.930311 4970 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.930329 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.930351 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.930371 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.930390 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.930407 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.930429 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.930450 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.930472 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.930499 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.930529 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.930560 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.930592 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.930622 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.930623 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.930657 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.930653 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.931448 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.931463 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.931506 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.931544 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.931571 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.931604 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.931637 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.931662 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.931691 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.931721 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.931748 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.931739 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") 
pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.931777 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.931806 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.931836 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.931861 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.931961 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.932017 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.932421 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.932443 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.932998 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.933085 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.933143 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.933208 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.935321 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.935442 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.935482 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.935517 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.935553 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.935585 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.935613 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.933205 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.933522 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.933741 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.933831 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.935732 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.933953 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.934076 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.934632 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.934688 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.935148 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.935760 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.935175 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.935392 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.935508 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.933844 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.936112 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.936225 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.936310 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.936389 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.936404 4970 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.936454 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.936490 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.936552 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.936586 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.936649 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.936815 4970 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.936834 4970 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.936882 4970 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.936896 4970 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.936911 4970 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.936952 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.936975 4970 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.936994 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937014 4970 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937036 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937052 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937066 4970 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937079 4970 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937098 4970 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937112 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937126 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937141 4970 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937179 4970 reconciler_common.go:293] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937193 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937208 4970 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937222 4970 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937239 4970 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937272 4970 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937286 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937313 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937327 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937341 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937355 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937375 4970 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937390 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937404 4970 
reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937418 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937435 4970 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937449 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937476 4970 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937496 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937510 4970 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937525 4970 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937539 4970 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937557 4970 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937571 4970 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937657 4970 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937677 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 
12:06:46.937699 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937716 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937730 4970 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937744 4970 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937762 4970 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937776 4970 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937792 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.938079 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.938100 4970 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.938114 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.938128 4970 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.938145 4970 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.938160 4970 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:46 crc 
kubenswrapper[4970]: I1209 12:06:46.938174 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.938189 4970 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.938206 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.938220 4970 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.938234 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.938270 4970 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.938284 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.938298 4970 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.938312 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.938328 4970 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.938341 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.938355 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.938369 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.938387 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.938401 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.938416 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.938430 4970 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.938447 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.938465 4970 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.938478 4970 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.938494 4970 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.938509 4970 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.938522 4970 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.939459 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.953497 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.953605 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.956902 4970 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc" exitCode=255
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.957267 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.958181 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.936400 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.936521 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.936882 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.936911 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.958280 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937084 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937317 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937391 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937506 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.937556 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.938013 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.938148 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.938179 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.938421 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.938572 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.938632 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: E1209 12:06:46.938659 4970 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.938836 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.938979 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.939738 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.940003 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.940411 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.940761 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.941172 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: E1209 12:06:46.941234 4970 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.941536 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.941894 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.942191 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.942865 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.943008 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.943308 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.943531 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.943585 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.943978 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.951769 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.951817 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.952041 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.952232 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.952298 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.952354 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.952566 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.952590 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.952643 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.952729 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.952840 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.952845 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.953095 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.953410 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.953440 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.954196 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.954320 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.954696 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.954938 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.955024 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.955188 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.955196 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.955217 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.955393 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.955406 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.955506 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.955795 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.955823 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.955832 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.955968 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.955999 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.956055 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.956141 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.956304 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.956321 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.956631 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.956646 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.956666 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.956668 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.956680 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.956683 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.957072 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.957781 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.957886 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.958561 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.959640 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.959925 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: E1209 12:06:46.960914 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.960974 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.961262 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.962458 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.963037 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.965785 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.966465 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.966895 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.967133 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.967343 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.967382 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: E1209 12:06:46.967618 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.968368 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.967858 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.967875 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.968056 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.968105 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: E1209 12:06:46.968420 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 09 12:06:46 crc kubenswrapper[4970]: E1209 12:06:46.968471 4970 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.968506 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: E1209 12:06:46.968528 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 12:06:47.46851044 +0000 UTC m=+20.028991501 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.958350 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc"}
Dec 09 12:06:46 crc kubenswrapper[4970]: E1209 12:06:46.968555 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 12:06:47.468543131 +0000 UTC m=+20.029024182 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 09 12:06:46 crc kubenswrapper[4970]: E1209 12:06:46.968593 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 12:06:47.468585602 +0000 UTC m=+20.029066653 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.966650 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 09 12:06:46 crc kubenswrapper[4970]: E1209 12:06:46.968775 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 09 12:06:46 crc kubenswrapper[4970]: E1209 12:06:46.968786 4970 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 09 12:06:46 crc kubenswrapper[4970]: E1209 12:06:46.968808 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 12:06:47.468801828 +0000 UTC m=+20.029282879 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.968902 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.968966 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.969341 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.969553 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.971240 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.971582 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.971780 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.972399 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.972746 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.972787 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.973110 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.973277 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.973804 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.974205 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.976151 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.976476 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.977518 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.977781 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.982656 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.988735 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.989409 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.992083 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 12:06:46 crc kubenswrapper[4970]: I1209 12:06:46.993700 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.005400 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.007488 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.010677 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.015939 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.018521 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.027883 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039263 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039307 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039364 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039376 4970 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039385 4970 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039393 4970 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039401 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039411 4970 reconciler_common.go:293] 
"Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039419 4970 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039427 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039435 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039443 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039453 4970 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039461 4970 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039469 4970 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039498 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039507 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039515 4970 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039523 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039531 4970 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039543 4970 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039563 4970 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039572 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039580 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039589 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039597 4970 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039605 4970 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039614 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039623 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039631 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039640 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039654 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039662 4970 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039670 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: 
\"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039678 4970 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039687 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039703 4970 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039711 4970 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039721 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039729 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039743 4970 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039758 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039766 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039774 4970 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039783 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039792 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039802 4970 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039812 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039821 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039830 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039838 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039846 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039857 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039865 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039874 4970 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039881 4970 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039890 4970 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039899 4970 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039907 4970 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039915 4970 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039925 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039933 4970 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039941 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039950 4970 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039958 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039970 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039979 4970 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039987 4970 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.039995 4970 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040005 4970 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040013 4970 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040021 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040029 4970 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040037 4970 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040045 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040052 4970 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040061 4970 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040069 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040076 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040084 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040091 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040099 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040107 4970 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040114 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040122 4970 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040130 4970 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040137 4970 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040144 4970 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040152 4970 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040160 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040169 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040177 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040185 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040193 4970 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040200 4970 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040208 4970 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040217 4970 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040224 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040232 4970 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040240 4970 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040259 4970 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040268 4970 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040276 4970 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040284 4970 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040425 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040466 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040479 4970 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040489 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040498 4970 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040506 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040514 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: 
\"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040522 4970 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040530 4970 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040537 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040545 4970 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040552 4970 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040560 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040568 4970 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.040579 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.042190 4970 scope.go:117] "RemoveContainer" containerID="ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.042841 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.050572 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers 
with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.073324 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.085429 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.095042 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.105077 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.116567 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09
T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.129431 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.146961 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.165730 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.283016 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 12:06:47 crc kubenswrapper[4970]: W1209 12:06:47.291887 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-243efc64c4012d72fefee1d3235fad5327a2a411d51d3eae497a2ea18952e148 WatchSource:0}: Error finding container 243efc64c4012d72fefee1d3235fad5327a2a411d51d3eae497a2ea18952e148: Status 404 returned error can't find the container with id 243efc64c4012d72fefee1d3235fad5327a2a411d51d3eae497a2ea18952e148 Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.303528 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 12:06:47 crc kubenswrapper[4970]: W1209 12:06:47.313951 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-d74f0e24d2469012398faba66edfbd8da8cf325cfe8d24d816b2798f9a498c67 WatchSource:0}: Error finding container d74f0e24d2469012398faba66edfbd8da8cf325cfe8d24d816b2798f9a498c67: Status 404 returned error can't find the container with id d74f0e24d2469012398faba66edfbd8da8cf325cfe8d24d816b2798f9a498c67 Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.445663 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:06:47 crc kubenswrapper[4970]: E1209 12:06:47.445849 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:06:48.445830556 +0000 UTC m=+21.006311607 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.546591 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.546945 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.546972 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.547000 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:06:47 crc kubenswrapper[4970]: E1209 12:06:47.546823 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 12:06:47 crc kubenswrapper[4970]: E1209 12:06:47.547044 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 12:06:47 crc kubenswrapper[4970]: E1209 12:06:47.547070 4970 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:06:47 crc kubenswrapper[4970]: E1209 12:06:47.547127 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 12:06:47 crc kubenswrapper[4970]: E1209 12:06:47.547149 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 12:06:47 crc 
kubenswrapper[4970]: E1209 12:06:47.547162 4970 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:06:47 crc kubenswrapper[4970]: E1209 12:06:47.547186 4970 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 12:06:47 crc kubenswrapper[4970]: E1209 12:06:47.547130 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 12:06:48.547108571 +0000 UTC m=+21.107589622 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:06:47 crc kubenswrapper[4970]: E1209 12:06:47.547229 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 12:06:48.547212024 +0000 UTC m=+21.107693155 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:06:47 crc kubenswrapper[4970]: E1209 12:06:47.547238 4970 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 12:06:47 crc kubenswrapper[4970]: E1209 12:06:47.547264 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 12:06:48.547238554 +0000 UTC m=+21.107719705 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 12:06:47 crc kubenswrapper[4970]: E1209 12:06:47.547291 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-09 12:06:48.547280695 +0000 UTC m=+21.107761746 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.812317 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:06:47 crc kubenswrapper[4970]: E1209 12:06:47.812415 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.812553 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:06:47 crc kubenswrapper[4970]: E1209 12:06:47.812612 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.816085 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.816773 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.817429 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.818013 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.818535 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.818969 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.819502 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 09 
12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.820007 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.820678 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.821266 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.821750 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.822516 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.823142 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.823811 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.825615 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09
T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.827292 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.827812 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.828462 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.829216 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 
12:06:47.829782 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.830933 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.831382 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.831941 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.832821 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.833474 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.834502 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.835096 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.836109 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.836562 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.837121 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.837993 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.838493 4970 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.838587 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 
12:06:47.840591 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.841112 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.841615 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.843742 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.844583 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.845217 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.845896 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.846694 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.847928 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.848476 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.849196 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.850344 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.851420 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.851990 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.853226 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.853962 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.855651 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.856348 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.857063 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.857579 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.858360 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.859083 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.859628 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.865197 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.882225 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.901049 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.915042 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.934117 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.955113 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.960611 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"d74f0e24d2469012398faba66edfbd8da8cf325cfe8d24d816b2798f9a498c67"} Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.961823 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170"} Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.962033 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"243efc64c4012d72fefee1d3235fad5327a2a411d51d3eae497a2ea18952e148"} Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.963392 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c99a0733d681a2255ed6650323cdb8e0759b636118f61f08f3d841d34f81ccb0"} Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.963415 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"174956f8c9198d5c992709045948df1e5c0f4e6b56ea0d0d37f76c078736ab4f"} Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.963424 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"21aad0c0493e5432c0a598faef95696ba7445fc21e8a6d6d9e64c3d9737b9641"} Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.965299 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.966643 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44"} Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.966816 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:06:47 crc kubenswrapper[4970]: I1209 12:06:47.978669 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.003300 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.015909 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.030647 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.049299 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09
T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.089749 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.138441 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.169981 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.170424 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-sgdqg"] Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.170767 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.174761 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-4gt4z"] Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.175021 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-nqntn"] Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.175175 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-4gt4z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.176021 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-nqntn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.178225 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-rtdjh"] Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.178712 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.181305 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.181364 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.181662 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.181754 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.181887 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.185590 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.185625 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.185646 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.185694 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.185659 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.185809 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.185831 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.187035 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.189048 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.190020 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.194693 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.218992 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.236300 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.252120 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.252280 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81da4c74-d93e-4a7a-848a-c3539268368b-cni-binary-copy\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.252317 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-host-run-k8s-cni-cncf-io\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.252339 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-host-run-netns\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.252362 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a283668d-a884-4d62-95e2-1f0ae672f61c-proxy-tls\") pod \"machine-config-daemon-rtdjh\" (UID: \"a283668d-a884-4d62-95e2-1f0ae672f61c\") " pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.252429 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b0916312-9225-4366-81e2-f4a34d1ae9fe-system-cni-dir\") pod \"multus-additional-cni-plugins-nqntn\" (UID: \"b0916312-9225-4366-81e2-f4a34d1ae9fe\") " 
pod="openshift-multus/multus-additional-cni-plugins-nqntn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.252453 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b0916312-9225-4366-81e2-f4a34d1ae9fe-cnibin\") pod \"multus-additional-cni-plugins-nqntn\" (UID: \"b0916312-9225-4366-81e2-f4a34d1ae9fe\") " pod="openshift-multus/multus-additional-cni-plugins-nqntn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.252471 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b0916312-9225-4366-81e2-f4a34d1ae9fe-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-nqntn\" (UID: \"b0916312-9225-4366-81e2-f4a34d1ae9fe\") " pod="openshift-multus/multus-additional-cni-plugins-nqntn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.252504 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6d4b880d-64b3-496f-9a0c-1d2a26b0e33c-hosts-file\") pod \"node-resolver-4gt4z\" (UID: \"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\") " pod="openshift-dns/node-resolver-4gt4z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.252524 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsrdf\" (UniqueName: \"kubernetes.io/projected/6d4b880d-64b3-496f-9a0c-1d2a26b0e33c-kube-api-access-rsrdf\") pod \"node-resolver-4gt4z\" (UID: \"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\") " pod="openshift-dns/node-resolver-4gt4z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.252541 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a283668d-a884-4d62-95e2-1f0ae672f61c-mcd-auth-proxy-config\") pod \"machine-config-daemon-rtdjh\" (UID: \"a283668d-a884-4d62-95e2-1f0ae672f61c\") " pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.252560 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5c5t\" (UniqueName: \"kubernetes.io/projected/81da4c74-d93e-4a7a-848a-c3539268368b-kube-api-access-s5c5t\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.252574 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a283668d-a884-4d62-95e2-1f0ae672f61c-rootfs\") pod \"machine-config-daemon-rtdjh\" (UID: \"a283668d-a884-4d62-95e2-1f0ae672f61c\") " pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.252652 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-multus-cni-dir\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.252720 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/b0916312-9225-4366-81e2-f4a34d1ae9fe-cni-binary-copy\") pod \"multus-additional-cni-plugins-nqntn\" (UID: \"b0916312-9225-4366-81e2-f4a34d1ae9fe\") " pod="openshift-multus/multus-additional-cni-plugins-nqntn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.252766 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-os-release\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.252855 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-host-var-lib-kubelet\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.252885 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-hostroot\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.252916 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-multus-conf-dir\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.252962 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-multus-socket-dir-parent\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.253030 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b0916312-9225-4366-81e2-f4a34d1ae9fe-os-release\") pod \"multus-additional-cni-plugins-nqntn\" (UID: \"b0916312-9225-4366-81e2-f4a34d1ae9fe\") " pod="openshift-multus/multus-additional-cni-plugins-nqntn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.253069 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-system-cni-dir\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.253104 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-cnibin\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.253150 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-etc-kubernetes\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.253196 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-host-var-lib-cni-bin\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.253226 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-host-var-lib-cni-multus\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.253283 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwns8\" (UniqueName: \"kubernetes.io/projected/b0916312-9225-4366-81e2-f4a34d1ae9fe-kube-api-access-fwns8\") pod \"multus-additional-cni-plugins-nqntn\" (UID: \"b0916312-9225-4366-81e2-f4a34d1ae9fe\") " pod="openshift-multus/multus-additional-cni-plugins-nqntn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.253305 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81da4c74-d93e-4a7a-848a-c3539268368b-multus-daemon-config\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.253331 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b0916312-9225-4366-81e2-f4a34d1ae9fe-tuning-conf-dir\") pod \"multus-additional-cni-plugins-nqntn\" (UID: \"b0916312-9225-4366-81e2-f4a34d1ae9fe\") " pod="openshift-multus/multus-additional-cni-plugins-nqntn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.253363 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-host-run-multus-certs\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.253392 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spbcn\" (UniqueName: \"kubernetes.io/projected/a283668d-a884-4d62-95e2-1f0ae672f61c-kube-api-access-spbcn\") pod \"machine-config-daemon-rtdjh\" (UID: \"a283668d-a884-4d62-95e2-1f0ae672f61c\") " pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.266384 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.281806 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.295319 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c99a0733d681a2255ed6650323cdb8e0759b636118f61f08f3d841d34f81ccb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174956f8c9198d5c992709045948df1e5c0f4e6b56ea0d0d37f76c078736ab4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.308899 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.324176 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.337594 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c99a0733d681a2255ed6650323cdb8e0759b636118f61f08f3d841d34f81ccb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174956f8c9198d5c992709045948df1e5c0f4e6b56ea0d0d37f76c078736ab4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.353853 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.354119 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwns8\" (UniqueName: \"kubernetes.io/projected/b0916312-9225-4366-81e2-f4a34d1ae9fe-kube-api-access-fwns8\") pod \"multus-additional-cni-plugins-nqntn\" (UID: \"b0916312-9225-4366-81e2-f4a34d1ae9fe\") " pod="openshift-multus/multus-additional-cni-plugins-nqntn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.354180 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81da4c74-d93e-4a7a-848a-c3539268368b-multus-daemon-config\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.354205 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-host-run-multus-certs\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.354228 4970 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spbcn\" (UniqueName: \"kubernetes.io/projected/a283668d-a884-4d62-95e2-1f0ae672f61c-kube-api-access-spbcn\") pod \"machine-config-daemon-rtdjh\" (UID: \"a283668d-a884-4d62-95e2-1f0ae672f61c\") " pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.354270 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b0916312-9225-4366-81e2-f4a34d1ae9fe-tuning-conf-dir\") pod \"multus-additional-cni-plugins-nqntn\" (UID: \"b0916312-9225-4366-81e2-f4a34d1ae9fe\") " pod="openshift-multus/multus-additional-cni-plugins-nqntn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.354306 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-host-run-k8s-cni-cncf-io\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.354338 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81da4c74-d93e-4a7a-848a-c3539268368b-cni-binary-copy\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.354363 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-host-run-netns\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.354387 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a283668d-a884-4d62-95e2-1f0ae672f61c-proxy-tls\") pod \"machine-config-daemon-rtdjh\" (UID: \"a283668d-a884-4d62-95e2-1f0ae672f61c\") " pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.354411 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b0916312-9225-4366-81e2-f4a34d1ae9fe-cnibin\") pod \"multus-additional-cni-plugins-nqntn\" (UID: \"b0916312-9225-4366-81e2-f4a34d1ae9fe\") " pod="openshift-multus/multus-additional-cni-plugins-nqntn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.354401 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-host-run-k8s-cni-cncf-io\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.354439 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-host-run-netns\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.354435 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b0916312-9225-4366-81e2-f4a34d1ae9fe-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-nqntn\" (UID: \"b0916312-9225-4366-81e2-f4a34d1ae9fe\") " pod="openshift-multus/multus-additional-cni-plugins-nqntn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.354481 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-host-run-multus-certs\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.354576 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b0916312-9225-4366-81e2-f4a34d1ae9fe-cnibin\") pod \"multus-additional-cni-plugins-nqntn\" (UID: \"b0916312-9225-4366-81e2-f4a34d1ae9fe\") " pod="openshift-multus/multus-additional-cni-plugins-nqntn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.354646 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b0916312-9225-4366-81e2-f4a34d1ae9fe-tuning-conf-dir\") pod \"multus-additional-cni-plugins-nqntn\" (UID: \"b0916312-9225-4366-81e2-f4a34d1ae9fe\") " pod="openshift-multus/multus-additional-cni-plugins-nqntn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.354702 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b0916312-9225-4366-81e2-f4a34d1ae9fe-system-cni-dir\") pod \"multus-additional-cni-plugins-nqntn\" (UID: \"b0916312-9225-4366-81e2-f4a34d1ae9fe\") " pod="openshift-multus/multus-additional-cni-plugins-nqntn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.354751 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6d4b880d-64b3-496f-9a0c-1d2a26b0e33c-hosts-file\") pod \"node-resolver-4gt4z\" (UID: \"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\") " pod="openshift-dns/node-resolver-4gt4z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.354782 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsrdf\" (UniqueName: \"kubernetes.io/projected/6d4b880d-64b3-496f-9a0c-1d2a26b0e33c-kube-api-access-rsrdf\") pod \"node-resolver-4gt4z\" (UID: \"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\") " pod="openshift-dns/node-resolver-4gt4z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.354807 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a283668d-a884-4d62-95e2-1f0ae672f61c-mcd-auth-proxy-config\") pod \"machine-config-daemon-rtdjh\" (UID: \"a283668d-a884-4d62-95e2-1f0ae672f61c\") " pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.354833 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5c5t\" (UniqueName: \"kubernetes.io/projected/81da4c74-d93e-4a7a-848a-c3539268368b-kube-api-access-s5c5t\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.354831 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6d4b880d-64b3-496f-9a0c-1d2a26b0e33c-hosts-file\") pod \"node-resolver-4gt4z\" (UID: \"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\") " pod="openshift-dns/node-resolver-4gt4z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.354855 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a283668d-a884-4d62-95e2-1f0ae672f61c-rootfs\") pod \"machine-config-daemon-rtdjh\" (UID: \"a283668d-a884-4d62-95e2-1f0ae672f61c\") " pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.354879 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-multus-cni-dir\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.354909 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b0916312-9225-4366-81e2-f4a34d1ae9fe-cni-binary-copy\") pod \"multus-additional-cni-plugins-nqntn\" (UID: \"b0916312-9225-4366-81e2-f4a34d1ae9fe\") " pod="openshift-multus/multus-additional-cni-plugins-nqntn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.354914 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a283668d-a884-4d62-95e2-1f0ae672f61c-rootfs\") pod \"machine-config-daemon-rtdjh\" (UID: \"a283668d-a884-4d62-95e2-1f0ae672f61c\") " pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.354930 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-os-release\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.354957 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-host-var-lib-kubelet\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.354978 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-hostroot\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.355000 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-multus-conf-dir\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.355045 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-multus-socket-dir-parent\") pod \"multus-sgdqg\" (UID: 
\"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.355067 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81da4c74-d93e-4a7a-848a-c3539268368b-multus-daemon-config\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.355089 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-multus-cni-dir\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.355133 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-system-cni-dir\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.355134 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-hostroot\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.355144 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81da4c74-d93e-4a7a-848a-c3539268368b-cni-binary-copy\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.355175 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-multus-conf-dir\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.355150 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-host-var-lib-kubelet\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.355196 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-system-cni-dir\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.355154 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-cnibin\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.355195 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/b0916312-9225-4366-81e2-f4a34d1ae9fe-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-nqntn\" (UID: \"b0916312-9225-4366-81e2-f4a34d1ae9fe\") " pod="openshift-multus/multus-additional-cni-plugins-nqntn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.355197 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-multus-socket-dir-parent\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.355259 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-etc-kubernetes\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.355287 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-etc-kubernetes\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.355315 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b0916312-9225-4366-81e2-f4a34d1ae9fe-os-release\") pod \"multus-additional-cni-plugins-nqntn\" (UID: \"b0916312-9225-4366-81e2-f4a34d1ae9fe\") " pod="openshift-multus/multus-additional-cni-plugins-nqntn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.355212 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-cnibin\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.355340 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-host-var-lib-cni-bin\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.355362 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-host-var-lib-cni-multus\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.355387 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-host-var-lib-cni-bin\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.355419 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-host-var-lib-cni-multus\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " 
pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.355420 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/81da4c74-d93e-4a7a-848a-c3539268368b-os-release\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.355493 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b0916312-9225-4366-81e2-f4a34d1ae9fe-os-release\") pod \"multus-additional-cni-plugins-nqntn\" (UID: \"b0916312-9225-4366-81e2-f4a34d1ae9fe\") " pod="openshift-multus/multus-additional-cni-plugins-nqntn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.355558 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a283668d-a884-4d62-95e2-1f0ae672f61c-mcd-auth-proxy-config\") pod \"machine-config-daemon-rtdjh\" (UID: \"a283668d-a884-4d62-95e2-1f0ae672f61c\") " pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.355608 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b0916312-9225-4366-81e2-f4a34d1ae9fe-cni-binary-copy\") pod \"multus-additional-cni-plugins-nqntn\" (UID: \"b0916312-9225-4366-81e2-f4a34d1ae9fe\") " pod="openshift-multus/multus-additional-cni-plugins-nqntn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.355772 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b0916312-9225-4366-81e2-f4a34d1ae9fe-system-cni-dir\") pod \"multus-additional-cni-plugins-nqntn\" (UID: \"b0916312-9225-4366-81e2-f4a34d1ae9fe\") " pod="openshift-multus/multus-additional-cni-plugins-nqntn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.359780 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a283668d-a884-4d62-95e2-1f0ae672f61c-proxy-tls\") pod \"machine-config-daemon-rtdjh\" (UID: \"a283668d-a884-4d62-95e2-1f0ae672f61c\") " pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.369076 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.373559 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5c5t\" (UniqueName: \"kubernetes.io/projected/81da4c74-d93e-4a7a-848a-c3539268368b-kube-api-access-s5c5t\") pod \"multus-sgdqg\" (UID: \"81da4c74-d93e-4a7a-848a-c3539268368b\") " pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.374387 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwns8\" (UniqueName: \"kubernetes.io/projected/b0916312-9225-4366-81e2-f4a34d1ae9fe-kube-api-access-fwns8\") pod \"multus-additional-cni-plugins-nqntn\" (UID: \"b0916312-9225-4366-81e2-f4a34d1ae9fe\") " pod="openshift-multus/multus-additional-cni-plugins-nqntn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.374879 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spbcn\" (UniqueName: \"kubernetes.io/projected/a283668d-a884-4d62-95e2-1f0ae672f61c-kube-api-access-spbcn\") pod \"machine-config-daemon-rtdjh\" (UID: \"a283668d-a884-4d62-95e2-1f0ae672f61c\") " pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.376158 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsrdf\" (UniqueName: \"kubernetes.io/projected/6d4b880d-64b3-496f-9a0c-1d2a26b0e33c-kube-api-access-rsrdf\") pod \"node-resolver-4gt4z\" (UID: \"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\") " pod="openshift-dns/node-resolver-4gt4z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.381993 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.394268 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.411395 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4gt4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4gt4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.430961 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0916312-9225-4366-81e2-f4a34d1ae9fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nqntn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.445139 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.456653 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:06:48 crc kubenswrapper[4970]: E1209 12:06:48.456797 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:06:50.456768507 +0000 UTC m=+23.017249558 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.459883 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sgdqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81da4c74-d93e-4a7a-848a-c3539268368b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},
{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5c5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sgdqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.474377 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a283668d-a884-4d62-95e2-1f0ae672f61c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rtdjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.486374 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-sgdqg" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.493397 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.497514 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-4gt4z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.505645 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.508406 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-nqntn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.517742 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.518351 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.519962 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.520359 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.534390 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sgdqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81da4c74-d93e-4a7a-848a-c3539268368b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5c5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sgdqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.549088 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a283668d-a884-4d62-95e2-1f0ae672f61c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rtdjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.558046 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.558096 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.558120 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.558139 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:06:48 crc kubenswrapper[4970]: E1209 12:06:48.558260 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 12:06:48 crc kubenswrapper[4970]: E1209 12:06:48.558276 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 12:06:48 crc kubenswrapper[4970]: E1209 12:06:48.558333 4970 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:06:48 crc kubenswrapper[4970]: E1209 12:06:48.558344 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 12:06:48 crc kubenswrapper[4970]: E1209 12:06:48.558377 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 12:06:48 crc kubenswrapper[4970]: E1209 12:06:48.558390 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2025-12-09 12:06:50.558376651 +0000 UTC m=+23.118857702 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:06:48 crc kubenswrapper[4970]: E1209 12:06:48.558390 4970 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:06:48 crc kubenswrapper[4970]: E1209 12:06:48.558462 4970 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 12:06:48 crc kubenswrapper[4970]: E1209 12:06:48.558468 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 12:06:50.558448933 +0000 UTC m=+23.118929994 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:06:48 crc kubenswrapper[4970]: E1209 12:06:48.558487 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 12:06:50.558480014 +0000 UTC m=+23.118961055 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 12:06:48 crc kubenswrapper[4970]: E1209 12:06:48.558527 4970 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 12:06:48 crc kubenswrapper[4970]: E1209 12:06:48.558563 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 12:06:50.558553666 +0000 UTC m=+23.119034727 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.565991 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-sxdvn"] Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.569912 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.570624 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.573745 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.574134 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.574361 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.574575 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.574878 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.576018 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.578201 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.603267 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.620238 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c99a0733d681a2255ed6650323cdb8e0759b636118f61f08f3d841d34f81ccb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174956f8c9198d5c992709045948df1e5c0f4e6b56ea0d0d37f76c078736ab4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.634288 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.652959 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.659266 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-cni-netd\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.660872 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-slash\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.660942 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-node-log\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.660973 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-etc-openvswitch\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.661054 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-run-systemd\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.661181 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.661217 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-run-ovn-kubernetes\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.661338 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-systemd-units\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.661358 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-kubelet\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.661373 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-var-lib-openvswitch\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.661438 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-env-overrides\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.661455 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-run-netns\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.661522 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-ovn-node-metrics-cert\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.661594 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-log-socket\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 
12:06:48.663700 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-ovnkube-config\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.663768 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-run-ovn\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.663784 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-cni-bin\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.663860 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-ovnkube-script-lib\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.663876 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdl75\" (UniqueName: \"kubernetes.io/projected/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-kube-api-access-xdl75\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.663955 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-run-openvswitch\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.677565 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.709300 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.746076 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4gt4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4gt4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.766487 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-etc-openvswitch\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.766526 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-run-systemd\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.766543 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.766568 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-systemd-units\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.766591 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-run-ovn-kubernetes\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.766589 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-etc-openvswitch\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.766609 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-kubelet\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.766623 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-var-lib-openvswitch\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.766633 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.766636 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-env-overrides\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.766661 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-run-netns\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.766678 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-ovn-node-metrics-cert\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.766695 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-log-socket\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.766710 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-ovnkube-config\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.766742 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-run-ovn\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.766756 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-cni-bin\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.766772 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-ovnkube-script-lib\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.766789 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdl75\" (UniqueName: 
\"kubernetes.io/projected/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-kube-api-access-xdl75\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.766811 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-run-openvswitch\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.766827 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-slash\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.766841 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-node-log\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.766855 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-cni-netd\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.766894 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-cni-netd\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.766915 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-log-socket\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.767158 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-env-overrides\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.767194 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-systemd-units\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.767219 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-run-ovn-kubernetes\") pod \"ovnkube-node-sxdvn\" (UID: 
\"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.767239 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-kubelet\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.767292 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-var-lib-openvswitch\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.767354 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-ovnkube-config\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.767353 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-run-openvswitch\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.767384 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-slash\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.767405 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-run-ovn\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.766658 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0916312-9225-4366-81e2-f4a34d1ae9fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nqntn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.767445 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-run-systemd\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.767460 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-node-log\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.767609 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-run-netns\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.767689 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-cni-bin\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.767848 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-ovnkube-script-lib\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.776151 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-ovn-node-metrics-cert\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.791141 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdl75\" (UniqueName: \"kubernetes.io/projected/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-kube-api-access-xdl75\") pod \"ovnkube-node-sxdvn\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.792047 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.807886 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c99a0733d681a2255ed6650323cdb8e0759b636118f61f08f3d841d34f81ccb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174956f8c9198d5c992709045948df1e5c0f4e6b56ea0d0d37f76c078736ab4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.811960 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:48 crc kubenswrapper[4970]: E1209 12:06:48.812126 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.822960 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.837805 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.850395 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.864622 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.877496 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4gt4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4gt4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.892969 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0916312-9225-4366-81e2-f4a34d1ae9fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec
8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nqntn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.895941 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:48 crc kubenswrapper[4970]: W1209 12:06:48.907352 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod515fe67d_b7b7_4edb_a2b0_6f8794e8d802.slice/crio-6d84da9642b11c6b8f28fb8bcffdd37f48498488a74c0d511a91a3f95a2b4aa3 WatchSource:0}: Error finding container 6d84da9642b11c6b8f28fb8bcffdd37f48498488a74c0d511a91a3f95a2b4aa3: Status 404 returned error can't find the container with id 6d84da9642b11c6b8f28fb8bcffdd37f48498488a74c0d511a91a3f95a2b4aa3 Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.918647 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fcdfd55-d327-447e-8f4c-7a84f6ddf65d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe038bd99b053264014f6bdcf6e552df19aeaf0eefb1cad95087668edc14e197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968ef513b108b5e6469f2b11033f3c2af6276809b44f88da6dd279ef5c917566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83e4a0f5cdc0dbab781645e3af71029f89d6e26f881957510b76c4023d49d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0e892d8f1c25b644b4ec6779f1c1fbf9605d3
52003feb593ddf48f0b2fe7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b06c1523a6e3fc383e18aabc13379866e891da9e5f59ccd8098a530e206c780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.932506 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.949262 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sgdqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81da4c74-d93e-4a7a-848a-c3539268368b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5c5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sgdqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.960981 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a283668d-a884-4d62-95e2-1f0ae672f61c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rtdjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.973188 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" 
event={"ID":"b0916312-9225-4366-81e2-f4a34d1ae9fe","Type":"ContainerStarted","Data":"5d7ef170c12c2c66e962607870b764bfccc47044c3738b875bd1ec3b51e41ada"} Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.975108 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-4gt4z" event={"ID":"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c","Type":"ContainerStarted","Data":"01c6101aab0571a1f01e38ec24c74f99ad040aca13e5c95e91b28d3fc30b11ba"} Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.975164 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-4gt4z" event={"ID":"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c","Type":"ContainerStarted","Data":"42e3273031ddd6430c0c5fc0a0aa5b810bfc6bd8a152208ca6796cebec1294d1"} Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.980689 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.981469 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sgdqg" event={"ID":"81da4c74-d93e-4a7a-848a-c3539268368b","Type":"ContainerStarted","Data":"65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75"} Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.981724 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-sgdqg" event={"ID":"81da4c74-d93e-4a7a-848a-c3539268368b","Type":"ContainerStarted","Data":"ee57899d5e4d094eae9ca94ecd3c06c572d0542a869779334c24459d096bdddf"} Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.985779 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" event={"ID":"515fe67d-b7b7-4edb-a2b0-6f8794e8d802","Type":"ContainerStarted","Data":"6d84da9642b11c6b8f28fb8bcffdd37f48498488a74c0d511a91a3f95a2b4aa3"} Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.988225 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerStarted","Data":"e0f99b54b4c68dbff89836aae041da85d469d5df28de2d23df843262af2120f8"} Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.988367 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerStarted","Data":"44e1cad556f69de186f04494224fdf6a124eac70716bd1838622a640d8045baf"} Dec 09 12:06:48 crc kubenswrapper[4970]: I1209 12:06:48.988385 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerStarted","Data":"ba564bc5fdd96f7175c3efbbef60e6ba6f0772159f1ab97ecb57007597b12963"} Dec 09 12:06:49 crc kubenswrapper[4970]: I1209 12:06:49.004760 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sxdvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:49Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:49 crc kubenswrapper[4970]: I1209 12:06:49.023582 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:49Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:49 crc kubenswrapper[4970]: I1209 12:06:49.045204 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c99a0733d681a2255ed6650323cdb8e0759b636118f61f08f3d841d34f81ccb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174956f8c9198d5c992709045948df1e5c0f4e6b56ea0d0d37f76c078736ab4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:49Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:49 crc kubenswrapper[4970]: I1209 12:06:49.061989 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:49Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:49 crc kubenswrapper[4970]: I1209 12:06:49.078928 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:49Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:49 crc kubenswrapper[4970]: I1209 12:06:49.113741 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:49Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:49 crc kubenswrapper[4970]: I1209 12:06:49.152490 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4gt4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4gt4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:49Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:49 crc kubenswrapper[4970]: I1209 12:06:49.197198 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0916312-9225-4366-81e2-f4a34d1ae9fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec
8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nqntn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:49Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:49 crc kubenswrapper[4970]: I1209 12:06:49.233886 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:49Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:49 crc kubenswrapper[4970]: I1209 12:06:49.291118 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:49Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:49 crc kubenswrapper[4970]: I1209 12:06:49.320882 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sgdqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81da4c74-d93e-4a7a-848a-c3539268368b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5c5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sgdqg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:49Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:49 crc kubenswrapper[4970]: I1209 12:06:49.355866 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a283668d-a884-4d62-95e2-1f0ae672f61c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rtdjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:49Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:49 crc kubenswrapper[4970]: I1209 12:06:49.399531 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fcdfd55-d327-447e-8f4c-7a84f6ddf65d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe038bd99b053264014f6bdcf6e552df19aeaf0eefb1cad95087668edc14e197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968ef513b108b5e6469f2b11033f3c2af6276809b44f88da6dd279ef5c917566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83e4a0f5cdc0dbab781645e3af71029f89d6e26f881957510b76c4023d49d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0e892d8f1c25b644b4ec6779f1c1fbf9605d352003feb593ddf48f0b2fe7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b06c1523a6e3fc383e18aabc13379866e891da9e5f59ccd8098a530e206c780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:49Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:49 crc kubenswrapper[4970]: I1209 12:06:49.442195 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sxdvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:49Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:49 crc kubenswrapper[4970]: I1209 12:06:49.473619 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:49Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:49 crc kubenswrapper[4970]: I1209 12:06:49.811796 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:06:49 crc kubenswrapper[4970]: I1209 12:06:49.811802 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:06:49 crc kubenswrapper[4970]: E1209 12:06:49.812298 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:06:49 crc kubenswrapper[4970]: E1209 12:06:49.812360 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:06:49 crc kubenswrapper[4970]: I1209 12:06:49.991824 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"54bd37ab38c155cbc907218582095ffb8fc9abcfdf2601d66a44d9fd1d2743b4"} Dec 09 12:06:49 crc kubenswrapper[4970]: I1209 12:06:49.993066 4970 generic.go:334] "Generic (PLEG): container finished" podID="b0916312-9225-4366-81e2-f4a34d1ae9fe" containerID="2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2" exitCode=0 Dec 09 12:06:49 crc kubenswrapper[4970]: I1209 12:06:49.993139 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" event={"ID":"b0916312-9225-4366-81e2-f4a34d1ae9fe","Type":"ContainerDied","Data":"2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2"} Dec 09 12:06:49 crc kubenswrapper[4970]: I1209 12:06:49.994213 4970 generic.go:334] "Generic (PLEG): container finished" podID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerID="4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58" exitCode=0 Dec 09 12:06:49 crc kubenswrapper[4970]: I1209 12:06:49.994962 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" event={"ID":"515fe67d-b7b7-4edb-a2b0-6f8794e8d802","Type":"ContainerDied","Data":"4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58"} Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.010746 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:50Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.036553 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sxdvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:50Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.050196 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:50Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.062327 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c99a0733d681a2255ed6650323cdb8e0759b636118f61f08f3d841d34f81ccb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174956f8c9198d5c992709045948df1e5c0f4e6b56ea0d0d37f76c078736ab4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:50Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.073195 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:50Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.085494 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:50Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.102926 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:50Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.117852 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54bd37ab38c155cbc907218582095ffb8fc9abcfdf2601d66a44d9fd1d2743b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:50Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.133124 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4gt4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4gt4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:50Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.148624 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0916312-9225-4366-81e2-f4a34d1ae9fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nqntn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-09T12:06:50Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.164968 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fcdfd55-d327-447e-8f4c-7a84f6ddf65d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe038bd99b053264014f6bdcf6e552df19aeaf0eefb1cad95087668edc14e197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968ef513b108b5e6469f2b11033f3c2af6276809b44f88da6dd279ef5c917566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83e4a0f5cdc0dbab781645e3af71029f89d6e26f881957510b76c4023d49d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-0
9T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0e892d8f1c25b644b4ec6779f1c1fbf9605d352003feb593ddf48f0b2fe7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b06c1523a6e3fc383e18aabc13379866e891da9e5f59ccd8098a530e206c780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc2
23b4b68cb743f95ef13ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:50Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.175012 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:50Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.192325 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sgdqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81da4c74-d93e-4a7a-848a-c3539268368b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5c5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sgdqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:50Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.204365 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a283668d-a884-4d62-95e2-1f0ae672f61c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rtdjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:50Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.230703 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mount
Path\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sxdvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-09T12:06:50Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.255738 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:50Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.272011 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:50Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.288005 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c99a0733d681a2255ed6650323cdb8e0759b636118f61f08f3d841d34f81ccb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174956f8c9198d5c992709045948df1e5c0f4e6b56ea0d0d37f76c078736ab4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:50Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.299813 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:50Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.310081 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:50Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.323195 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54bd37ab38c155cbc907218582095ffb8fc9abcfdf2601d66a44d9fd1d2743b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:50Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.353190 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4gt4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c6101aab0571a1f01e38ec24c74f99ad040aca13e5c95e91b28d3fc30b11ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4gt4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:50Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.394166 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0916312-9225-4366-81e2-f4a34d1ae9fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nqntn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:50Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.436241 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:50Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.472722 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:50Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.482556 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:06:50 crc kubenswrapper[4970]: E1209 12:06:50.482655 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:06:54.482629873 +0000 UTC m=+27.043110934 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.535643 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sgdqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81da4c74-d93e-4a7a-848a-c3539268368b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\
"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5c5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sgdqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:50Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.564105 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a283668d-a884-4d62-95e2-1f0ae672f61c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f99b54b4c68dbff89836aae041da85d469d5df28de2d23df843262af2120f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e1cad556f69de186f04494224fdf6a124eac70716bd1838622a640d8045baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"sta
te\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rtdjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:50Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.583986 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.584041 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.584073 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.584098 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:50 crc kubenswrapper[4970]: E1209 12:06:50.584173 4970 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 12:06:50 crc kubenswrapper[4970]: E1209 12:06:50.584213 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 12:06:54.584200416 +0000 UTC m=+27.144681457 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 12:06:50 crc kubenswrapper[4970]: E1209 12:06:50.584311 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 12:06:50 crc kubenswrapper[4970]: E1209 12:06:50.584328 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 12:06:50 crc kubenswrapper[4970]: E1209 12:06:50.584340 4970 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:06:50 crc kubenswrapper[4970]: E1209 12:06:50.584367 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 12:06:54.58435872 +0000 UTC m=+27.144839771 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:06:50 crc kubenswrapper[4970]: E1209 12:06:50.584416 4970 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 12:06:50 crc kubenswrapper[4970]: E1209 12:06:50.584453 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 12:06:54.584444792 +0000 UTC m=+27.144925843 (durationBeforeRetry 4s). 
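[Editor's note] The MountVolume failures interleaved here follow the kubelet's per-operation retry backoff: SetUp for nginx-conf and the projected kube-api-access volumes fails because the kubelet's object cache has not yet registered the backing configmaps and secrets (the "not registered" errors), and nestedpendingoperations schedules the next attempt after a grown delay, here durationBeforeRetry 4s with the deadline at 12:06:54. A toy Go model of that shape, assuming doubling with a cap; this mirrors the pattern visible in the log, not kubelet's actual implementation:

```go
// backoff.go: a toy model of the retry pattern behind the
// "No retries permitted until ... (durationBeforeRetry 4s)" entries
// above: each failure grows the delay before the next attempt.
package main

import (
	"errors"
	"fmt"
	"time"
)

// mountVolume stands in for MountVolume.SetUp; here it always fails,
// the way it does above while the configmap is not yet registered.
func mountVolume() error {
	return errors.New(`object "openshift-network-console"/"networking-console-plugin" not registered`)
}

func main() {
	const (
		initial = 500 * time.Millisecond
		maxWait = 2 * time.Minute // assumed cap for this sketch
	)
	delay := initial
	for attempt := 1; attempt <= 5; attempt++ {
		if err := mountVolume(); err != nil {
			fmt.Printf("attempt %d failed (%v); no retries permitted for %s\n",
				attempt, err, delay)
			time.Sleep(delay) // kubelet records a retry deadline rather than sleeping
			if delay *= 2; delay > maxWait {
				delay = maxWait
			}
			continue
		}
		fmt.Println("mounted")
		return
	}
}
```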
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 12:06:50 crc kubenswrapper[4970]: E1209 12:06:50.584500 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 12:06:50 crc kubenswrapper[4970]: E1209 12:06:50.584509 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 12:06:50 crc kubenswrapper[4970]: E1209 12:06:50.584516 4970 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:06:50 crc kubenswrapper[4970]: E1209 12:06:50.584535 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 12:06:54.584529464 +0000 UTC m=+27.145010515 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.600890 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fcdfd55-d327-447e-8f4c-7a84f6ddf65d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe038bd99b053264014f6bdcf6e552df19aeaf0eefb1cad95087668edc14e197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968ef513b108b5e6469f2b11033f3c2af6276809b44f88da6dd279ef5c917566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83e4a0f5cdc0dbab781645e3af71029f89d6e26f881957510b76c4023d49d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0e892d8f1c25b644b4ec6779f1c1fbf9605d3
52003feb593ddf48f0b2fe7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b06c1523a6e3fc383e18aabc13379866e891da9e5f59ccd8098a530e206c780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:50Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.611456 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.613070 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.613114 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.613126 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.613226 4970 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.623868 4970 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.624111 4970 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.627451 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.627495 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.627515 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.627533 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.627546 4970 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:50Z","lastTransitionTime":"2025-12-09T12:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:50 crc kubenswrapper[4970]: E1209 12:06:50.644429 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6d3f717-4497-45c3-b697-e6823c6fdf80\\\",\\\"systemUUID\\\":\\\"44b8bf7c-aaf1-4d6f-acd6-22019bebcd7e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:50Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.647301 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.647337 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
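[Editor's note] Each "failed to patch status" entry embeds the entire strategic merge patch (note the $setElementOrder/conditions directive) as a Go-quoted string inside err="...", and the logger quotes it once more, which is why the JSON arrives triple-backslashed. A sketch for unwrapping such a payload back into plain JSON; the sample string is trimmed from the machine-config-daemon entry above, and the quote matching is simplified for the demo (it assumes the payload ends with }}", as these status patches do):

```go
// unwrap.go: turns the doubly quoted err="failed to patch status
// \"{\\\"metadata\\\"...}\"" payloads above back into plain JSON.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"strconv"
	"strings"
)

func main() {
	// The err value as it appears in the journal, trimmed for the demo.
	raw := `"failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a283668d-a884-4d62-95e2-1f0ae672f61c\\\"}}\" for pod ..."`

	// First unquote removes the logger's escaping layer.
	once, err := strconv.Unquote(raw)
	if err != nil {
		log.Fatalf("outer unquote: %v", err)
	}

	// The patch is now a Go-quoted string embedded in the message.
	start := strings.Index(once, `"{`)
	end := strings.LastIndex(once, `}"`) + 2
	patch, err := strconv.Unquote(once[start:end])
	if err != nil {
		log.Fatalf("inner unquote: %v", err)
	}

	var doc map[string]any
	if err := json.Unmarshal([]byte(patch), &doc); err != nil {
		log.Fatalf("unmarshal: %v", err)
	}
	fmt.Println(patch) // {"metadata":{"uid":"a283668d-..."}}
}
```

With the full payload this recovers the complete status object (conditions, containerStatuses, the node's image list), which is much easier to read than the escaped form in the journal.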
event="NodeHasNoDiskPressure" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.647353 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.647369 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.647381 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:50Z","lastTransitionTime":"2025-12-09T12:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:50 crc kubenswrapper[4970]: E1209 12:06:50.657829 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6d3f717-4497-45c3-b697-e6823c6fdf80\\\",\\\"systemUUID\\\":\\\"44b8bf7c-aaf1-4d6f-acd6-22019bebcd7e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:50Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.660343 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.660368 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
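[Editor's note] The identical node-status patch is rejected repeatedly within the same second, always on the same node.network-node-identity.openshift.io webhook error. The two timestamps the error quotes put the node clock roughly 107 days past the certificate's notAfter, which a few lines of Go confirm (timestamps copied verbatim from the entries above):

```go
// skew.go: checks how far past the certificate's notAfter the node
// clock is, using the two RFC 3339 timestamps from the webhook error.
package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	now, err := time.Parse(time.RFC3339, "2025-12-09T12:06:50Z")
	if err != nil {
		log.Fatal(err)
	}
	notAfter, err := time.Parse(time.RFC3339, "2025-08-24T17:21:41Z")
	if err != nil {
		log.Fatal(err)
	}
	d := now.Sub(notAfter)
	fmt.Printf("expired %s ago (%.0f days)\n", d.Round(time.Hour), d.Hours()/24)
}
```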
event="NodeHasNoDiskPressure" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.660378 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.660393 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.660404 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:50Z","lastTransitionTime":"2025-12-09T12:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:50 crc kubenswrapper[4970]: E1209 12:06:50.673469 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6d3f717-4497-45c3-b697-e6823c6fdf80\\\",\\\"systemUUID\\\":\\\"44b8bf7c-aaf1-4d6f-acd6-22019bebcd7e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:50Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.677003 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.677036 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.677045 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.677059 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.677074 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:50Z","lastTransitionTime":"2025-12-09T12:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:50 crc kubenswrapper[4970]: E1209 12:06:50.687460 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6d3f717-4497-45c3-b697-e6823c6fdf80\\\",\\\"systemUUID\\\":\\\"44b8bf7c-aaf1-4d6f-acd6-22019bebcd7e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:50Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.690427 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.690462 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.690471 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.690484 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.690494 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:50Z","lastTransitionTime":"2025-12-09T12:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:50 crc kubenswrapper[4970]: E1209 12:06:50.700566 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6d3f717-4497-45c3-b697-e6823c6fdf80\\\",\\\"systemUUID\\\":\\\"44b8bf7c-aaf1-4d6f-acd6-22019bebcd7e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:50Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:50 crc kubenswrapper[4970]: E1209 12:06:50.700684 4970 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.702368 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.702396 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.702405 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.702419 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.702429 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:50Z","lastTransitionTime":"2025-12-09T12:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.804889 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.804938 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.804948 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.804964 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.804980 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:50Z","lastTransitionTime":"2025-12-09T12:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.812422 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:50 crc kubenswrapper[4970]: E1209 12:06:50.812537 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.848696 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-8vkz2"] Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.849194 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-8vkz2" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.851137 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.851297 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.851349 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.852344 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.865505 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:50Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.877358 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c99a0733d681a2255ed6650323cdb8e0759b636118f61f08f3d841d34f81ccb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174956f8c9198d5c992709045948df1e5c0f4e6b56ea0d0d37f76c078736ab4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:50Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.888794 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:50Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.900969 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:50Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.909722 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.909766 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.909777 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.909792 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.909802 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:50Z","lastTransitionTime":"2025-12-09T12:06:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.919017 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:50Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.951414 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54bd37ab38c155cbc907218582095ffb8fc9abcfdf2601d66a44d9fd1d2743b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:50Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.987628 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fwqp\" (UniqueName: \"kubernetes.io/projected/4bd9c275-1bfb-4080-8279-3ad903c7fd2f-kube-api-access-9fwqp\") pod \"node-ca-8vkz2\" (UID: \"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\") " pod="openshift-image-registry/node-ca-8vkz2" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.987845 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4bd9c275-1bfb-4080-8279-3ad903c7fd2f-serviceca\") pod \"node-ca-8vkz2\" (UID: \"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\") " pod="openshift-image-registry/node-ca-8vkz2" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.987942 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4bd9c275-1bfb-4080-8279-3ad903c7fd2f-host\") pod \"node-ca-8vkz2\" (UID: \"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\") " pod="openshift-image-registry/node-ca-8vkz2" Dec 09 12:06:50 crc kubenswrapper[4970]: I1209 12:06:50.992868 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4gt4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c6101aab0571a1f01e38ec24c74f99ad040aca13e5c95e91b28d3fc30b11ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4gt4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:50Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.001076 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" event={"ID":"515fe67d-b7b7-4edb-a2b0-6f8794e8d802","Type":"ContainerStarted","Data":"ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb"} Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.001134 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" event={"ID":"515fe67d-b7b7-4edb-a2b0-6f8794e8d802","Type":"ContainerStarted","Data":"71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8"} Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.001147 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" event={"ID":"515fe67d-b7b7-4edb-a2b0-6f8794e8d802","Type":"ContainerStarted","Data":"7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b"} Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.001156 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" event={"ID":"515fe67d-b7b7-4edb-a2b0-6f8794e8d802","Type":"ContainerStarted","Data":"ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8"} Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.001164 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" event={"ID":"515fe67d-b7b7-4edb-a2b0-6f8794e8d802","Type":"ContainerStarted","Data":"d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47"} Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.001173 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" event={"ID":"515fe67d-b7b7-4edb-a2b0-6f8794e8d802","Type":"ContainerStarted","Data":"eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45"} Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.003001 4970 generic.go:334] "Generic (PLEG): container finished" podID="b0916312-9225-4366-81e2-f4a34d1ae9fe" containerID="afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60" exitCode=0 Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.003121 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" event={"ID":"b0916312-9225-4366-81e2-f4a34d1ae9fe","Type":"ContainerDied","Data":"afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60"} Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.011409 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.011452 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.011464 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.011480 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.011491 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:51Z","lastTransitionTime":"2025-12-09T12:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.033328 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0916312-9225-4366-81e2-f4a34d1ae9fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nqntn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:51Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.081400 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fcdfd55-d327-447e-8f4c-7a84f6ddf65d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe038bd99b053264014f6bdcf6e552df19aeaf0eefb1cad95087668edc14e197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968ef513b108b5e646
9f2b11033f3c2af6276809b44f88da6dd279ef5c917566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83e4a0f5cdc0dbab781645e3af71029f89d6e26f881957510b76c4023d49d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0e892d8f1c25b644b4ec6779f1c1fbf9605d352003feb593ddf48f0b2fe7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b06c1523a6e3fc383e18aabc13379866e891da9e5f59ccd8098a530e206c780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:51Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.088954 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fwqp\" (UniqueName: \"kubernetes.io/projected/4bd9c275-1bfb-4080-8279-3ad903c7fd2f-kube-api-access-9fwqp\") pod \"node-ca-8vkz2\" (UID: \"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\") " 
pod="openshift-image-registry/node-ca-8vkz2" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.088999 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4bd9c275-1bfb-4080-8279-3ad903c7fd2f-serviceca\") pod \"node-ca-8vkz2\" (UID: \"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\") " pod="openshift-image-registry/node-ca-8vkz2" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.089050 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4bd9c275-1bfb-4080-8279-3ad903c7fd2f-host\") pod \"node-ca-8vkz2\" (UID: \"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\") " pod="openshift-image-registry/node-ca-8vkz2" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.089356 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4bd9c275-1bfb-4080-8279-3ad903c7fd2f-host\") pod \"node-ca-8vkz2\" (UID: \"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\") " pod="openshift-image-registry/node-ca-8vkz2" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.090084 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4bd9c275-1bfb-4080-8279-3ad903c7fd2f-serviceca\") pod \"node-ca-8vkz2\" (UID: \"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\") " pod="openshift-image-registry/node-ca-8vkz2" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.111199 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:51Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.114729 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.114766 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.114779 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.114839 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.114852 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:51Z","lastTransitionTime":"2025-12-09T12:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.140312 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fwqp\" (UniqueName: \"kubernetes.io/projected/4bd9c275-1bfb-4080-8279-3ad903c7fd2f-kube-api-access-9fwqp\") pod \"node-ca-8vkz2\" (UID: \"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\") " pod="openshift-image-registry/node-ca-8vkz2" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.162593 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-8vkz2" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.175710 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sgdqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81da4c74-d93e-4a7a-848a-c3539268368b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5c5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\
\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sgdqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:51Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:51 crc kubenswrapper[4970]: W1209 12:06:51.180034 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bd9c275_1bfb_4080_8279_3ad903c7fd2f.slice/crio-2c9894e477650f997c72809c0a24e4c327a1f21ca72d990125540b507952da00 WatchSource:0}: Error finding container 2c9894e477650f997c72809c0a24e4c327a1f21ca72d990125540b507952da00: Status 404 returned error can't find the container with id 2c9894e477650f997c72809c0a24e4c327a1f21ca72d990125540b507952da00 Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.213695 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a283668d-a884-4d62-95e2-1f0ae672f61c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f99b54b4c68dbff89836aae041da85d469d5df28de2d23df843262af2120f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e1cad556f69de186f04494224fdf6a124eac70716bd1838622a640d8045baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c2
8fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rtdjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:51Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.217730 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.217776 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.217787 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.217804 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.217815 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:51Z","lastTransitionTime":"2025-12-09T12:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.254019 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:51Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.295889 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sxdvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:51Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.320548 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.320592 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.320604 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.320621 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.320671 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:51Z","lastTransitionTime":"2025-12-09T12:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.330817 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8vkz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fwqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8vkz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:51Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.376345 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fcdfd55-d327-447e-8f4c-7a84f6ddf65d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe038bd99b053264014f6bdcf6e552df19aeaf0eefb1cad95087668edc14e197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968ef513b108b5e6469f2b11033f3c2
af6276809b44f88da6dd279ef5c917566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83e4a0f5cdc0dbab781645e3af71029f89d6e26f881957510b76c4023d49d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0e892d8f1c25b644b4ec6779f1c1fbf9605d352003feb593ddf48f0b2fe7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b06c1523a6e3fc383e18aabc13379866e891da9e5f59ccd8098a530e206c780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:51Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.412986 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:51Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.423013 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.423047 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.423055 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.423072 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.423080 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:51Z","lastTransitionTime":"2025-12-09T12:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.452147 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sgdqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81da4c74-d93e-4a7a-848a-c3539268368b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5c5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sgdqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:51Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.491730 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a283668d-a884-4d62-95e2-1f0ae672f61c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f99b54b4c68dbff89836aae041da85d469d5df28de2d23df843262af2120f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e1cad556f69de186f04494224fdf6a124eac70716bd1838622a640d8045baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rtdjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:51Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.525123 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.525182 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.525196 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.525214 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.525226 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:51Z","lastTransitionTime":"2025-12-09T12:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.535600 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:51Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.575423 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sxdvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:51Z 
is after 2025-08-24T17:21:41Z" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.612684 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8vkz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fwqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8vkz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:51Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.627415 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.627456 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.627467 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.627484 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.627495 4970 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:51Z","lastTransitionTime":"2025-12-09T12:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.652073 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apis
erver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:51Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.693711 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c99a0733d681a2255ed6650323cdb8e0759b636118f61f08f3d841d34f81ccb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174956f8c9198d5c992709045948df1e5c0f4e6b56ea0d0d37f76c078736ab4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:51Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.729418 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.729451 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.729462 4970 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.729477 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.729489 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:51Z","lastTransitionTime":"2025-12-09T12:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.734160 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:51Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.776385 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:51Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.812208 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.812311 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:06:51 crc kubenswrapper[4970]: E1209 12:06:51.812341 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:06:51 crc kubenswrapper[4970]: E1209 12:06:51.812462 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.812857 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:51Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.830991 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.831025 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.831033 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.831046 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.831054 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:51Z","lastTransitionTime":"2025-12-09T12:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.853346 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54bd37ab38c155cbc907218582095ffb8fc9abcfdf2601d66a44d9fd1d2743b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:51Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.891004 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4gt4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c6101aab0571a1f01e38ec24c74f99ad040aca13e5c95e91b28d3fc30b11ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4gt4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:51Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.933752 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.933804 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.933821 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.933846 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.933864 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:51Z","lastTransitionTime":"2025-12-09T12:06:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:51 crc kubenswrapper[4970]: I1209 12:06:51.938475 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0916312-9225-4366-81e2-f4a34d1ae9fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nqntn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:51Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.006771 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-8vkz2" event={"ID":"4bd9c275-1bfb-4080-8279-3ad903c7fd2f","Type":"ContainerStarted","Data":"4da5831e87291bc84ec0b0a2a081543c263a74b18279efd9cb22677db3da456b"} Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.006817 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-8vkz2" event={"ID":"4bd9c275-1bfb-4080-8279-3ad903c7fd2f","Type":"ContainerStarted","Data":"2c9894e477650f997c72809c0a24e4c327a1f21ca72d990125540b507952da00"} Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.008994 4970 generic.go:334] "Generic (PLEG): container finished" podID="b0916312-9225-4366-81e2-f4a34d1ae9fe" containerID="d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e" exitCode=0 Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.009169 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" event={"ID":"b0916312-9225-4366-81e2-f4a34d1ae9fe","Type":"ContainerDied","Data":"d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e"} Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.026371 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.035893 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.035924 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.035933 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.035949 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.035959 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:52Z","lastTransitionTime":"2025-12-09T12:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.047460 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52a
fc782e9a1b4a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sxdvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.058562 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8vkz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4da5831e87291bc84ec0b0a2a081543c263a74b18279efd9cb22677db3da456b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fwqp\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8vkz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.091328 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.135726 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.138147 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.138191 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.138203 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.138219 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.138231 4970 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:52Z","lastTransitionTime":"2025-12-09T12:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.171486 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c99a0733d681a2255ed6650323cdb8e0759b636118f61f08f3d841d34f81ccb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174956f8c9198d5c992709045948df1e5c0f4e6b56ea0d0d37f76c078736ab4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.215212 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0916312-9225-4366-81e2-f4a34d1ae9fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\
\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nqntn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.241365 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.241402 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.241413 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.241427 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.241438 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:52Z","lastTransitionTime":"2025-12-09T12:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.251086 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.295133 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.333830 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54bd37ab38c155cbc907218582095ffb8fc9abcfdf2601d66a44d9fd1d2743b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.343465 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.343520 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.343532 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.343550 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.343566 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:52Z","lastTransitionTime":"2025-12-09T12:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.371377 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4gt4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c6101aab0571a1f01e38ec24c74f99ad040aca13e5c95e91b28d3fc30b11ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4gt4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:52 crc 
kubenswrapper[4970]: I1209 12:06:52.421786 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fcdfd55-d327-447e-8f4c-7a84f6ddf65d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe038bd99b053264014f6bdcf6e552df19aeaf0eefb1cad95087668edc14e197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968ef513b108b5e6469f2b11033f3c2af6276809b44f88da6dd279ef5c917566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83e4a0f5cdc0dbab781645e3af71029f89d6e26f881957510b76c4023d49d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0e892d8f1c25b644b4ec6779f1c1fbf9605d352003feb593ddf48f0b2fe7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b06c1523a6e3fc383e18aabc13379866e891da9e5f59ccd8098a530e206c780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.446278 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.446321 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.446332 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.446350 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.446361 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:52Z","lastTransitionTime":"2025-12-09T12:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.457796 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.497472 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sgdqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81da4c74-d93e-4a7a-848a-c3539268368b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5c5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sgdqg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.535614 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a283668d-a884-4d62-95e2-1f0ae672f61c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f99b54b4c68dbff89836aae041da85d469d5df28de2d23df843262af2120f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e1cad556f69de186f04494224fdf6a124eac70716bd1838622a640d8045baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-rtdjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.548586 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.548624 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.548634 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.548650 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.548661 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:52Z","lastTransitionTime":"2025-12-09T12:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.585032 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fcdfd55-d327-447e-8f4c-7a84f6ddf65d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe038bd99b053264014f6bdcf6e552df19aeaf0eefb1cad95087668edc14e197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd
/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968ef513b108b5e6469f2b11033f3c2af6276809b44f88da6dd279ef5c917566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83e4a0f5cdc0dbab781645e3af71029f89d6e26f881957510b76c4023d49d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0e892d8f1c25b644b4ec6779f1c1fbf9605d352003feb593ddf48f0b2fe7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b06c1523a6e3fc383e18aabc13379866e891da9e5f59ccd8098a530e206c780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\
"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.651629 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.651676 4970 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.651687 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.651704 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.651717 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:52Z","lastTransitionTime":"2025-12-09T12:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.658164 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.673180 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sgdqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81da4c74-d93e-4a7a-848a-c3539268368b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5c5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sgdqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.696067 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a283668d-a884-4d62-95e2-1f0ae672f61c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f99b54b4c68dbff89836aae041da85d469d5df28de2d23df843262af2120f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://
44e1cad556f69de186f04494224fdf6a124eac70716bd1838622a640d8045baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rtdjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.733771 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.754498 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.754540 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.754552 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.754570 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.754580 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:52Z","lastTransitionTime":"2025-12-09T12:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.776562 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sxdvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:52Z 
is after 2025-08-24T17:21:41Z" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.811747 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8vkz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4da5831e87291bc84ec0b0a2a081543c263a74b18279efd9cb22677db3da456b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fwqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8vkz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.811811 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:52 crc kubenswrapper[4970]: E1209 12:06:52.811995 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.857036 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.857100 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.857134 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.857157 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.857172 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:52Z","lastTransitionTime":"2025-12-09T12:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.858631 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.893196 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c99a0733d681a2255ed6650323cdb8e0759b636118f61f08f3d841d34f81ccb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174956f8c9198d5c992709045948df1e5c0f4e6b56ea0d0d37f76c078736ab4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.934838 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.960488 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.960529 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.960539 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.960554 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.960564 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:52Z","lastTransitionTime":"2025-12-09T12:06:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:52 crc kubenswrapper[4970]: I1209 12:06:52.976064 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:52Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.013449 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:53Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.016012 4970 generic.go:334] "Generic (PLEG): container finished" podID="b0916312-9225-4366-81e2-f4a34d1ae9fe" containerID="614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef" exitCode=0 Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.016102 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" event={"ID":"b0916312-9225-4366-81e2-f4a34d1ae9fe","Type":"ContainerDied","Data":"614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef"} Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.020120 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" event={"ID":"515fe67d-b7b7-4edb-a2b0-6f8794e8d802","Type":"ContainerStarted","Data":"bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457"} Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.050844 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54bd37ab38c155cbc907218582095ffb8fc9abcfdf2601d66a44d9fd1d2743b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:53Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.062751 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.062796 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.062807 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.062825 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.062838 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:53Z","lastTransitionTime":"2025-12-09T12:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.090927 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4gt4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c6101aab0571a1f01e38ec24c74f99ad040aca13e5c95e91b28d3fc30b11ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4gt4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:53Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.133523 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0916312-9225-4366-81e2-f4a34d1ae9fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09
T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nqntn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:53Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.165699 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.165779 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.165797 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.165823 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.165839 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:53Z","lastTransitionTime":"2025-12-09T12:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.171516 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54bd37ab38c155cbc907218582095ffb8fc9abcfdf2601d66a44d9fd1d2743b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:53Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.211965 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4gt4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c6101aab0571a1f01e38ec24c74f99ad040aca13e5c95e91b28d3fc30b11ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4gt4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:53Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.253010 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0916312-9225-4366-81e2-f4a34d1ae9fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"w
aiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nqntn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:53Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.269164 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.269197 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.269208 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.269227 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.269239 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:53Z","lastTransitionTime":"2025-12-09T12:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.293457 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:53Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.332182 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:53Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.372361 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.372395 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.372405 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.372421 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.372432 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:53Z","lastTransitionTime":"2025-12-09T12:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.372488 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sgdqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81da4c74-d93e-4a7a-848a-c3539268368b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5c5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sgdqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:53Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.413933 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a283668d-a884-4d62-95e2-1f0ae672f61c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f99b54b4c68dbff89836aae041da85d469d5df28de2d23df843262af2120f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e1cad556f69de186f04494224fdf6a124eac70716bd1838622a640d8045baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rtdjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:53Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.457427 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fcdfd55-d327-447e-8f4c-7a84f6ddf65d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe038bd99b053264014f6bdcf6e552df19aeaf0eefb1cad95087668edc14e197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968ef513b108b5e6469f2b11033f3c2af6276809b44f88da6dd279ef5c917566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83e4a0f5cdc0dbab781645e3af71029f89d
6e26f881957510b76c4023d49d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0e892d8f1c25b644b4ec6779f1c1fbf9605d352003feb593ddf48f0b2fe7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b06c1523a6e3fc383e18aabc13379866e891da9e5f59ccd8098a530e206c780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:53Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.474134 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.474158 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.474166 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.474178 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.474187 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:53Z","lastTransitionTime":"2025-12-09T12:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.491776 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:53Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.530031 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8vkz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4da5831e87291bc84ec0b0a2a081543c263a74b18279efd9cb22677db3da456b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fwqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8vkz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:53Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.573324 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:53Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.576852 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.576880 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.576888 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.576900 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.576909 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:53Z","lastTransitionTime":"2025-12-09T12:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.627844 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52a
fc782e9a1b4a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sxdvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:53Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.656362 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:53Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.678487 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.678521 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.678531 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.678545 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.678554 4970 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:53Z","lastTransitionTime":"2025-12-09T12:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.691617 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c99a0733d681a2255ed6650323cdb8e0759b636118f61f08f3d841d34f81ccb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174956f8c9198d5c992709045948df1e5c0f4e6b56ea0d0d37f76c078736ab4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:53Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.730437 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:53Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.781716 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.781757 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.781768 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.781783 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.781795 4970 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:53Z","lastTransitionTime":"2025-12-09T12:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.812199 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.812307 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:06:53 crc kubenswrapper[4970]: E1209 12:06:53.812360 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:06:53 crc kubenswrapper[4970]: E1209 12:06:53.812485 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.884364 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.884405 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.884422 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.884445 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.884461 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:53Z","lastTransitionTime":"2025-12-09T12:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.987049 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.987095 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.987110 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.987131 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:53 crc kubenswrapper[4970]: I1209 12:06:53.987143 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:53Z","lastTransitionTime":"2025-12-09T12:06:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.027327 4970 generic.go:334] "Generic (PLEG): container finished" podID="b0916312-9225-4366-81e2-f4a34d1ae9fe" containerID="d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb" exitCode=0 Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.027377 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" event={"ID":"b0916312-9225-4366-81e2-f4a34d1ae9fe","Type":"ContainerDied","Data":"d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb"} Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.043459 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:54Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.057881 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c99a0733d681a2255ed6650323cdb8e0759b636118f61f08f3d841d34f81ccb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174956f8c9198d5c992709045948df1e5c0f4e6b56ea0d0d37f76c078736ab4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:54Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.075953 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:54Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.089922 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.089966 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.089978 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.089995 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.090007 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:54Z","lastTransitionTime":"2025-12-09T12:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.092638 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:54Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.108021 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:54Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.123032 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54bd37ab38c155cbc907218582095ffb8fc9abcfdf2601d66a44d9fd1d2743b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:54Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.136305 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4gt4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c6101aab0571a1f01e38ec24c74f99ad040aca13e5c95e91b28d3fc30b11ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4gt4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:54Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.155945 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0916312-9225-4366-81e2-f4a34d1ae9fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nqntn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:54Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.179891 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fcdfd55-d327-447e-8f4c-7a84f6ddf65d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe038bd99b053264014f6bdcf6e552df19aeaf0eefb1cad95087668edc14e197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968ef513b108b5e6469f2b11033f3c2af6276809b44f88da6dd279ef5c917566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83e4a0f5cdc0dbab781645e3af71029f89d6e26f881957510b76c4023d49d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0e892d8f1c25b644b4ec6779f1c1fbf9605d3
52003feb593ddf48f0b2fe7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b06c1523a6e3fc383e18aabc13379866e891da9e5f59ccd8098a530e206c780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:54Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.192765 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.192827 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.192848 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.192873 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.192889 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:54Z","lastTransitionTime":"2025-12-09T12:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.197124 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:54Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.209920 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sgdqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81da4c74-d93e-4a7a-848a-c3539268368b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5c5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sgdqg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:54Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.224359 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a283668d-a884-4d62-95e2-1f0ae672f61c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f99b54b4c68dbff89836aae041da85d469d5df28de2d23df843262af2120f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e1cad556f69de186f04494224fdf6a124eac70716bd1838622a640d8045baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-rtdjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:54Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.252317 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:54Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.295700 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.295736 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.295745 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.295759 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.295768 4970 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:54Z","lastTransitionTime":"2025-12-09T12:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.296444 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sxdvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:54Z 
is after 2025-08-24T17:21:41Z" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.330988 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8vkz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4da5831e87291bc84ec0b0a2a081543c263a74b18279efd9cb22677db3da456b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fwqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8vkz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:54Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.399105 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.399153 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.399164 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.399181 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.399193 4970 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:54Z","lastTransitionTime":"2025-12-09T12:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.501743 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.501784 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.501792 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.501808 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.501818 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:54Z","lastTransitionTime":"2025-12-09T12:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.535613 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:06:54 crc kubenswrapper[4970]: E1209 12:06:54.535892 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:02.535861866 +0000 UTC m=+35.096342927 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.605347 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.605410 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.605428 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.605454 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.605483 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:54Z","lastTransitionTime":"2025-12-09T12:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.636906 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.636966 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.636994 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.637025 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:06:54 crc kubenswrapper[4970]: E1209 12:06:54.637116 4970 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 12:06:54 crc kubenswrapper[4970]: E1209 12:06:54.637163 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 12:06:54 crc kubenswrapper[4970]: E1209 12:06:54.637183 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 12:06:54 crc kubenswrapper[4970]: E1209 12:06:54.637195 4970 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:06:54 crc kubenswrapper[4970]: E1209 12:06:54.637192 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 12:06:54 crc kubenswrapper[4970]: E1209 12:06:54.637450 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 12:06:54 crc kubenswrapper[4970]: E1209 12:06:54.638237 4970 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:06:54 crc kubenswrapper[4970]: E1209 12:06:54.637196 4970 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 12:06:54 crc kubenswrapper[4970]: E1209 12:06:54.637224 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 12:07:02.637196602 +0000 UTC m=+35.197677683 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 12:06:54 crc kubenswrapper[4970]: E1209 12:06:54.638409 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 12:07:02.638390623 +0000 UTC m=+35.198871704 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:06:54 crc kubenswrapper[4970]: E1209 12:06:54.638431 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 12:07:02.638419354 +0000 UTC m=+35.198900445 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:06:54 crc kubenswrapper[4970]: E1209 12:06:54.638453 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 12:07:02.638443055 +0000 UTC m=+35.198924136 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.708819 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.708879 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.708902 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.708931 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.708953 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:54Z","lastTransitionTime":"2025-12-09T12:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.811239 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.811294 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.811301 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.811314 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.811323 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:54Z","lastTransitionTime":"2025-12-09T12:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.811621 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:54 crc kubenswrapper[4970]: E1209 12:06:54.811777 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.914036 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.914095 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.914107 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.914131 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:54 crc kubenswrapper[4970]: I1209 12:06:54.914149 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:54Z","lastTransitionTime":"2025-12-09T12:06:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.016844 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.016881 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.016894 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.016911 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.016927 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:55Z","lastTransitionTime":"2025-12-09T12:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.119590 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.119623 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.119632 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.119649 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.119659 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:55Z","lastTransitionTime":"2025-12-09T12:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.222570 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.222611 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.222622 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.222641 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.222652 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:55Z","lastTransitionTime":"2025-12-09T12:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.326089 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.326174 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.326194 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.326219 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.326237 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:55Z","lastTransitionTime":"2025-12-09T12:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.428205 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.428263 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.428275 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.428291 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.428303 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:55Z","lastTransitionTime":"2025-12-09T12:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.531694 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.531784 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.531814 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.531846 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.531871 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:55Z","lastTransitionTime":"2025-12-09T12:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.634072 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.634126 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.634141 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.634163 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.634178 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:55Z","lastTransitionTime":"2025-12-09T12:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.736976 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.737025 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.737038 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.737056 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.737069 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:55Z","lastTransitionTime":"2025-12-09T12:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.812010 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.812085 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:06:55 crc kubenswrapper[4970]: E1209 12:06:55.812499 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:06:55 crc kubenswrapper[4970]: E1209 12:06:55.812537 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.839522 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.839569 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.839581 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.839596 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.839609 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:55Z","lastTransitionTime":"2025-12-09T12:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.943294 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.943373 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.943396 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.943423 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:55 crc kubenswrapper[4970]: I1209 12:06:55.943441 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:55Z","lastTransitionTime":"2025-12-09T12:06:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.038390 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" event={"ID":"515fe67d-b7b7-4edb-a2b0-6f8794e8d802","Type":"ContainerStarted","Data":"844d974f239593a49555f9f1bff833cd8940572863a01512ae98559085741c11"} Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.038879 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.042120 4970 generic.go:334] "Generic (PLEG): container finished" podID="b0916312-9225-4366-81e2-f4a34d1ae9fe" containerID="b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3" exitCode=0 Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.042150 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" event={"ID":"b0916312-9225-4366-81e2-f4a34d1ae9fe","Type":"ContainerDied","Data":"b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3"} Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.048600 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.048646 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.048657 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.048673 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.048688 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:56Z","lastTransitionTime":"2025-12-09T12:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.058065 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:56Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.073379 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.074834 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c99a0733d681a2255ed6650323cdb8e0759b636118f61f08f3d841d34f81ccb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174956f8c9198d5c992709045948df1e5c0f4e6b56ea0d0d37f76c078736ab4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:56Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.091112 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:56Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.106996 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54bd37ab38c155cbc907218582095ffb8fc9abcfdf2601d66a44d9fd1d2743b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:56Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.118560 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4gt4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c6101aab0571a1f01e38ec24c74f99ad040aca13e5c95e91b28d3fc30b11ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4gt4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:56Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.133418 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0916312-9225-4366-81e2-f4a34d1ae9fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nqntn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:56Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.147037 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:56Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.152209 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.152243 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.152277 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.152293 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.152304 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:56Z","lastTransitionTime":"2025-12-09T12:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.161370 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:56Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.176404 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sgdqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81da4c74-d93e-4a7a-848a-c3539268368b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5c5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sgdqg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:56Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.189169 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a283668d-a884-4d62-95e2-1f0ae672f61c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f99b54b4c68dbff89836aae041da85d469d5df28de2d23df843262af2120f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e1cad556f69de186f04494224fdf6a124eac70716bd1838622a640d8045baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-rtdjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:56Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.208788 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fcdfd55-d327-447e-8f4c-7a84f6ddf65d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe038bd99b053264014f6bdcf6e552df19aeaf0eefb1cad95087668edc14e197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968ef513b108b5e6469f2b11033f3c2af6276809b44f88da6dd279ef5c917566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83e4a0f5cdc0dbab781645e3af71029f89d6e26f881957510b76c4023d49d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0e892d8f1c25b644b4ec6779f1c1fbf9605d352003feb593ddf48f0b2fe7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b06c1523a6e3fc383e18aabc13379866e891da9e5f59ccd8098a530e206c780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441e
cd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:56Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.221548 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:56Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.231627 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8vkz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4da5831e87291bc84ec0b0a2a081543c263a74b18279efd9cb22677db3da456b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fwqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8vkz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:56Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.245102 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:56Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.254060 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.254098 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.254107 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.254121 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.254133 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:56Z","lastTransitionTime":"2025-12-09T12:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.263665 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":
true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844d974f239593a49555f9f1bff833cd8940572863a01512ae98559085741c11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountP
ath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sxdvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:56Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.274688 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a283668d-a884-4d62-95e2-1f0ae672f61c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f99b54b4c68dbff89836aae041da85d469d5df28de2d23df843262af2120f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e1cad556f69de186f04494224fdf6a124eac70716bd1838622a640d8045baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rtdjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:56Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.292276 4970 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fcdfd55-d327-447e-8f4c-7a84f6ddf65d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe038bd99b053264014f6bdcf6e552df19aeaf0eefb1cad95087668edc14e197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968ef513b108b5e6469f2b11033f3c2af6276809b44f88da6dd279ef5c917566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83e4a0f5cdc0dbab781645e3af71029f89d6e26f881957510b76c4023d49d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0e892d8f1c25b644b4ec6779f1c1fbf9605d352003feb593ddf48f0b2fe7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b06c1523a6e3fc383e18aabc13379866e891da9e5f59ccd8098a530e206c780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}},{\\\"containerID\\\"
:\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:56Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.304752 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:56Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.315341 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sgdqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81da4c74-d93e-4a7a-848a-c3539268368b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5c5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sgdqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:56Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.329345 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:56Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.351449 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844d974f239593a49555f9f1bff833cd8940572863a01512ae98559085741c11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\"
,\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sxdvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:56Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.356234 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.356266 4970 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.356274 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.356286 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.356294 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:56Z","lastTransitionTime":"2025-12-09T12:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.364006 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8vkz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4da5831e87291bc84ec0b0a2a081543c263a74b18279efd9cb22677db3da456b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fwqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8vkz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or 
is not yet valid: current time 2025-12-09T12:06:56Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.379118 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c99a0733d681a2255ed6650323cdb8e0759b636118f61f08f3d841d34f81ccb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174956f8c9198d5c992709045948df1e5c0f4e6b56ea0d0d37f76c078736ab4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:56Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.391871 4970 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:56Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.409156 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:56Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.418145 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4gt4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c6101aab0571a1f01e38ec24c74f99ad040aca13e5c95e91b28d3fc30b11ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4gt4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:56Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.431117 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0916312-9225-4366-81e2-f4a34d1ae9fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d05
6a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nqntn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:56Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.442904 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:56Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.455806 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:56Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.458482 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.458531 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.458543 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.458563 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.458575 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:56Z","lastTransitionTime":"2025-12-09T12:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.469544 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54bd37ab38c155cbc907218582095ffb8fc9abcfdf2601d66a44d9fd1d2743b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:56Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.561304 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.561362 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.561378 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.561400 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.561413 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:56Z","lastTransitionTime":"2025-12-09T12:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.664365 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.665257 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.665284 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.665302 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.665313 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:56Z","lastTransitionTime":"2025-12-09T12:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.767348 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.767384 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.767394 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.767409 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.767420 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:56Z","lastTransitionTime":"2025-12-09T12:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.811897 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:56 crc kubenswrapper[4970]: E1209 12:06:56.812039 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.869461 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.869499 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.869509 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.869522 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.869531 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:56Z","lastTransitionTime":"2025-12-09T12:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.971901 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.971954 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.971965 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.971982 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:56 crc kubenswrapper[4970]: I1209 12:06:56.971996 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:56Z","lastTransitionTime":"2025-12-09T12:06:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.048794 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" event={"ID":"b0916312-9225-4366-81e2-f4a34d1ae9fe","Type":"ContainerStarted","Data":"33ed06e2c3b4daaa783d0c108537142a214040f67de82c772e322b2d94768595"} Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.049086 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.049385 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.064906 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:57Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.071972 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.073700 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.073742 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.073755 4970 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.073772 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.073784 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:57Z","lastTransitionTime":"2025-12-09T12:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.078731 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c99a0733d681a2255ed6650323cdb8e0759b636118f61f08f3d841d34f81ccb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174956f8c9198d5c992709045948df1e5c0f4e6b56ea0d0d37f76c078736ab4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:57Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.090593 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:57Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.102745 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54bd37ab38c155cbc907218582095ffb8fc9abcfdf2601d66a44d9fd1d2743b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:57Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.112703 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4gt4z" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c6101aab0571a1f01e38ec24c74f99ad040aca13e5c95e91b28d3fc30b11ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4gt4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:57Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.127120 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0916312-9225-4366-81e2-f4a34d1ae9fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33ed06e2c3b4daaa783d0c108537142a214040f67de82c772e322b2d94768595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nqntn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:57Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.139067 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:57Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.152074 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:57Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.165753 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sgdqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81da4c74-d93e-4a7a-848a-c3539268368b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5c5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sgdqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:57Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.176387 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.176425 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.176436 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.176452 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.176463 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:57Z","lastTransitionTime":"2025-12-09T12:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.178001 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a283668d-a884-4d62-95e2-1f0ae672f61c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f99b54b4c68dbff89836aae041da85d469d5df28de2d23df843262af2120f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e1cad556f69de186f04494224fdf6a124eac70716bd1838622a640d8045baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rtdjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:57Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.197111 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fcdfd55-d327-447e-8f4c-7a84f6ddf65d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe038bd99b053264014f6bdcf6e552df19aeaf0eefb1cad95087668edc14e197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968ef513b108b5e6469f2b11033f3c2af6276809b44f88da6dd279ef5c917566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83e4a0f5cdc0dbab781645e3af71029f89d6e26f881957510b76c4023d49d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0e892d8f1c25b644b4ec6779f1c1fbf9605d352003feb593ddf48f0b2fe7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b06c1523a6e3fc383e18aabc13379866e891da9e5f59ccd8098a530e206c780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:57Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.210624 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:57Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.219591 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8vkz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4da5831e87291bc84ec0b0a2a081543c263a74b18279efd9cb22677db3da456b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fwqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8vkz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:57Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.231658 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:57Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.251317 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-sock
et\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844d974f239593a49555f9f1bff833cd8940572863a01512ae98559085741c11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sxdvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:57Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.265955 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:57Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.278740 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.278767 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.278780 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.278795 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.278805 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:57Z","lastTransitionTime":"2025-12-09T12:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.287569 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844d974f239593a49555f9f1bff833cd8940572863a01512ae98559085741c11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sxdvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:57Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.300198 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8vkz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4da5831e87291bc84ec0b0a2a081543c263a74b18279efd9cb22677db3da456b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fwqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8vkz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:57Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.318586 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:57Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.330552 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c99a0733d681a2255ed6650323cdb8e0759b636118f61f08f3d841d34f81ccb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174956f8c9198d5c992709045948df1e5c0f4e6b56ea0d0d37f76c078736ab4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:57Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.346291 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:57Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.358220 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:57Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.370597 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:57Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.381442 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.381488 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.381501 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.381523 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.381537 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:57Z","lastTransitionTime":"2025-12-09T12:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.385194 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54bd37ab38c155cbc907218582095ffb8fc9abcfdf2601d66a44d9fd1d2743b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:57Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.397792 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4gt4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c6101aab0571a1f01e38ec24c74f99ad040aca13e5c95e91b28d3fc30b11ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4gt4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:57Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.414801 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0916312-9225-4366-81e2-f4a34d1ae9fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33ed06e2c3b4daaa783d0c108537142a214040f67de82c772e322b2d94768595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nqntn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:57Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.434325 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fcdfd55-d327-447e-8f4c-7a84f6ddf65d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe038bd99b053264014f6bdcf6e552df19aeaf0eefb1cad95087668edc14e197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968ef513b108b5e6469f2b11033f3c2af6276809b44f88da6dd279ef5c917566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83e4a0f5cdc0dbab781645e3af71029f89d6e26f881957510b76c4023d49d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0e892d8f1c25b644b4ec6779f1c1fbf9605d3
52003feb593ddf48f0b2fe7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b06c1523a6e3fc383e18aabc13379866e891da9e5f59ccd8098a530e206c780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:57Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.445221 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:57Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.456481 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sgdqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81da4c74-d93e-4a7a-848a-c3539268368b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5c5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sgdqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:57Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.466329 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a283668d-a884-4d62-95e2-1f0ae672f61c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f99b54b4c68dbff89836aae041da85d469d5df28de2d23df843262af2120f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://
44e1cad556f69de186f04494224fdf6a124eac70716bd1838622a640d8045baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rtdjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:57Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.483954 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.484078 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.484091 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.484108 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.484119 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:57Z","lastTransitionTime":"2025-12-09T12:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.586165 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.586270 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.586283 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.586299 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.586309 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:57Z","lastTransitionTime":"2025-12-09T12:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.689444 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.689493 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.689504 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.689519 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.689529 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:57Z","lastTransitionTime":"2025-12-09T12:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.791727 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.791762 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.791773 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.791791 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.791804 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:57Z","lastTransitionTime":"2025-12-09T12:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.812427 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:06:57 crc kubenswrapper[4970]: E1209 12:06:57.812561 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.812598 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:06:57 crc kubenswrapper[4970]: E1209 12:06:57.812767 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.825158 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a283668d-a884-4d62-95e2-1f0ae672f61c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f99b54b4c68dbff89836aae041da85d469d5df28de2d23df843262af2120f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\
\\"containerID\\\":\\\"cri-o://44e1cad556f69de186f04494224fdf6a124eac70716bd1838622a640d8045baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rtdjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:57Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.854493 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fcdfd55-d327-447e-8f4c-7a84f6ddf65d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe038bd99b053264014f6bdcf6e552df19aeaf0eefb1cad95087668edc14e197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"contai
nerID\\\":\\\"cri-o://968ef513b108b5e6469f2b11033f3c2af6276809b44f88da6dd279ef5c917566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83e4a0f5cdc0dbab781645e3af71029f89d6e26f881957510b76c4023d49d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0e892d8f1c25b644b4ec6779f1c1fbf9605d352003feb593ddf48f0b2fe7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b06c1523a6e3fc383e18aabc13379866e891da9e5f59ccd8098a530e206c780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf
82d68cebc92fec2560fe66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:57Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.866497 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:57Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.881696 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sgdqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81da4c74-d93e-4a7a-848a-c3539268368b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5c5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sgdqg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:57Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.894487 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.894533 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.894548 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.894567 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.894579 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:57Z","lastTransitionTime":"2025-12-09T12:06:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.907690 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:57Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.931834 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb006
4e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844d974f239593a49555f9f1bff833cd8940572863a01512ae98559085741c11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\
\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sxdvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:57Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.943588 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8vkz2" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4da5831e87291bc84ec0b0a2a081543c263a74b18279efd9cb22677db3da456b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fwqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8vkz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:57Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.956359 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c99a0733d681a2255ed6650323cdb8e0759b636118f61f08f3d841d34f81ccb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174956f8c9198d5c992709045948df1e5c0f4e6b56ea0d0d37f76c078736ab4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:57Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.973009 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:57Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.986795 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:57Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:57 crc kubenswrapper[4970]: I1209 12:06:57.997315 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4gt4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c6101aab0571a1f01e38ec24c74f99ad040aca13e5c95e91b28d3fc30b11ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4gt4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:57Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.002829 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.002885 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.002899 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.002942 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.002956 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:58Z","lastTransitionTime":"2025-12-09T12:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.017320 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0916312-9225-4366-81e2-f4a34d1ae9fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33ed06e2c3b4daaa783d0c108537142a214040f67de82c772e322b2d94768595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afce63cb
59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nqntn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:58Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.030126 4970 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:58Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.045152 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:58Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.053646 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sxdvn_515fe67d-b7b7-4edb-a2b0-6f8794e8d802/ovnkube-controller/0.log" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.056468 4970 generic.go:334] "Generic (PLEG): container finished" podID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerID="844d974f239593a49555f9f1bff833cd8940572863a01512ae98559085741c11" exitCode=1 Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.056516 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" event={"ID":"515fe67d-b7b7-4edb-a2b0-6f8794e8d802","Type":"ContainerDied","Data":"844d974f239593a49555f9f1bff833cd8940572863a01512ae98559085741c11"} Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.057490 4970 scope.go:117] "RemoveContainer" containerID="844d974f239593a49555f9f1bff833cd8940572863a01512ae98559085741c11" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.061031 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54bd37ab38c155cbc907218582095ffb8fc9abcfdf2601d66a44d9fd1d2743b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:58Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.077645 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:58Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.091306 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:58Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.105648 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.105694 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.105706 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.105724 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.105736 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:58Z","lastTransitionTime":"2025-12-09T12:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.109369 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54bd37ab38c155cbc907218582095ffb8fc9abcfdf2601d66a44d9fd1d2743b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:58Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.120961 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4gt4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c6101aab0571a1f01e38ec24c74f99ad040aca13e5c95e91b28d3fc30b11ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4gt4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:58Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.136742 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0916312-9225-4366-81e2-f4a34d1ae9fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33ed06e2c3b4daaa783d0c108537142a214040f67de82c772e322b2d94768595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nqntn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:58Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.171360 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fcdfd55-d327-447e-8f4c-7a84f6ddf65d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe038bd99b053264014f6bdcf6e552df19aeaf0eefb1cad95087668edc14e197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968ef513b108b5e6469f2b11033f3c2af6276809b44f88da6dd279ef5c917566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83e4a0f5cdc0dbab781645e3af71029f89d6e26f881957510b76c4023d49d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0e892d8f1c25b644b4ec6779f1c1fbf9605d3
52003feb593ddf48f0b2fe7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b06c1523a6e3fc383e18aabc13379866e891da9e5f59ccd8098a530e206c780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:58Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.194980 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:58Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.207991 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.208027 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.208039 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.208056 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.208069 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:58Z","lastTransitionTime":"2025-12-09T12:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.213401 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sgdqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81da4c74-d93e-4a7a-848a-c3539268368b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5c5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sgdqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:58Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.227345 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a283668d-a884-4d62-95e2-1f0ae672f61c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f99b54b4c68dbff89836aae041da85d469d5df28de2d23df843262af2120f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e1cad556f69de186f04494224fdf6a124eac70716bd1838622a640d8045baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rtdjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:58Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.246182 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:58Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.269030 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://844d974f239593a49555f9f1bff833cd8940572863a01512ae98559085741c11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844d974f239593a49555f9f1bff833cd8940572863a01512ae98559085741c11\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"message\\\":\\\"g reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 12:06:57.383733 6234 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 12:06:57.383763 6234 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1209 12:06:57.383775 6234 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1209 12:06:57.383780 6234 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1209 12:06:57.383790 6234 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 12:06:57.383800 6234 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1209 12:06:57.383826 6234 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 12:06:57.383842 6234 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1209 12:06:57.383833 6234 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 12:06:57.383866 6234 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 12:06:57.383828 6234 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1209 12:06:57.383902 6234 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1209 12:06:57.383906 6234 factory.go:656] Stopping watch factory\\\\nI1209 12:06:57.383912 6234 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 12:06:57.383918 6234 ovnkube.go:599] Stopped 
ovnkube\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sxdvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:58Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.281432 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8vkz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4da5831e87291bc84ec0b0a2a081543c263a74b18279efd9cb22677db3da456b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fwqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\
\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8vkz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:58Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.295424 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered 
and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:58Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.307399 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c99a0733d681a2255ed6650323cdb8e0759b636118f61f08f3d841d34f81ccb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174956f8c9198d5c992709045948df1e5c0f4e6b56ea0d0d37f76c078736ab4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:58Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.309875 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.309908 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.309920 4970 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.309936 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.309945 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:58Z","lastTransitionTime":"2025-12-09T12:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.320364 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:58Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.413191 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.413214 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.413221 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.413233 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.413241 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:58Z","lastTransitionTime":"2025-12-09T12:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.515209 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.515270 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.515281 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.515300 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.515310 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:58Z","lastTransitionTime":"2025-12-09T12:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.617170 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.617198 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.617208 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.617221 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.617229 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:58Z","lastTransitionTime":"2025-12-09T12:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.719543 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.719591 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.719602 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.719618 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.719627 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:58Z","lastTransitionTime":"2025-12-09T12:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.811893 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:06:58 crc kubenswrapper[4970]: E1209 12:06:58.812013 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.822363 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.822393 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.822402 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.822449 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.822458 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:58Z","lastTransitionTime":"2025-12-09T12:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.924465 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.924513 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.924529 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.924554 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:58 crc kubenswrapper[4970]: I1209 12:06:58.924569 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:58Z","lastTransitionTime":"2025-12-09T12:06:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.026801 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.026841 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.026853 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.026871 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.026884 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:59Z","lastTransitionTime":"2025-12-09T12:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.063545 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sxdvn_515fe67d-b7b7-4edb-a2b0-6f8794e8d802/ovnkube-controller/0.log" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.070116 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" event={"ID":"515fe67d-b7b7-4edb-a2b0-6f8794e8d802","Type":"ContainerStarted","Data":"27dadb0f8a4d6abf2de751664f0a2a23841bb086dad0695553e5252f8f3aee17"} Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.070463 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.087666 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:59Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.101531 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c99a0733d681a2255ed6650323cdb8e0759b636118f61f08f3d841d34f81ccb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174956f8c9198d5c992709045948df1e5c0f4e6b56ea0d0d37f76c078736ab4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:59Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.117229 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:59Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.129277 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.129339 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.129361 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.129422 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.129446 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:59Z","lastTransitionTime":"2025-12-09T12:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.134983 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:59Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.148012 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:59Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.161441 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54bd37ab38c155cbc907218582095ffb8fc9abcfdf2601d66a44d9fd1d2743b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:59Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.172006 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.174924 4970 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-4gt4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c6101aab0571a1f01e38ec24c74f99ad040aca13e5c95e91b28d3fc30b11ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4gt4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:59Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.192462 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0916312-9225-4366-81e2-f4a34d1ae9fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33ed06e2c3b4daaa783d0c108537142a214040f67de82c772e322b2d94768595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nqntn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:59Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.214842 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fcdfd55-d327-447e-8f4c-7a84f6ddf65d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe038bd99b053264014f6bdcf6e552df19aeaf0eefb1cad95087668edc14e197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968ef513b108b5e6469f2b11033f3c2af6276809b44f88da6dd279ef5c917566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83e4a0f5cdc0dbab781645e3af71029f89d6e26f881957510b76c4023d49d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0e892d8f1c25b644b4ec6779f1c1fbf9605d3
52003feb593ddf48f0b2fe7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b06c1523a6e3fc383e18aabc13379866e891da9e5f59ccd8098a530e206c780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:59Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.231655 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:59Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.232758 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.232904 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.233017 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.233154 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.233289 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:59Z","lastTransitionTime":"2025-12-09T12:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.246830 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sgdqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81da4c74-d93e-4a7a-848a-c3539268368b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5c5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sgdqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:59Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.258109 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a283668d-a884-4d62-95e2-1f0ae672f61c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f99b54b4c68dbff89836aae041da85d469d5df28de2d23df843262af2120f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e1cad556f69de186f04494224fdf6a124eac70716bd1838622a640d8045baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rtdjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:59Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.276143 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:59Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.305340 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27dadb0f8a4d6abf2de751664f0a2a23841bb086dad0695553e5252f8f3aee17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844d974f239593a49555f9f1bff833cd8940572863a01512ae98559085741c11\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"message\\\":\\\"g reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 12:06:57.383733 6234 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 12:06:57.383763 6234 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1209 12:06:57.383775 6234 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1209 12:06:57.383780 6234 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1209 12:06:57.383790 6234 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 12:06:57.383800 6234 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1209 12:06:57.383826 6234 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 12:06:57.383842 6234 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1209 12:06:57.383833 6234 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 12:06:57.383866 6234 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 12:06:57.383828 6234 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1209 12:06:57.383902 6234 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1209 12:06:57.383906 6234 factory.go:656] Stopping watch factory\\\\nI1209 12:06:57.383912 6234 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 12:06:57.383918 6234 ovnkube.go:599] Stopped 
ovnkube\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sxdvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:59Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.319131 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8vkz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4da5831e87291bc84ec0b0a2a081543c263a74b18279efd9cb22677db3da456b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fwqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8vkz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:59Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.336461 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.336498 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.336508 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.336526 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.336536 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:59Z","lastTransitionTime":"2025-12-09T12:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.339745 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fcdfd55-d327-447e-8f4c-7a84f6ddf65d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe038bd99b053264014f6bdcf6e552df19aeaf0eefb1cad95087668edc14e197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968ef513b108b5e6469f2b11033f3c2af6276809b44f88da6dd279ef5c917566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83e4a0f5cdc0dbab781645e3af71029f89d6e26f881957510b76c4023d49d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0e892d8f1c25b644b4ec6779f1c1fbf9605d352003feb593ddf48f0b2fe7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b06c1523a6e3fc383e18aabc13379866e891da9e5f59ccd8098a530e206c780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:59Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.351630 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:59Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.364735 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sgdqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81da4c74-d93e-4a7a-848a-c3539268368b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5c5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sgdqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:59Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.376220 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a283668d-a884-4d62-95e2-1f0ae672f61c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f99b54b4c68dbff89836aae041da85d469d5df28de2d23df843262af2120f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://
44e1cad556f69de186f04494224fdf6a124eac70716bd1838622a640d8045baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rtdjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:59Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.392178 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:59Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.421226 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb006
4e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27dadb0f8a4d6abf2de751664f0a2a23841bb086dad0695553e5252f8f3aee17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844d974f239593a49555f9f1bff833cd8940572863a01512ae98559085741c11\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"message\\\":\\\"g reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 12:06:57.383733 6234 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 12:06:57.383763 6234 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1209 12:06:57.383775 6234 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1209 12:06:57.383780 6234 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1209 12:06:57.383790 6234 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 12:06:57.383800 6234 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1209 12:06:57.383826 6234 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 12:06:57.383842 6234 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1209 12:06:57.383833 6234 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 12:06:57.383866 6234 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 12:06:57.383828 6234 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1209 12:06:57.383902 6234 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1209 12:06:57.383906 6234 factory.go:656] Stopping watch factory\\\\nI1209 12:06:57.383912 6234 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 12:06:57.383918 6234 ovnkube.go:599] Stopped 
ovnkube\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sxdvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:59Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.433320 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8vkz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4da5831e87291bc84ec0b0a2a081543c263a74b18279efd9cb22677db3da456b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fwqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8vkz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:59Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.438197 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.438240 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.438271 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.438288 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.438299 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:59Z","lastTransitionTime":"2025-12-09T12:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.447155 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:59Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.464924 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c99a0733d681a2255ed6650323cdb8e0759b636118f61f08f3d841d34f81ccb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174956f8c9198d5c992709045948df1e5c0f4e6b56ea0d0d37f76c078736ab4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:59Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.477115 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:59Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.491646 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:59Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.507487 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:59Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.519323 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54bd37ab38c155cbc907218582095ffb8fc9abcfdf2601d66a44d9fd1d2743b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:59Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.529461 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4gt4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c6101aab0571a1f01e38ec24c74f99ad040aca13e5c95e91b28d3fc30b11ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4gt4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:59Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.543038 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.543093 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.543105 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.543124 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.543139 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:59Z","lastTransitionTime":"2025-12-09T12:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.545119 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0916312-9225-4366-81e2-f4a34d1ae9fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33ed06e2c3b4daaa783d0c108537142a214040f67de82c772e322b2d94768595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afce63cb
59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nqntn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:59Z is after 2025-08-24T17:21:41Z" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.646780 4970 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.646844 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.646868 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.646896 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.646917 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:59Z","lastTransitionTime":"2025-12-09T12:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.749389 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.750349 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.750507 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.750679 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.750803 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:59Z","lastTransitionTime":"2025-12-09T12:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.812306 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.812428 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:06:59 crc kubenswrapper[4970]: E1209 12:06:59.812595 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:06:59 crc kubenswrapper[4970]: E1209 12:06:59.812795 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.853214 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.853267 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.853278 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.853292 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.853302 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:59Z","lastTransitionTime":"2025-12-09T12:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.955088 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.955118 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.955127 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.955140 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:06:59 crc kubenswrapper[4970]: I1209 12:06:59.955148 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:06:59Z","lastTransitionTime":"2025-12-09T12:06:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.057414 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.057477 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.057487 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.057501 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.057509 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:00Z","lastTransitionTime":"2025-12-09T12:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.075575 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sxdvn_515fe67d-b7b7-4edb-a2b0-6f8794e8d802/ovnkube-controller/1.log" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.076342 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sxdvn_515fe67d-b7b7-4edb-a2b0-6f8794e8d802/ovnkube-controller/0.log" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.080131 4970 generic.go:334] "Generic (PLEG): container finished" podID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerID="27dadb0f8a4d6abf2de751664f0a2a23841bb086dad0695553e5252f8f3aee17" exitCode=1 Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.080166 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" event={"ID":"515fe67d-b7b7-4edb-a2b0-6f8794e8d802","Type":"ContainerDied","Data":"27dadb0f8a4d6abf2de751664f0a2a23841bb086dad0695553e5252f8f3aee17"} Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.080208 4970 scope.go:117] "RemoveContainer" containerID="844d974f239593a49555f9f1bff833cd8940572863a01512ae98559085741c11" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.082107 4970 scope.go:117] "RemoveContainer" containerID="27dadb0f8a4d6abf2de751664f0a2a23841bb086dad0695553e5252f8f3aee17" Dec 09 12:07:00 crc kubenswrapper[4970]: E1209 12:07:00.082421 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-sxdvn_openshift-ovn-kubernetes(515fe67d-b7b7-4edb-a2b0-6f8794e8d802)\"" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.101312 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27dadb0f8a4d6abf2de751664f0a2a23841bb086dad0695553e5252f8f3aee17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844d974f239593a49555f9f1bff833cd8940572863a01512ae98559085741c11\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"message\\\":\\\"g reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 12:06:57.383733 6234 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 12:06:57.383763 6234 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1209 12:06:57.383775 6234 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1209 12:06:57.383780 6234 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1209 12:06:57.383790 6234 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 12:06:57.383800 6234 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1209 12:06:57.383826 6234 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 12:06:57.383842 6234 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1209 12:06:57.383833 6234 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 12:06:57.383866 6234 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 12:06:57.383828 6234 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1209 12:06:57.383902 6234 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1209 12:06:57.383906 6234 factory.go:656] Stopping watch factory\\\\nI1209 12:06:57.383912 6234 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 12:06:57.383918 6234 ovnkube.go:599] Stopped ovnkube\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dadb0f8a4d6abf2de751664f0a2a23841bb086dad0695553e5252f8f3aee17\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"message\\\":\\\"occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:58Z is after 2025-08-24T17:21:41Z]\\\\nI1209 12:06:58.869760 6359 services_controller.go:434] Service openshift-authentication-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-authentication-operator 7e1891d1-ffe3-468a-92bb-d123cc31fa0b 4081 0 2025-02-23 05:12:19 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:authentication-operator] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0006c2647 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: authentication-operato\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sxdvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:00Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.112289 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8vkz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4da5831e87291bc84ec0b0a2a081543c263a74b18279efd9cb22677db3da456b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fwqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8vkz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:00Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.126703 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:00Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.139773 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:00Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.152367 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c99a0733d681a2255ed6650323cdb8e0759b636118f61f08f3d841d34f81ccb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174956f8c9198d5c992709045948df1e5c0f4e6b56ea0d0d37f76c078736ab4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:00Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.159818 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.159849 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.159857 4970 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.159869 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.159878 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:00Z","lastTransitionTime":"2025-12-09T12:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.170978 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:00Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.186534 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:00Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.200861 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54bd37ab38c155cbc907218582095ffb8fc9abcfdf2601d66a44d9fd1d2743b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:00Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.211646 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4gt4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c6101aab0571a1f01e38ec24c74f99ad040aca13e5c95e91b28d3fc30b11ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4gt4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:00Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.227613 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0916312-9225-4366-81e2-f4a34d1ae9fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33ed06e2c3b4daaa783d0c108537142a214040f67de82c772e322b2d94768595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nqntn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:00Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.241208 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:00Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.254052 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:00Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.261958 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.262024 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.262047 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.262070 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.262087 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:00Z","lastTransitionTime":"2025-12-09T12:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.267460 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sgdqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81da4c74-d93e-4a7a-848a-c3539268368b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5c5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sgdqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:00Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.280212 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a283668d-a884-4d62-95e2-1f0ae672f61c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f99b54b4c68dbff89836aae041da85d469d5df28de2d23df843262af2120f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e1cad556f69de186f04494224fdf6a124eac70716bd1838622a640d8045baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rtdjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:00Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.299559 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fcdfd55-d327-447e-8f4c-7a84f6ddf65d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe038bd99b053264014f6bdcf6e552df19aeaf0eefb1cad95087668edc14e197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968ef513b108b5e6469f2b11033f3c2af6276809b44f88da6dd279ef5c917566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83e4a0f5cdc0dbab781645e3af71029f89d
6e26f881957510b76c4023d49d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0e892d8f1c25b644b4ec6779f1c1fbf9605d352003feb593ddf48f0b2fe7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b06c1523a6e3fc383e18aabc13379866e891da9e5f59ccd8098a530e206c780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:00Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.364394 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.364456 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.364472 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.364495 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.364512 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:00Z","lastTransitionTime":"2025-12-09T12:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.466924 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.466987 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.467004 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.467026 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.467045 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:00Z","lastTransitionTime":"2025-12-09T12:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.569050 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.569086 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.569094 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.569108 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.569116 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:00Z","lastTransitionTime":"2025-12-09T12:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.671441 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.671480 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.671490 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.671505 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.671517 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:00Z","lastTransitionTime":"2025-12-09T12:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.715526 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.715586 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.715602 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.715624 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.715640 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:00Z","lastTransitionTime":"2025-12-09T12:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:00 crc kubenswrapper[4970]: E1209 12:07:00.732739 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6d3f717-4497-45c3-b697-e6823c6fdf80\\\",\\\"systemUUID\\\":\\\"44b8bf7c-aaf1-4d6f-acd6-22019bebcd7e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:00Z is after 
2025-08-24T17:21:41Z" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.736720 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.736782 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.736800 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.736824 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.736843 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:00Z","lastTransitionTime":"2025-12-09T12:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:00 crc kubenswrapper[4970]: E1209 12:07:00.753511 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6d3f717-4497-45c3-b697-e6823c6fdf80\\\",\\\"systemUUID\\\":\\\"44b8bf7c-aaf1-4d6f-acd6-22019bebcd7e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:00Z is after 
2025-08-24T17:21:41Z" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.758140 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.758192 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.758203 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.758219 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.758230 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:00Z","lastTransitionTime":"2025-12-09T12:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:00 crc kubenswrapper[4970]: E1209 12:07:00.775360 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6d3f717-4497-45c3-b697-e6823c6fdf80\\\",\\\"systemUUID\\\":\\\"44b8bf7c-aaf1-4d6f-acd6-22019bebcd7e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:00Z is after 
2025-08-24T17:21:41Z" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.778971 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.779020 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.779037 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.779059 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.779076 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:00Z","lastTransitionTime":"2025-12-09T12:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.790920 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lmfjv"] Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.791811 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lmfjv" Dec 09 12:07:00 crc kubenswrapper[4970]: E1209 12:07:00.791805 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6d3f717-4497-45c3-b697-e6823c6fdf80\\\",\\\"systemUUID\\\":\\\"44b8bf7c-aaf1-4d6f-acd6-22019bebcd7e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:00Z is after 
2025-08-24T17:21:41Z" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.793492 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.794177 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.795909 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.795948 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.795960 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.795976 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.795986 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:00Z","lastTransitionTime":"2025-12-09T12:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:00 crc kubenswrapper[4970]: E1209 12:07:00.809620 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6d3f717-4497-45c3-b697-e6823c6fdf80\\\",\\\"systemUUID\\\":\\\"44b8bf7c-aaf1-4d6f-acd6-22019bebcd7e\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:00Z is after 
2025-08-24T17:21:41Z" Dec 09 12:07:00 crc kubenswrapper[4970]: E1209 12:07:00.809778 4970 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.811565 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.811600 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.811609 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.811628 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.811639 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:00Z","lastTransitionTime":"2025-12-09T12:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.811649 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:07:00 crc kubenswrapper[4970]: E1209 12:07:00.811866 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.812316 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:00Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.837623 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27dadb0f8a4d6abf2de751664f0a2a23841bb086dad0695553e5252f8f3aee17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://844d974f239593a49555f9f1bff833cd8940572863a01512ae98559085741c11\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"message\\\":\\\"g reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 12:06:57.383733 6234 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 12:06:57.383763 6234 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1209 12:06:57.383775 6234 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1209 12:06:57.383780 6234 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1209 12:06:57.383790 6234 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 12:06:57.383800 6234 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1209 12:06:57.383826 6234 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 12:06:57.383842 6234 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1209 12:06:57.383833 6234 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 12:06:57.383866 6234 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 12:06:57.383828 6234 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1209 12:06:57.383902 6234 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1209 12:06:57.383906 6234 factory.go:656] Stopping watch factory\\\\nI1209 12:06:57.383912 6234 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 12:06:57.383918 6234 ovnkube.go:599] Stopped ovnkube\\\\nI12\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dadb0f8a4d6abf2de751664f0a2a23841bb086dad0695553e5252f8f3aee17\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"message\\\":\\\"occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:58Z is after 2025-08-24T17:21:41Z]\\\\nI1209 12:06:58.869760 6359 services_controller.go:434] Service openshift-authentication-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-authentication-operator 7e1891d1-ffe3-468a-92bb-d123cc31fa0b 4081 0 2025-02-23 05:12:19 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:authentication-operator] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:serving-cert 
service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0006c2647 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: authentication-operato\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"
},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sxdvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:00Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.850799 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8vkz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4da5831e87291bc84ec0b0a2a081543c263a74b18279efd9cb22677db3da456b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fwqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8vkz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:00Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.867934 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:00Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.883139 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c99a0733d681a2255ed6650323cdb8e0759b636118f61f08f3d841d34f81ccb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174956f8c9198d5c992709045948df1e5c0f4e6b56ea0d0d37f76c078736ab4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:00Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.895464 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:00Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.897807 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c82e1dc9-1c79-493a-96aa-8f710bb2d6c7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-lmfjv\" (UID: \"c82e1dc9-1c79-493a-96aa-8f710bb2d6c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lmfjv" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.897943 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c82e1dc9-1c79-493a-96aa-8f710bb2d6c7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-lmfjv\" (UID: \"c82e1dc9-1c79-493a-96aa-8f710bb2d6c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lmfjv" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.898038 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c82e1dc9-1c79-493a-96aa-8f710bb2d6c7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-lmfjv\" (UID: 
\"c82e1dc9-1c79-493a-96aa-8f710bb2d6c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lmfjv" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.898118 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klc5v\" (UniqueName: \"kubernetes.io/projected/c82e1dc9-1c79-493a-96aa-8f710bb2d6c7-kube-api-access-klc5v\") pod \"ovnkube-control-plane-749d76644c-lmfjv\" (UID: \"c82e1dc9-1c79-493a-96aa-8f710bb2d6c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lmfjv" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.906984 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d
17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:00Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.913895 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.913931 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.913943 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.913959 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.913971 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:00Z","lastTransitionTime":"2025-12-09T12:07:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.920762 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:00Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.934075 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54bd37ab38c155cbc907218582095ffb8fc9abcfdf2601d66a44d9fd1d2743b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:00Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.947036 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4gt4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c6101aab0571a1f01e38ec24c74f99ad040aca13e5c95e91b28d3fc30b11ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4gt4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:00Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.963790 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0916312-9225-4366-81e2-f4a34d1ae9fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33ed06e2c3b4daaa783d0c108537142a214040f67de82c772e322b2d94768595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nqntn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:00Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.974269 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lmfjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82e1dc9-1c79-493a-96aa-8f710bb2d6c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klc5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klc5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:07:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lmfjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:00Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.999409 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/c82e1dc9-1c79-493a-96aa-8f710bb2d6c7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-lmfjv\" (UID: \"c82e1dc9-1c79-493a-96aa-8f710bb2d6c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lmfjv" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.999467 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c82e1dc9-1c79-493a-96aa-8f710bb2d6c7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-lmfjv\" (UID: \"c82e1dc9-1c79-493a-96aa-8f710bb2d6c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lmfjv" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.999519 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c82e1dc9-1c79-493a-96aa-8f710bb2d6c7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-lmfjv\" (UID: \"c82e1dc9-1c79-493a-96aa-8f710bb2d6c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lmfjv" Dec 09 12:07:00 crc kubenswrapper[4970]: I1209 12:07:00.999599 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klc5v\" (UniqueName: \"kubernetes.io/projected/c82e1dc9-1c79-493a-96aa-8f710bb2d6c7-kube-api-access-klc5v\") pod \"ovnkube-control-plane-749d76644c-lmfjv\" (UID: \"c82e1dc9-1c79-493a-96aa-8f710bb2d6c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lmfjv" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.000301 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c82e1dc9-1c79-493a-96aa-8f710bb2d6c7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-lmfjv\" (UID: \"c82e1dc9-1c79-493a-96aa-8f710bb2d6c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lmfjv" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.000633 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c82e1dc9-1c79-493a-96aa-8f710bb2d6c7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-lmfjv\" (UID: \"c82e1dc9-1c79-493a-96aa-8f710bb2d6c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lmfjv" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.003807 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fcdfd55-d327-447e-8f4c-7a84f6ddf65d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe038bd99b053264014f6bdcf6e552df19aeaf0eefb1cad95087668edc14e197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968ef513b108b5e6469f2b11033f3c2af6276809b44f88da6dd279ef5c917566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83e4a0f5cdc0dbab781645e3af71029f89d6e26f881957510b76c4023d49d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0e892d8f1c25b644b4ec6779f1c1fbf9605d3
52003feb593ddf48f0b2fe7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b06c1523a6e3fc383e18aabc13379866e891da9e5f59ccd8098a530e206c780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.005235 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c82e1dc9-1c79-493a-96aa-8f710bb2d6c7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-lmfjv\" (UID: \"c82e1dc9-1c79-493a-96aa-8f710bb2d6c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lmfjv" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.016905 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.016943 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.016952 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.016965 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.016974 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:01Z","lastTransitionTime":"2025-12-09T12:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.019147 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klc5v\" (UniqueName: \"kubernetes.io/projected/c82e1dc9-1c79-493a-96aa-8f710bb2d6c7-kube-api-access-klc5v\") pod \"ovnkube-control-plane-749d76644c-lmfjv\" (UID: \"c82e1dc9-1c79-493a-96aa-8f710bb2d6c7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lmfjv" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.023068 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.036417 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sgdqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81da4c74-d93e-4a7a-848a-c3539268368b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5c5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sgdqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.047884 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a283668d-a884-4d62-95e2-1f0ae672f61c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f99b54b4c68dbff89836aae041da85d469d5df28de2d23df843262af2120f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://
44e1cad556f69de186f04494224fdf6a124eac70716bd1838622a640d8045baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rtdjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.084802 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sxdvn_515fe67d-b7b7-4edb-a2b0-6f8794e8d802/ovnkube-controller/1.log" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.088848 4970 scope.go:117] "RemoveContainer" containerID="27dadb0f8a4d6abf2de751664f0a2a23841bb086dad0695553e5252f8f3aee17" Dec 09 12:07:01 crc kubenswrapper[4970]: E1209 12:07:01.088990 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-sxdvn_openshift-ovn-kubernetes(515fe67d-b7b7-4edb-a2b0-6f8794e8d802)\"" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.108668 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fcdfd55-d327-447e-8f4c-7a84f6ddf65d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe038bd99b053264014f6bdcf6e552df19aeaf0eefb1cad95087668edc14e197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968ef513b108b5e6469f2b11033f3c2af6276809b44f88da6dd279ef5c917566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83e4a0f5cdc0dbab781645e3af71029f89d6e26f881957510b76c4023d49d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0e892d8f1c25b644b4ec6779f1c1fbf9605d3
52003feb593ddf48f0b2fe7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b06c1523a6e3fc383e18aabc13379866e891da9e5f59ccd8098a530e206c780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.113285 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lmfjv" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.119785 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.119830 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.119846 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.119868 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.119884 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:01Z","lastTransitionTime":"2025-12-09T12:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.130768 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.150196 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sgdqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81da4c74-d93e-4a7a-848a-c3539268368b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5c5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sgdqg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.168421 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a283668d-a884-4d62-95e2-1f0ae672f61c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f99b54b4c68dbff89836aae041da85d469d5df28de2d23df843262af2120f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e1cad556f69de186f04494224fdf6a124eac70716bd1838622a640d8045baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-rtdjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.189312 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.209172 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27dadb0f8a4d6abf2de751664f0a2a23841bb086dad0695553e5252f8f3aee17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dadb0f8a4d6abf2de751664f0a2a23841bb086dad0695553e5252f8f3aee17\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"message\\\":\\\"occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:58Z is after 2025-08-24T17:21:41Z]\\\\nI1209 12:06:58.869760 6359 services_controller.go:434] Service openshift-authentication-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-authentication-operator 7e1891d1-ffe3-468a-92bb-d123cc31fa0b 4081 0 2025-02-23 05:12:19 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:authentication-operator] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0006c2647 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: authentication-operato\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sxdvn_openshift-ovn-kubernetes(515fe67d-b7b7-4edb-a2b0-6f8794e8d802)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sxdvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.222030 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8vkz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4da5831e87291bc84ec0b0a2a081543c263a74b18279efd9cb22677db3da456b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fwqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8vkz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.223615 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.223680 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.223698 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.223724 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.223741 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:01Z","lastTransitionTime":"2025-12-09T12:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.238812 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.253584 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c99a0733d681a2255ed6650323cdb8e0759b636118f61f08f3d841d34f81ccb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174956f8c9198d5c992709045948df1e5c0f4e6b56ea0d0d37f76c078736ab4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.270885 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.284767 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.299205 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.313797 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54bd37ab38c155cbc907218582095ffb8fc9abcfdf2601d66a44d9fd1d2743b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.327240 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4gt4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c6101aab0571a1f01e38ec24c74f99ad040aca13e5c95e91b28d3fc30b11ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4gt4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.327694 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.327727 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.327736 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.327753 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.327764 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:01Z","lastTransitionTime":"2025-12-09T12:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.344401 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0916312-9225-4366-81e2-f4a34d1ae9fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33ed06e2c3b4daaa783d0c108537142a214040f67de82c772e322b2d94768595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afce63cb
59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nqntn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.357375 4970 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lmfjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82e1dc9-1c79-493a-96aa-8f710bb2d6c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klc5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klc5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:07:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lmfjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 
12:07:01.429889 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.430140 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.430206 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.430320 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.430406 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:01Z","lastTransitionTime":"2025-12-09T12:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.532685 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.532727 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.532740 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.532758 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.532769 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:01Z","lastTransitionTime":"2025-12-09T12:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.635884 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.635944 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.635970 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.636004 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.636028 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:01Z","lastTransitionTime":"2025-12-09T12:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.741068 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.741368 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.741449 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.741525 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.741595 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:01Z","lastTransitionTime":"2025-12-09T12:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.812541 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:07:01 crc kubenswrapper[4970]: E1209 12:07:01.812722 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.812859 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:07:01 crc kubenswrapper[4970]: E1209 12:07:01.813039 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.843998 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.844050 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.844066 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.844090 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.844106 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:01Z","lastTransitionTime":"2025-12-09T12:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.912100 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-cp4b2"] Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.912902 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:07:01 crc kubenswrapper[4970]: E1209 12:07:01.913019 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.932368 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.946835 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.946867 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.946879 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.946898 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.946909 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:01Z","lastTransitionTime":"2025-12-09T12:07:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.950030 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.965642 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54bd37ab38c155cbc907218582095ffb8fc9abcfdf2601d66a44d9fd1d2743b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:01 crc kubenswrapper[4970]: I1209 12:07:01.982999 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4gt4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c6101aab0571a1f01e38ec24c74f99ad040aca13e5c95e91b28d3fc30b11ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4gt4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.001388 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0916312-9225-4366-81e2-f4a34d1ae9fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33ed06e2c3b4daaa783d0c108537142a214040f67de82c772e322b2d94768595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nqntn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:01Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.010847 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e10a28a-08f5-4679-9d90-532322e9e87f-metrics-certs\") pod \"network-metrics-daemon-cp4b2\" (UID: \"5e10a28a-08f5-4679-9d90-532322e9e87f\") " 
pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.010896 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9jkk\" (UniqueName: \"kubernetes.io/projected/5e10a28a-08f5-4679-9d90-532322e9e87f-kube-api-access-f9jkk\") pod \"network-metrics-daemon-cp4b2\" (UID: \"5e10a28a-08f5-4679-9d90-532322e9e87f\") " pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.014368 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lmfjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82e1dc9-1c79-493a-96aa-8f710bb2d6c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klc5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klc5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:07:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lmfjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:02Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.034173 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fcdfd55-d327-447e-8f4c-7a84f6ddf65d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe038bd99b053264014f6bdcf6e552df19aeaf0eefb1cad95087668edc14e197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968ef513b108b5e6469f2b11033f3c2af6276809b44f88da6dd279ef5c917566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83e4a0f5cdc0dbab781645e3af71029f89d6e26f881957510b76c4023d49d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0e892d8f1c25b644b4ec6779f1c1fbf9605d3
52003feb593ddf48f0b2fe7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b06c1523a6e3fc383e18aabc13379866e891da9e5f59ccd8098a530e206c780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:02Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.046213 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:02Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.049712 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.049745 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.049753 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.049768 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.049779 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:02Z","lastTransitionTime":"2025-12-09T12:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.061607 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sgdqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81da4c74-d93e-4a7a-848a-c3539268368b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5c5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sgdqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:02Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.074560 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a283668d-a884-4d62-95e2-1f0ae672f61c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f99b54b4c68dbff89836aae041da85d469d5df28de2d23df843262af2120f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e1cad556f69de186f04494224fdf6a124eac70716bd1838622a640d8045baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rtdjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:02Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.086810 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cp4b2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e10a28a-08f5-4679-9d90-532322e9e87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9jkk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9jkk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cp4b2\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:02Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.094079 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lmfjv" event={"ID":"c82e1dc9-1c79-493a-96aa-8f710bb2d6c7","Type":"ContainerStarted","Data":"f8459610be11ce0478fe59940d0c9034cb80cab890311db1cb81e2408812decb"} Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.094138 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lmfjv" event={"ID":"c82e1dc9-1c79-493a-96aa-8f710bb2d6c7","Type":"ContainerStarted","Data":"92521dbdc6c17a0bce192b9c0f1fed0d09d94bf4e0fe88633be1801c2a7c04e0"} Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.094160 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lmfjv" event={"ID":"c82e1dc9-1c79-493a-96aa-8f710bb2d6c7","Type":"ContainerStarted","Data":"db70b0b9b95ae01ea9da44ec652c7bd4b26d7d2cb9d322749e7ecf3155e7a45a"} Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.106058 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-09T12:07:02Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.111642 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e10a28a-08f5-4679-9d90-532322e9e87f-metrics-certs\") pod \"network-metrics-daemon-cp4b2\" (UID: \"5e10a28a-08f5-4679-9d90-532322e9e87f\") " pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.111731 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9jkk\" (UniqueName: \"kubernetes.io/projected/5e10a28a-08f5-4679-9d90-532322e9e87f-kube-api-access-f9jkk\") pod \"network-metrics-daemon-cp4b2\" (UID: \"5e10a28a-08f5-4679-9d90-532322e9e87f\") " pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:07:02 crc kubenswrapper[4970]: E1209 12:07:02.112080 4970 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 12:07:02 crc kubenswrapper[4970]: E1209 12:07:02.112131 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e10a28a-08f5-4679-9d90-532322e9e87f-metrics-certs podName:5e10a28a-08f5-4679-9d90-532322e9e87f nodeName:}" failed. No retries permitted until 2025-12-09 12:07:02.612114763 +0000 UTC m=+35.172595814 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5e10a28a-08f5-4679-9d90-532322e9e87f-metrics-certs") pod "network-metrics-daemon-cp4b2" (UID: "5e10a28a-08f5-4679-9d90-532322e9e87f") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.128681 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27dadb0f8a4d6abf2de751664f0a2a23841bb086
dad0695553e5252f8f3aee17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dadb0f8a4d6abf2de751664f0a2a23841bb086dad0695553e5252f8f3aee17\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"message\\\":\\\"occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:58Z is after 2025-08-24T17:21:41Z]\\\\nI1209 12:06:58.869760 6359 services_controller.go:434] Service openshift-authentication-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-authentication-operator 7e1891d1-ffe3-468a-92bb-d123cc31fa0b 4081 0 2025-02-23 05:12:19 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:authentication-operator] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0006c2647 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: authentication-operato\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sxdvn_openshift-ovn-kubernetes(515fe67d-b7b7-4edb-a2b0-6f8794e8d802)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sxdvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:02Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.131839 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9jkk\" (UniqueName: \"kubernetes.io/projected/5e10a28a-08f5-4679-9d90-532322e9e87f-kube-api-access-f9jkk\") pod \"network-metrics-daemon-cp4b2\" (UID: \"5e10a28a-08f5-4679-9d90-532322e9e87f\") " pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.144868 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8vkz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4da5831e87291bc84ec0b0a2a081543c263a74b18279efd9cb22677db3da456b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fwqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8vkz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:02Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.152709 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.152747 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.152760 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.152779 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.152789 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:02Z","lastTransitionTime":"2025-12-09T12:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.163274 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:02Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.177206 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c99a0733d681a2255ed6650323cdb8e0759b636118f61f08f3d841d34f81ccb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174956f8c9198d5c992709045948df1e5c0f4e6b56ea0d0d37f76c078736ab4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:02Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.188313 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:02Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.204022 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:02Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.218798 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c99a0733d681a2255ed6650323cdb8e0759b636118f61f08f3d841d34f81ccb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174956f8c9198d5c992709045948df1e5c0f4e6b56ea0d0d37f76c078736ab4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:02Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.231267 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:02Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.243823 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:02Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.255216 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.255291 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.255306 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.255328 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.255340 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:02Z","lastTransitionTime":"2025-12-09T12:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.258055 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:02Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.269545 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54bd37ab38c155cbc907218582095ffb8fc9abcfdf2601d66a44d9fd1d2743b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:02Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.279916 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4gt4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c6101aab0571a1f01e38ec24c74f99ad040aca13e5c95e91b28d3fc30b11ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4gt4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:02Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.296428 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0916312-9225-4366-81e2-f4a34d1ae9fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33ed06e2c3b4daaa783d0c108537142a214040f67de82c772e322b2d94768595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nqntn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:02Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.307899 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lmfjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82e1dc9-1c79-493a-96aa-8f710bb2d6c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92521dbdc6c17a0bce192b9c0f1fed0d09d94bf4e0fe88633be1801c2a7c04e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klc5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8459610be11ce0478fe59940d0c9034cb80cab890311db1cb81e2408812decb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klc5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:07:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lmfjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:02Z is after 2025-08-24T17:21:41Z" Dec 09 
12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.334467 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fcdfd55-d327-447e-8f4c-7a84f6ddf65d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe038bd99b053264014f6bdcf6e552df19aeaf0eefb1cad95087668edc14e197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968ef513b108b5e6469f2b11033f3c2af6276809b44f88da6dd279ef5c917566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83e4a0f5cdc0dbab781645e3af71029f89d6e26f881957510b76c4023d49d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0e892d8f1c25b644b4ec6779f1c1fbf9605d352003feb593ddf48f0b2fe7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b06c1523a6e3fc383e18aabc13379866e891da9e5f59ccd8098a530e206c780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:02Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.352287 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:02Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.356865 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.356914 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.356929 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.356947 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.356959 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:02Z","lastTransitionTime":"2025-12-09T12:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.374042 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sgdqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81da4c74-d93e-4a7a-848a-c3539268368b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5c5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sgdqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:02Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.387771 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a283668d-a884-4d62-95e2-1f0ae672f61c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f99b54b4c68dbff89836aae041da85d469d5df28de2d23df843262af2120f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e1cad556f69de186f04494224fdf6a124eac70716bd1838622a640d8045baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rtdjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:02Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.405580 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cp4b2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e10a28a-08f5-4679-9d90-532322e9e87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9jkk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9jkk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cp4b2\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:02Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.421160 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:02Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.436959 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27dadb0f8a4d6abf2de751664f0a2a23841bb086dad0695553e5252f8f3aee17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dadb0f8a4d6abf2de751664f0a2a23841bb086dad0695553e5252f8f3aee17\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"message\\\":\\\"occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:58Z is after 2025-08-24T17:21:41Z]\\\\nI1209 12:06:58.869760 6359 services_controller.go:434] Service openshift-authentication-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-authentication-operator 7e1891d1-ffe3-468a-92bb-d123cc31fa0b 4081 0 2025-02-23 05:12:19 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:authentication-operator] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0006c2647 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: authentication-operato\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sxdvn_openshift-ovn-kubernetes(515fe67d-b7b7-4edb-a2b0-6f8794e8d802)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sxdvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:02Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.448066 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8vkz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4da5831e87291bc84ec0b0a2a081543c263a74b18279efd9cb22677db3da456b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fwqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8vkz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:02Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.459600 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.459632 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.459641 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.459654 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.459664 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:02Z","lastTransitionTime":"2025-12-09T12:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.561646 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.561694 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.561711 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.561733 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.561749 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:02Z","lastTransitionTime":"2025-12-09T12:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.617160 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.617337 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e10a28a-08f5-4679-9d90-532322e9e87f-metrics-certs\") pod \"network-metrics-daemon-cp4b2\" (UID: \"5e10a28a-08f5-4679-9d90-532322e9e87f\") " pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:07:02 crc kubenswrapper[4970]: E1209 12:07:02.617463 4970 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 12:07:02 crc kubenswrapper[4970]: E1209 12:07:02.617520 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e10a28a-08f5-4679-9d90-532322e9e87f-metrics-certs podName:5e10a28a-08f5-4679-9d90-532322e9e87f nodeName:}" failed. No retries permitted until 2025-12-09 12:07:03.617502452 +0000 UTC m=+36.177983503 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5e10a28a-08f5-4679-9d90-532322e9e87f-metrics-certs") pod "network-metrics-daemon-cp4b2" (UID: "5e10a28a-08f5-4679-9d90-532322e9e87f") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 12:07:02 crc kubenswrapper[4970]: E1209 12:07:02.617745 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:18.617719287 +0000 UTC m=+51.178200338 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.664744 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.664984 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.665058 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.665136 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.665206 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:02Z","lastTransitionTime":"2025-12-09T12:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.718343 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.718406 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.718450 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.718497 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:07:02 crc kubenswrapper[4970]: E1209 12:07:02.718640 4970 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 12:07:02 crc kubenswrapper[4970]: E1209 12:07:02.718707 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 12:07:18.718686574 +0000 UTC m=+51.279167655 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 12:07:02 crc kubenswrapper[4970]: E1209 12:07:02.719179 4970 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 12:07:02 crc kubenswrapper[4970]: E1209 12:07:02.719448 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 12:07:18.719421183 +0000 UTC m=+51.279902284 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 12:07:02 crc kubenswrapper[4970]: E1209 12:07:02.719212 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 12:07:02 crc kubenswrapper[4970]: E1209 12:07:02.719710 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 12:07:02 crc kubenswrapper[4970]: E1209 12:07:02.719839 4970 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:07:02 crc kubenswrapper[4970]: E1209 12:07:02.719998 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 12:07:18.719983038 +0000 UTC m=+51.280464159 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:07:02 crc kubenswrapper[4970]: E1209 12:07:02.719282 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 12:07:02 crc kubenswrapper[4970]: E1209 12:07:02.720225 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 12:07:02 crc kubenswrapper[4970]: E1209 12:07:02.720381 4970 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:07:02 crc kubenswrapper[4970]: E1209 12:07:02.720534 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 12:07:18.720518382 +0000 UTC m=+51.280999503 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.768071 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.768115 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.768128 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.768144 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.768190 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:02Z","lastTransitionTime":"2025-12-09T12:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.811787 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:07:02 crc kubenswrapper[4970]: E1209 12:07:02.811915 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.870763 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.870792 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.870800 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.870813 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.870823 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:02Z","lastTransitionTime":"2025-12-09T12:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.974107 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.974162 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.974179 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.974203 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:02 crc kubenswrapper[4970]: I1209 12:07:02.974223 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:02Z","lastTransitionTime":"2025-12-09T12:07:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.077018 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.077109 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.077127 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.077157 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.077180 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:03Z","lastTransitionTime":"2025-12-09T12:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.180606 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.180661 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.180680 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.180704 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.180721 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:03Z","lastTransitionTime":"2025-12-09T12:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.283907 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.283957 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.283968 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.283987 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.283998 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:03Z","lastTransitionTime":"2025-12-09T12:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.387590 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.387649 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.387671 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.387816 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.387842 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:03Z","lastTransitionTime":"2025-12-09T12:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.490444 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.490504 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.490524 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.490551 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.490569 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:03Z","lastTransitionTime":"2025-12-09T12:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.593194 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.593285 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.593303 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.593325 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.593341 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:03Z","lastTransitionTime":"2025-12-09T12:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.628075 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e10a28a-08f5-4679-9d90-532322e9e87f-metrics-certs\") pod \"network-metrics-daemon-cp4b2\" (UID: \"5e10a28a-08f5-4679-9d90-532322e9e87f\") " pod="openshift-multus/network-metrics-daemon-cp4b2"
Dec 09 12:07:03 crc kubenswrapper[4970]: E1209 12:07:03.628441 4970 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 09 12:07:03 crc kubenswrapper[4970]: E1209 12:07:03.628561 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e10a28a-08f5-4679-9d90-532322e9e87f-metrics-certs podName:5e10a28a-08f5-4679-9d90-532322e9e87f nodeName:}" failed. No retries permitted until 2025-12-09 12:07:05.628526495 +0000 UTC m=+38.189007586 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5e10a28a-08f5-4679-9d90-532322e9e87f-metrics-certs") pod "network-metrics-daemon-cp4b2" (UID: "5e10a28a-08f5-4679-9d90-532322e9e87f") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.695988 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.696049 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.696072 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.696104 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.696134 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:03Z","lastTransitionTime":"2025-12-09T12:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
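The retry delays in these nestedpendingoperations.go entries double per failure: 2s here, 4s at 12:07:05 for the same volume, and 16s earlier for a kube-api-access volume. A standalone sketch of that doubling pattern (the upper bound is an assumption for illustration, not taken from this log):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	const (
    		initial  = 2 * time.Second // first durationBeforeRetry printed in the log
    		factor   = 2               // delay doubles on each consecutive failure
    		maxDelay = 2 * time.Minute // assumed cap; the log never shows one
    	)
    	delay := initial
    	for attempt := 1; attempt <= 6; attempt++ {
    		fmt.Printf("attempt %d: no retries permitted for %v\n", attempt, delay)
    		if next := delay * factor; next < maxDelay {
    			delay = next
    		} else {
    			delay = maxDelay
    		}
    	}
    }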
Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.798976 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.799031 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.799049 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.799074 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.799092 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:03Z","lastTransitionTime":"2025-12-09T12:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.811917 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.812027 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.812040 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2"
Dec 09 12:07:03 crc kubenswrapper[4970]: E1209 12:07:03.812109 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 12:07:03 crc kubenswrapper[4970]: E1209 12:07:03.812186 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 12:07:03 crc kubenswrapper[4970]: E1209 12:07:03.812355 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f"
Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.901537 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.901576 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.901587 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.901606 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:03 crc kubenswrapper[4970]: I1209 12:07:03.901617 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:03Z","lastTransitionTime":"2025-12-09T12:07:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.004540 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.004583 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.004595 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.004611 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.004623 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:04Z","lastTransitionTime":"2025-12-09T12:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.106496 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.106546 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.106563 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.106586 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.106603 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:04Z","lastTransitionTime":"2025-12-09T12:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.208954 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.209005 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.209021 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.209043 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.209057 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:04Z","lastTransitionTime":"2025-12-09T12:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.311151 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.311197 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.311206 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.311219 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.311227 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:04Z","lastTransitionTime":"2025-12-09T12:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.413640 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.413708 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.413722 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.413741 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.413752 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:04Z","lastTransitionTime":"2025-12-09T12:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.516559 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.516614 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.516627 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.516643 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.516654 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:04Z","lastTransitionTime":"2025-12-09T12:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.618926 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.619313 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.619496 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.619681 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.619848 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:04Z","lastTransitionTime":"2025-12-09T12:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.722865 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.723312 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.723542 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.723740 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.723915 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:04Z","lastTransitionTime":"2025-12-09T12:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.812355 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 12:07:04 crc kubenswrapper[4970]: E1209 12:07:04.812863 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.825966 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.826004 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.826020 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.826042 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.826059 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:04Z","lastTransitionTime":"2025-12-09T12:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.929174 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.929217 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.929228 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.929267 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:04 crc kubenswrapper[4970]: I1209 12:07:04.929280 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:04Z","lastTransitionTime":"2025-12-09T12:07:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.032482 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.032875 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.033033 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.033295 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.033461 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:05Z","lastTransitionTime":"2025-12-09T12:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.136491 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.136835 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.137027 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.137161 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.137380 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:05Z","lastTransitionTime":"2025-12-09T12:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.240367 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.240421 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.240440 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.240462 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.240479 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:05Z","lastTransitionTime":"2025-12-09T12:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.343515 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.343570 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.343585 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.343605 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.343626 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:05Z","lastTransitionTime":"2025-12-09T12:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.446025 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.446077 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.446088 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.446105 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.446116 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:05Z","lastTransitionTime":"2025-12-09T12:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.548744 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.548788 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.548798 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.548812 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.548821 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:05Z","lastTransitionTime":"2025-12-09T12:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.648035 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e10a28a-08f5-4679-9d90-532322e9e87f-metrics-certs\") pod \"network-metrics-daemon-cp4b2\" (UID: \"5e10a28a-08f5-4679-9d90-532322e9e87f\") " pod="openshift-multus/network-metrics-daemon-cp4b2"
Dec 09 12:07:05 crc kubenswrapper[4970]: E1209 12:07:05.648288 4970 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 09 12:07:05 crc kubenswrapper[4970]: E1209 12:07:05.648379 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e10a28a-08f5-4679-9d90-532322e9e87f-metrics-certs podName:5e10a28a-08f5-4679-9d90-532322e9e87f nodeName:}" failed. No retries permitted until 2025-12-09 12:07:09.648354854 +0000 UTC m=+42.208835935 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5e10a28a-08f5-4679-9d90-532322e9e87f-metrics-certs") pod "network-metrics-daemon-cp4b2" (UID: "5e10a28a-08f5-4679-9d90-532322e9e87f") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.650966 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.651023 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.651045 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.651072 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.651094 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:05Z","lastTransitionTime":"2025-12-09T12:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.754535 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.754615 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.754630 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.754653 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.754670 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:05Z","lastTransitionTime":"2025-12-09T12:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.812467 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.812579 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.812579 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2"
Dec 09 12:07:05 crc kubenswrapper[4970]: E1209 12:07:05.812672 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 12:07:05 crc kubenswrapper[4970]: E1209 12:07:05.812903 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f"
Dec 09 12:07:05 crc kubenswrapper[4970]: E1209 12:07:05.813091 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.857224 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.857294 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.857305 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.857321 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.857333 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:05Z","lastTransitionTime":"2025-12-09T12:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
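The pod_workers.go entries above show the same gate each second: pods that need the cluster network are skipped with "Error syncing pod" until NetworkReady turns true. A simplified sketch of that gating (types and signatures invented for illustration; this is not kubelet's own API):

    package main

    import (
    	"errors"
    	"fmt"
    )

    type pod struct {
    	name        string
    	hostNetwork bool
    }

    var errNetworkNotReady = errors.New("network is not ready: NetworkReady=false")

    // syncPod skips pods that need the pod network while the runtime
    // reports the network as not ready; host-network pods would proceed.
    func syncPod(p pod, networkReady bool) error {
    	if !networkReady && !p.hostNetwork {
    		return fmt.Errorf("error syncing pod %q, skipping: %w", p.name, errNetworkNotReady)
    	}
    	// ... create sandbox, start containers ...
    	return nil
    }

    func main() {
    	pods := []pod{
    		{name: "openshift-multus/network-metrics-daemon-cp4b2"},
    		{name: "example-host-network-daemon", hostNetwork: true}, // hypothetical pod
    	}
    	for _, p := range pods {
    		if err := syncPod(p, false); err != nil {
    			fmt.Println(err)
    			continue
    		}
    		fmt.Println("synced:", p.name)
    	}
    }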
Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.959926 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.959995 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.960009 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.960032 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:05 crc kubenswrapper[4970]: I1209 12:07:05.960050 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:05Z","lastTransitionTime":"2025-12-09T12:07:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.063305 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.063363 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.063372 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.063384 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.063391 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:06Z","lastTransitionTime":"2025-12-09T12:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.165812 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.165873 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.165891 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.165914 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.165931 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:06Z","lastTransitionTime":"2025-12-09T12:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.269032 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.269124 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.269142 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.269164 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.269180 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:06Z","lastTransitionTime":"2025-12-09T12:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.371765 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.371816 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.371825 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.371839 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.371849 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:06Z","lastTransitionTime":"2025-12-09T12:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.474506 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.474554 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.474566 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.474582 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.474596 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:06Z","lastTransitionTime":"2025-12-09T12:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.577771 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.577828 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.577840 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.577856 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.577868 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:06Z","lastTransitionTime":"2025-12-09T12:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.680513 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.680548 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.680558 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.680573 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.680584 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:06Z","lastTransitionTime":"2025-12-09T12:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.783372 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.783424 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.783436 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.783457 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.783467 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:06Z","lastTransitionTime":"2025-12-09T12:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.811626 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 12:07:06 crc kubenswrapper[4970]: E1209 12:07:06.811777 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.886667 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.886704 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.886712 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.886726 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.886735 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:06Z","lastTransitionTime":"2025-12-09T12:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.989860 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.990094 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.990648 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.990713 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:06 crc kubenswrapper[4970]: I1209 12:07:06.990743 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:06Z","lastTransitionTime":"2025-12-09T12:07:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
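The condition={...} payload printed by setters.go:603 in each heartbeat is a JSON-encoded node condition. A small sketch that reproduces the same shape with a hand-rolled struct (field names copied from the log output; this is not the upstream k8s.io/api type):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"time"
    )

    type nodeCondition struct {
    	Type               string `json:"type"`
    	Status             string `json:"status"`
    	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
    	LastTransitionTime string `json:"lastTransitionTime"`
    	Reason             string `json:"reason"`
    	Message            string `json:"message"`
    }

    func main() {
    	now := time.Now().UTC().Format(time.RFC3339)
    	c := nodeCondition{
    		Type:               "Ready",
    		Status:             "False",
    		LastHeartbeatTime:  now,
    		LastTransitionTime: now,
    		Reason:             "KubeletNotReady",
    		Message:            "container runtime network not ready: NetworkReady=false ...",
    	}
    	out, err := json.Marshal(c)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(string(out)) // same key order and shape as the log's condition={...}
    }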
Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.094674 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.094735 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.094757 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.094785 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.094808 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:07Z","lastTransitionTime":"2025-12-09T12:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.198162 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.198204 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.198215 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.198230 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.198242 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:07Z","lastTransitionTime":"2025-12-09T12:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.300949 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.301015 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.301027 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.301044 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.301059 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:07Z","lastTransitionTime":"2025-12-09T12:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.403828 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.403909 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.403934 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.403962 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.403982 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:07Z","lastTransitionTime":"2025-12-09T12:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.506804 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.506854 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.506865 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.506881 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.506894 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:07Z","lastTransitionTime":"2025-12-09T12:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.609189 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.609288 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.609307 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.609326 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.609338 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:07Z","lastTransitionTime":"2025-12-09T12:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.712488 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.712545 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.712553 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.712569 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.712579 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:07Z","lastTransitionTime":"2025-12-09T12:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.812035 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.812117 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:07:07 crc kubenswrapper[4970]: E1209 12:07:07.812206 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:07:07 crc kubenswrapper[4970]: E1209 12:07:07.812338 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f" Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.812389 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:07:07 crc kubenswrapper[4970]: E1209 12:07:07.812455 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.815280 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.815307 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.815324 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.815346 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.815360 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:07Z","lastTransitionTime":"2025-12-09T12:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.838612 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:07Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.859452 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:07Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.880745 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c99a0733d681a2255ed6650323cdb8e0759b636118f61f08f3d841d34f81ccb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174956f8c9198d5c992709045948df1e5c0f4e6b56ea0d0d37f76c078736ab4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:07Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.902023 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0916312-9225-4366-81e2-f4a34d1ae9fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33ed06e2c3b4daaa783d0c108537142a214040f67de82c772e322b2d94768595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nqntn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:07Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.917096 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.917148 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:07 crc 
kubenswrapper[4970]: I1209 12:07:07.917166 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.917188 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.917202 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:07Z","lastTransitionTime":"2025-12-09T12:07:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.918521 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lmfjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82e1dc9-1c79-493a-96aa-8f710bb2d6c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92521dbdc6c17a0bce192b9c0f1fed0d09d94bf4e0fe88633be1801c2a7c04e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klc5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8459610be11ce0478fe59940d0c9034cb80cab890311db1cb81e2408812decb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:0
7:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klc5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:07:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lmfjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:07Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.934376 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:07Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.948837 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:07Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.964391 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54bd37ab38c155cbc907218582095ffb8fc9abcfdf2601d66a44d9fd1d2743b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:07Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:07 crc kubenswrapper[4970]: I1209 12:07:07.982465 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4gt4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c6101aab0571a1f01e38ec24c74f99ad040aca13e5c95e91b28d3fc30b11ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4gt4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:07Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.000595 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cp4b2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e10a28a-08f5-4679-9d90-532322e9e87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9jkk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9jkk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cp4b2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:07Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.020377 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.020424 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.020436 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.020453 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.020464 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:08Z","lastTransitionTime":"2025-12-09T12:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.037933 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fcdfd55-d327-447e-8f4c-7a84f6ddf65d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe038bd99b053264014f6bdcf6e552df19aeaf0eefb1cad95087668edc14e197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968ef513b108b5e6469f2b11033f3c2af6276809b44f88da6dd279ef5c917566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\
\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83e4a0f5cdc0dbab781645e3af71029f89d6e26f881957510b76c4023d49d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0e892d8f1c25b644b4ec6779f1c1fbf9605d352003feb593ddf48f0b2fe7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b06c1523a6e3fc383e18aabc13379866e891da9e5f59ccd8098a530e206c780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":
\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:08Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.054783 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:08Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.072514 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sgdqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81da4c74-d93e-4a7a-848a-c3539268368b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5c5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sgdqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:08Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.084530 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a283668d-a884-4d62-95e2-1f0ae672f61c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f99b54b4c68dbff89836aae041da85d469d5df28de2d23df843262af2120f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e1cad556f69de186f04494224fdf6a124eac70716bd1838622a640d8045baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rtdjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:08Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.099235 4970 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:08Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.122773 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.123061 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.123148 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.123292 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.123408 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:08Z","lastTransitionTime":"2025-12-09T12:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.125903 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27dadb0f8a4d6abf2de751664f0a2a23841bb086dad0695553e5252f8f3aee17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dadb0f8a4d6abf2de751664f0a2a23841bb086dad0695553e5252f8f3aee17\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"message\\\":\\\"occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:58Z is after 2025-08-24T17:21:41Z]\\\\nI1209 12:06:58.869760 6359 services_controller.go:434] Service openshift-authentication-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-authentication-operator 7e1891d1-ffe3-468a-92bb-d123cc31fa0b 4081 0 2025-02-23 05:12:19 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:authentication-operator] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0006c2647 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: authentication-operato\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=ovnkube-controller pod=ovnkube-node-sxdvn_openshift-ovn-kubernetes(515fe67d-b7b7-4edb-a2b0-6f8794e8d802)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sxdvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:08Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.139944 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8vkz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4da5831e87291bc84ec0b0a2a081543c263a74b18279efd9cb22677db3da456b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fwqp\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8vkz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:08Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.226858 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.226901 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.226912 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.226929 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.226943 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:08Z","lastTransitionTime":"2025-12-09T12:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.329856 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.329922 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.329934 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.329952 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.329966 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:08Z","lastTransitionTime":"2025-12-09T12:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.432546 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.432624 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.432638 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.432656 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.432668 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:08Z","lastTransitionTime":"2025-12-09T12:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.536083 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.536136 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.536153 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.536176 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.536194 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:08Z","lastTransitionTime":"2025-12-09T12:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.638678 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.638799 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.638830 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.638863 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.638885 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:08Z","lastTransitionTime":"2025-12-09T12:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.741940 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.741977 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.741985 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.742003 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.742014 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:08Z","lastTransitionTime":"2025-12-09T12:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.812540 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:07:08 crc kubenswrapper[4970]: E1209 12:07:08.812764 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.844584 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.844627 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.844638 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.844656 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.844668 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:08Z","lastTransitionTime":"2025-12-09T12:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.947726 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.947765 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.947776 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.947790 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:08 crc kubenswrapper[4970]: I1209 12:07:08.947818 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:08Z","lastTransitionTime":"2025-12-09T12:07:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.050753 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.050797 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.050806 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.050820 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.050829 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:09Z","lastTransitionTime":"2025-12-09T12:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.154377 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.154461 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.154498 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.154527 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.154550 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:09Z","lastTransitionTime":"2025-12-09T12:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.257651 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.257709 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.257722 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.257742 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.257754 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:09Z","lastTransitionTime":"2025-12-09T12:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.360757 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.360829 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.360856 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.360885 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.360909 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:09Z","lastTransitionTime":"2025-12-09T12:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.464489 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.464576 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.464603 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.464630 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.464649 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:09Z","lastTransitionTime":"2025-12-09T12:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.568011 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.568086 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.568116 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.568147 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.568168 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:09Z","lastTransitionTime":"2025-12-09T12:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.671459 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.671542 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.671565 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.671595 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.671619 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:09Z","lastTransitionTime":"2025-12-09T12:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.690442 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e10a28a-08f5-4679-9d90-532322e9e87f-metrics-certs\") pod \"network-metrics-daemon-cp4b2\" (UID: \"5e10a28a-08f5-4679-9d90-532322e9e87f\") " pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:07:09 crc kubenswrapper[4970]: E1209 12:07:09.690639 4970 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 12:07:09 crc kubenswrapper[4970]: E1209 12:07:09.690731 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e10a28a-08f5-4679-9d90-532322e9e87f-metrics-certs podName:5e10a28a-08f5-4679-9d90-532322e9e87f nodeName:}" failed. No retries permitted until 2025-12-09 12:07:17.690708332 +0000 UTC m=+50.251189413 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5e10a28a-08f5-4679-9d90-532322e9e87f-metrics-certs") pod "network-metrics-daemon-cp4b2" (UID: "5e10a28a-08f5-4679-9d90-532322e9e87f") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.775116 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.775197 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.775220 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.775292 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.775322 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:09Z","lastTransitionTime":"2025-12-09T12:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.811868 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.811923 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:07:09 crc kubenswrapper[4970]: E1209 12:07:09.812035 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:07:09 crc kubenswrapper[4970]: E1209 12:07:09.812154 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.812212 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:07:09 crc kubenswrapper[4970]: E1209 12:07:09.812375 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.878640 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.878673 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.878681 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.878695 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.878703 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:09Z","lastTransitionTime":"2025-12-09T12:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.981310 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.981348 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.981358 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.981380 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:09 crc kubenswrapper[4970]: I1209 12:07:09.981390 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:09Z","lastTransitionTime":"2025-12-09T12:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.083987 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.084045 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.084056 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.084074 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.084086 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:10Z","lastTransitionTime":"2025-12-09T12:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.186824 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.186874 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.186886 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.186903 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.186916 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:10Z","lastTransitionTime":"2025-12-09T12:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.289821 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.289869 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.289882 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.289898 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.289910 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:10Z","lastTransitionTime":"2025-12-09T12:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.392774 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.392826 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.392838 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.392854 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.392863 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:10Z","lastTransitionTime":"2025-12-09T12:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.495278 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.495322 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.495333 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.495348 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.495359 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:10Z","lastTransitionTime":"2025-12-09T12:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.597875 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.597917 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.597929 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.597946 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.597958 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:10Z","lastTransitionTime":"2025-12-09T12:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.701361 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.701393 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.701402 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.701415 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.701424 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:10Z","lastTransitionTime":"2025-12-09T12:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.803154 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.803189 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.803197 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.803209 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.803218 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:10Z","lastTransitionTime":"2025-12-09T12:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.812653 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:07:10 crc kubenswrapper[4970]: E1209 12:07:10.812858 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.905717 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.905790 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.905803 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.905820 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:10 crc kubenswrapper[4970]: I1209 12:07:10.905834 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:10Z","lastTransitionTime":"2025-12-09T12:07:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.008829 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.008906 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.008929 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.008959 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.008983 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:11Z","lastTransitionTime":"2025-12-09T12:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.026027 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.026093 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.026105 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.026127 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.026143 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:11Z","lastTransitionTime":"2025-12-09T12:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:11 crc kubenswrapper[4970]: E1209 12:07:11.047194 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6d3f717-4497-45c3-b697-e6823c6fdf80\\\",\\\"systemUUID\\\":\\\"44b8bf7c-aaf1-4d6f-acd6-22019bebcd7e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:11Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.051450 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.051489 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.051501 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.051520 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.051532 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:11Z","lastTransitionTime":"2025-12-09T12:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:11 crc kubenswrapper[4970]: E1209 12:07:11.066080 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6d3f717-4497-45c3-b697-e6823c6fdf80\\\",\\\"systemUUID\\\":\\\"44b8bf7c-aaf1-4d6f-acd6-22019bebcd7e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:11Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.070331 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.070428 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.070454 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.070485 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.070509 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:11Z","lastTransitionTime":"2025-12-09T12:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:11 crc kubenswrapper[4970]: E1209 12:07:11.089238 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6d3f717-4497-45c3-b697-e6823c6fdf80\\\",\\\"systemUUID\\\":\\\"44b8bf7c-aaf1-4d6f-acd6-22019bebcd7e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:11Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.093388 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.093445 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.093466 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.093491 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.093507 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:11Z","lastTransitionTime":"2025-12-09T12:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:11 crc kubenswrapper[4970]: E1209 12:07:11.108971 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6d3f717-4497-45c3-b697-e6823c6fdf80\\\",\\\"systemUUID\\\":\\\"44b8bf7c-aaf1-4d6f-acd6-22019bebcd7e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:11Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.113884 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.113944 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.113960 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.113982 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.113998 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:11Z","lastTransitionTime":"2025-12-09T12:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:11 crc kubenswrapper[4970]: E1209 12:07:11.128779 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:11Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6d3f717-4497-45c3-b697-e6823c6fdf80\\\",\\\"systemUUID\\\":\\\"44b8bf7c-aaf1-4d6f-acd6-22019bebcd7e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:11Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:11 crc kubenswrapper[4970]: E1209 12:07:11.128940 4970 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.130676 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.130710 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.130722 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.130739 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.130752 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:11Z","lastTransitionTime":"2025-12-09T12:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.234169 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.234273 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.234291 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.234324 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.234345 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:11Z","lastTransitionTime":"2025-12-09T12:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.337044 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.337093 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.337105 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.337124 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.337135 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:11Z","lastTransitionTime":"2025-12-09T12:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.439952 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.440012 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.440022 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.440041 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.440052 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:11Z","lastTransitionTime":"2025-12-09T12:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.542441 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.542494 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.542505 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.542525 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.542537 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:11Z","lastTransitionTime":"2025-12-09T12:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.646452 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.646527 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.646553 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.646584 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.646607 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:11Z","lastTransitionTime":"2025-12-09T12:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.750067 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.750117 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.750134 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.750157 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.750175 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:11Z","lastTransitionTime":"2025-12-09T12:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.812043 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.812094 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:07:11 crc kubenswrapper[4970]: E1209 12:07:11.812309 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.812319 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:07:11 crc kubenswrapper[4970]: E1209 12:07:11.812416 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:07:11 crc kubenswrapper[4970]: E1209 12:07:11.812736 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.853465 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.853566 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.853586 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.853611 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.853660 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:11Z","lastTransitionTime":"2025-12-09T12:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.956531 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.956567 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.956576 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.956589 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:11 crc kubenswrapper[4970]: I1209 12:07:11.956597 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:11Z","lastTransitionTime":"2025-12-09T12:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.059146 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.059265 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.059279 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.059299 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.059311 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:12Z","lastTransitionTime":"2025-12-09T12:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.162061 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.162125 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.162141 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.162163 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.162177 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:12Z","lastTransitionTime":"2025-12-09T12:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.264595 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.264649 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.264666 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.264689 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.264706 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:12Z","lastTransitionTime":"2025-12-09T12:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.367934 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.367986 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.367998 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.368016 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.368029 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:12Z","lastTransitionTime":"2025-12-09T12:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.471592 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.471649 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.471667 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.471690 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.471706 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:12Z","lastTransitionTime":"2025-12-09T12:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.574412 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.574489 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.574507 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.574530 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.574547 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:12Z","lastTransitionTime":"2025-12-09T12:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.677293 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.677345 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.677360 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.677383 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.677400 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:12Z","lastTransitionTime":"2025-12-09T12:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.781011 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.781097 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.781124 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.781154 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.781176 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:12Z","lastTransitionTime":"2025-12-09T12:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.812470 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:07:12 crc kubenswrapper[4970]: E1209 12:07:12.812649 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.884438 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.884498 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.884517 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.884542 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.884559 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:12Z","lastTransitionTime":"2025-12-09T12:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.987394 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.987601 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.987618 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.987637 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:12 crc kubenswrapper[4970]: I1209 12:07:12.987649 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:12Z","lastTransitionTime":"2025-12-09T12:07:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.091599 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.091661 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.091686 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.091718 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.091743 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:13Z","lastTransitionTime":"2025-12-09T12:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.194563 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.194637 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.194657 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.194683 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.194706 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:13Z","lastTransitionTime":"2025-12-09T12:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.297697 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.297802 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.297826 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.297852 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.297868 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:13Z","lastTransitionTime":"2025-12-09T12:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.400933 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.401009 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.401043 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.401078 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.401102 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:13Z","lastTransitionTime":"2025-12-09T12:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.504990 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.505063 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.505080 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.505111 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.505133 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:13Z","lastTransitionTime":"2025-12-09T12:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.608748 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.608817 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.608841 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.608871 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.608893 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:13Z","lastTransitionTime":"2025-12-09T12:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.711379 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.711455 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.711474 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.711497 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.711513 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:13Z","lastTransitionTime":"2025-12-09T12:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.812021 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.812112 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2"
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.812329 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 12:07:13 crc kubenswrapper[4970]: E1209 12:07:13.812329 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 12:07:13 crc kubenswrapper[4970]: E1209 12:07:13.812504 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 12:07:13 crc kubenswrapper[4970]: E1209 12:07:13.812651 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f"
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.814187 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.814276 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.814308 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.814337 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.814362 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:13Z","lastTransitionTime":"2025-12-09T12:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.917760 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.918379 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.918446 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.918501 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:13 crc kubenswrapper[4970]: I1209 12:07:13.918527 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:13Z","lastTransitionTime":"2025-12-09T12:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.022023 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.022313 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.022441 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.022543 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.022648 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:14Z","lastTransitionTime":"2025-12-09T12:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.124969 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.125022 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.125032 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.125050 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.125060 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:14Z","lastTransitionTime":"2025-12-09T12:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.227077 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.227131 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.227142 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.227157 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.227167 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:14Z","lastTransitionTime":"2025-12-09T12:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.329848 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.329886 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.329897 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.329911 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.329921 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:14Z","lastTransitionTime":"2025-12-09T12:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.432479 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.432541 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.432558 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.432583 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.432601 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:14Z","lastTransitionTime":"2025-12-09T12:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.535740 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.535837 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.535857 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.535881 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.535898 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:14Z","lastTransitionTime":"2025-12-09T12:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.638058 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.638156 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.638173 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.638197 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.638213 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:14Z","lastTransitionTime":"2025-12-09T12:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.741052 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.741115 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.741133 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.741159 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.741177 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:14Z","lastTransitionTime":"2025-12-09T12:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.812688 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 12:07:14 crc kubenswrapper[4970]: E1209 12:07:14.812844 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.812944 4970 scope.go:117] "RemoveContainer" containerID="27dadb0f8a4d6abf2de751664f0a2a23841bb086dad0695553e5252f8f3aee17"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.844418 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.844706 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.844723 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.844752 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.844769 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:14Z","lastTransitionTime":"2025-12-09T12:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.947021 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.947081 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.947097 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.947121 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:14 crc kubenswrapper[4970]: I1209 12:07:14.947139 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:14Z","lastTransitionTime":"2025-12-09T12:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.048856 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.048914 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.048932 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.048957 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.048974 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:15Z","lastTransitionTime":"2025-12-09T12:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.137814 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sxdvn_515fe67d-b7b7-4edb-a2b0-6f8794e8d802/ovnkube-controller/1.log"
Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.140098 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" event={"ID":"515fe67d-b7b7-4edb-a2b0-6f8794e8d802","Type":"ContainerStarted","Data":"be6172070a88e0df4826bf5bd31e8a8ca840d5325604ea751ff2a2da37c86613"}
Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.140628 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn"
Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.151236 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.151303 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.151312 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.151326 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.151337 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:15Z","lastTransitionTime":"2025-12-09T12:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.164560 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fcdfd55-d327-447e-8f4c-7a84f6ddf65d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe038bd99b053264014f6bdcf6e552df19aeaf0eefb1cad95087668edc14e197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968ef513b108b5e6469f2b11033f3c2af6276809b44f88da6dd279ef5c917566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83e4a0f5cdc0dbab781645e3af71029f89d6e26f881957510b76c4023d49d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0e892d8f1c25b644b4ec6779f1c1fbf9605d352003feb593ddf48f0b2fe7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b06c1523a6e3fc383e18aabc13379866e891da9e5f59ccd8098a530e206c780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:15Z is after 2025-08-24T17:21:41Z"
Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.179991 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:15Z is after 2025-08-24T17:21:41Z"
Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.193030 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sgdqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81da4c74-d93e-4a7a-848a-c3539268368b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5c5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sgdqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:15Z is after 2025-08-24T17:21:41Z"
Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.202998 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a283668d-a884-4d62-95e2-1f0ae672f61c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f99b54b4c68dbff89836aae041da85d469d5df28de2d23df843262af2120f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e1cad556f69de186f04494224fdf6a124eac70716bd1838622a640d8045baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rtdjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:15Z is after 2025-08-24T17:21:41Z"
Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.214489 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cp4b2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e10a28a-08f5-4679-9d90-532322e9e87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9jkk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9jkk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cp4b2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:15Z is after 2025-08-24T17:21:41Z"
Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.230493 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:15Z is after 2025-08-24T17:21:41Z"
Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.253493 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.253540 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.253570 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.253592 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.253607 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:15Z","lastTransitionTime":"2025-12-09T12:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.258183 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be6172070a88e0df4826bf5bd31e8a8ca840d5325604ea751ff2a2da37c86613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dadb0f8a4d6abf2de751664f0a2a23841bb086dad0695553e5252f8f3aee17\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"message\\\":\\\"occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:58Z is after 2025-08-24T17:21:41Z]\\\\nI1209 12:06:58.869760 6359 services_controller.go:434] Service openshift-authentication-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-authentication-operator 7e1891d1-ffe3-468a-92bb-d123cc31fa0b 4081 0 2025-02-23 05:12:19 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:authentication-operator] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0006c2647 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: authentication-operato\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSt
atuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sxdvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:15Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.269872 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8vkz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4da5831e87291bc84ec0b0a2a081543c263a74b18279efd9cb22677db3da456b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fwqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8vkz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:15Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.287959 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:15Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.304138 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c99a0733d681a2255ed6650323cdb8e0759b636118f61f08f3d841d34f81ccb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174956f8c9198d5c992709045948df1e5c0f4e6b56ea0d0d37f76c078736ab4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:15Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.320543 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:15Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.344863 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:15Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.355875 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.355922 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.355935 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.355954 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.355965 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:15Z","lastTransitionTime":"2025-12-09T12:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.358071 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:15Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.370726 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54bd37ab38c155cbc907218582095ffb8fc9abcfdf2601d66a44d9fd1d2743b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:15Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.381336 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4gt4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c6101aab0571a1f01e38ec24c74f99ad040aca13e5c95e91b28d3fc30b11ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4gt4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:15Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.394123 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0916312-9225-4366-81e2-f4a34d1ae9fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33ed06e2c3b4daaa783d0c108537142a214040f67de82c772e322b2d94768595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nqntn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:15Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.404935 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lmfjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82e1dc9-1c79-493a-96aa-8f710bb2d6c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92521dbdc6c17a0bce192b9c0f1fed0d09d94bf4e0fe88633be1801c2a7c04e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klc5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8459610be11ce0478fe59940d0c9034cb80cab890311db1cb81e2408812decb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klc5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:07:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lmfjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:15Z is after 2025-08-24T17:21:41Z" Dec 09 
12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.457839 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.457882 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.457891 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.457907 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.457920 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:15Z","lastTransitionTime":"2025-12-09T12:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.560873 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.560938 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.560957 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.560982 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.561000 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:15Z","lastTransitionTime":"2025-12-09T12:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.663963 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.664039 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.664060 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.664091 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.664113 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:15Z","lastTransitionTime":"2025-12-09T12:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.765950 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.766018 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.766036 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.766063 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.766079 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:15Z","lastTransitionTime":"2025-12-09T12:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.812660 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.812692 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.812703 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:07:15 crc kubenswrapper[4970]: E1209 12:07:15.812802 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:07:15 crc kubenswrapper[4970]: E1209 12:07:15.812925 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f" Dec 09 12:07:15 crc kubenswrapper[4970]: E1209 12:07:15.813088 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.868595 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.868654 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.868672 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.868697 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.868714 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:15Z","lastTransitionTime":"2025-12-09T12:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.970704 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.970799 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.970835 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.970865 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:15 crc kubenswrapper[4970]: I1209 12:07:15.970889 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:15Z","lastTransitionTime":"2025-12-09T12:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.073415 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.073705 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.073897 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.074094 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.074318 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:16Z","lastTransitionTime":"2025-12-09T12:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.146807 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sxdvn_515fe67d-b7b7-4edb-a2b0-6f8794e8d802/ovnkube-controller/2.log" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.148003 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sxdvn_515fe67d-b7b7-4edb-a2b0-6f8794e8d802/ovnkube-controller/1.log" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.152091 4970 generic.go:334] "Generic (PLEG): container finished" podID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerID="be6172070a88e0df4826bf5bd31e8a8ca840d5325604ea751ff2a2da37c86613" exitCode=1 Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.152151 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" event={"ID":"515fe67d-b7b7-4edb-a2b0-6f8794e8d802","Type":"ContainerDied","Data":"be6172070a88e0df4826bf5bd31e8a8ca840d5325604ea751ff2a2da37c86613"} Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.152204 4970 scope.go:117] "RemoveContainer" containerID="27dadb0f8a4d6abf2de751664f0a2a23841bb086dad0695553e5252f8f3aee17" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.153352 4970 scope.go:117] "RemoveContainer" containerID="be6172070a88e0df4826bf5bd31e8a8ca840d5325604ea751ff2a2da37c86613" Dec 09 12:07:16 crc kubenswrapper[4970]: E1209 12:07:16.153631 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-sxdvn_openshift-ovn-kubernetes(515fe67d-b7b7-4edb-a2b0-6f8794e8d802)\"" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.176280 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c99a0733d681a2255ed6650323cdb8e0759b636118f61f08f3d841d34f81ccb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174956f8c9198d5c992709045948df1e5c0f4e6b56ea0d0d37f76c078736ab4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:16Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.177135 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.177375 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.177530 4970 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.177670 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.177786 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:16Z","lastTransitionTime":"2025-12-09T12:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.198227 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:16Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.221728 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:16Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.240088 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4gt4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c6101aab0571a1f01e38ec24c74f99ad040aca13e5c95e91b28d3fc30b11ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4gt4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:16Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.264288 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0916312-9225-4366-81e2-f4a34d1ae9fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33ed06e2c3b4daaa783d0c108537142a214040f67de82c772e322b2d94768595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nqntn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:16Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.281344 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.281416 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:16 crc 
kubenswrapper[4970]: I1209 12:07:16.281438 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.281468 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.281490 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:16Z","lastTransitionTime":"2025-12-09T12:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.284572 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lmfjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82e1dc9-1c79-493a-96aa-8f710bb2d6c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92521dbdc6c17a0bce192b9c0f1fed0d09d94bf4e0fe88633be1801c2a7c04e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klc5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8459610be11ce0478fe59940d0c9034cb80cab890311db1cb81e2408812decb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:0
7:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klc5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:07:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lmfjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:16Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.305020 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:16Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.326388 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:16Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.344360 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54bd37ab38c155cbc907218582095ffb8fc9abcfdf2601d66a44d9fd1d2743b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:16Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.363479 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a283668d-a884-4d62-95e2-1f0ae672f61c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f99b54b4c68dbff89836aae041da85d469d5df28de2d23df843262af2120f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e1cad556f69de186f04494224fdf6a124eac70716bd1838622a640d8045baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-rtdjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:16Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.380715 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cp4b2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e10a28a-08f5-4679-9d90-532322e9e87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9jkk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9jkk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cp4b2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-09T12:07:16Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.383898 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.383947 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.383965 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.383989 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.384005 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:16Z","lastTransitionTime":"2025-12-09T12:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.414732 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fcdfd55-d327-447e-8f4c-7a84f6ddf65d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe038bd99b053264014f6bdcf6e552df19aeaf0eefb1cad95087668edc14e197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968ef513b108b5e6469f2b11033f3c2af6276809b44f88da6dd279ef5c917566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83e4a0f5cdc0dbab781645e3af71029f89d6e26f881957510b76c4023d49d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0e892d8f1c25b644b4ec6779f1c1fbf9605d352003feb593ddf48f0b2fe7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b06c1523a6e3fc383e18aabc13379866e891da9e5f59ccd8098a530e206c780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:16Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.434831 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:16Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.451119 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sgdqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81da4c74-d93e-4a7a-848a-c3539268368b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5c5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sgdqg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:16Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.470157 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:16Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.486299 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.486361 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.486373 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.486388 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.486399 4970 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:16Z","lastTransitionTime":"2025-12-09T12:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.493508 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be6172070a88e0df4826bf5bd31e8a8ca840d5325604ea751ff2a2da37c86613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27dadb0f8a4d6abf2de751664f0a2a23841bb086dad0695553e5252f8f3aee17\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"message\\\":\\\"occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:06:58Z is after 2025-08-24T17:21:41Z]\\\\nI1209 12:06:58.869760 6359 services_controller.go:434] Service openshift-authentication-operator/metrics retrieved from lister for network=default: \\\\u0026Service{ObjectMeta:{metrics openshift-authentication-operator 7e1891d1-ffe3-468a-92bb-d123cc31fa0b 4081 0 2025-02-23 05:12:19 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:authentication-operator] map[include.release.openshift.io/ibm-cloud-managed:true include.release.openshift.io/self-managed-high-availability:true include.release.openshift.io/single-node-developer:true service.alpha.openshift.io/serving-cert-secret-name:serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc0006c2647 \\\\u003cnil\\\\u003e}] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 
},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: authentication-operato\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be6172070a88e0df4826bf5bd31e8a8ca840d5325604ea751ff2a2da37c86613\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:07:16Z\\\",\\\"message\\\":\\\"712973235162149816) with []\\\\nI1209 12:07:15.741219 6578 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1209 12:07:15.741236 6578 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1209 12:07:15.741327 6578 factory.go:1336] Added *v1.Node event handler 7\\\\nI1209 12:07:15.741394 6578 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1209 12:07:15.741844 6578 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1209 12:07:15.741938 6578 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1209 12:07:15.741981 6578 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 12:07:15.741991 6578 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 12:07:15.742084 6578 factory.go:656] Stopping watch factory\\\\nI1209 12:07:15.742109 6578 ovnkube.go:599] Stopped ovnkube\\\\nI1209 12:07:15.742148 6578 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 12:07:15.742168 6578 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 12:07:15.742183 6578 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1209 12:07:15.742311 6578 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sxdvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:16Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.509310 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8vkz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4da5831e87291bc84ec0b0a2a081543c263a74b18279efd9cb22677db3da456b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fwqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"1
92.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8vkz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:16Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.589158 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.589282 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.589300 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.589322 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.589339 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:16Z","lastTransitionTime":"2025-12-09T12:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.691884 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.692237 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.692466 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.692609 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.692730 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:16Z","lastTransitionTime":"2025-12-09T12:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.795478 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.795735 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.795805 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.795883 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.795950 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:16Z","lastTransitionTime":"2025-12-09T12:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.812008 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:07:16 crc kubenswrapper[4970]: E1209 12:07:16.812316 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.898799 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.898860 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.898878 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.898902 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:16 crc kubenswrapper[4970]: I1209 12:07:16.898919 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:16Z","lastTransitionTime":"2025-12-09T12:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.003107 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.003159 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.003176 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.003199 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.003216 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:17Z","lastTransitionTime":"2025-12-09T12:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.106526 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.106576 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.106594 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.106616 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.106634 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:17Z","lastTransitionTime":"2025-12-09T12:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.158623 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sxdvn_515fe67d-b7b7-4edb-a2b0-6f8794e8d802/ovnkube-controller/2.log" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.162158 4970 scope.go:117] "RemoveContainer" containerID="be6172070a88e0df4826bf5bd31e8a8ca840d5325604ea751ff2a2da37c86613" Dec 09 12:07:17 crc kubenswrapper[4970]: E1209 12:07:17.162345 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-sxdvn_openshift-ovn-kubernetes(515fe67d-b7b7-4edb-a2b0-6f8794e8d802)\"" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.176205 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:17Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.204279 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be6172070a88e0df4826bf5bd31e8a8ca840d5325604ea751ff2a2da37c86613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be6172070a88e0df4826bf5bd31e8a8ca840d5325604ea751ff2a2da37c86613\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:07:16Z\\\",\\\"message\\\":\\\"712973235162149816) with []\\\\nI1209 12:07:15.741219 6578 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1209 12:07:15.741236 6578 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1209 12:07:15.741327 6578 factory.go:1336] Added *v1.Node event handler 7\\\\nI1209 12:07:15.741394 6578 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1209 12:07:15.741844 6578 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1209 12:07:15.741938 6578 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1209 12:07:15.741981 6578 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 12:07:15.741991 6578 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 12:07:15.742084 6578 factory.go:656] Stopping watch factory\\\\nI1209 12:07:15.742109 6578 ovnkube.go:599] Stopped ovnkube\\\\nI1209 12:07:15.742148 6578 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 12:07:15.742168 6578 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 12:07:15.742183 6578 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1209 12:07:15.742311 6578 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:07:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sxdvn_openshift-ovn-kubernetes(515fe67d-b7b7-4edb-a2b0-6f8794e8d802)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sxdvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:17Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.208794 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.208847 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.208861 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.208881 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.208897 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:17Z","lastTransitionTime":"2025-12-09T12:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.221057 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8vkz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4da5831e87291bc84ec0b0a2a081543c263a74b18279efd9cb22677db3da456b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fwqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8vkz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:17Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.236992 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:17Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.253711 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c99a0733d681a2255ed6650323cdb8e0759b636118f61f08f3d841d34f81ccb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174956f8c9198d5c992709045948df1e5c0f4e6b56ea0d0d37f76c078736ab4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:17Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.266483 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:17Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.284285 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:17Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.295737 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:17Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.309096 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54bd37ab38c155cbc907218582095ffb8fc9abcfdf2601d66a44d9fd1d2743b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:17Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.311273 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.311340 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.311358 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.311383 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.311402 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:17Z","lastTransitionTime":"2025-12-09T12:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.321043 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4gt4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c6101aab0571a1f01e38ec24c74f99ad040aca13e5c95e91b28d3fc30b11ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4gt4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:17Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:17 crc 
kubenswrapper[4970]: I1209 12:07:17.338202 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0916312-9225-4366-81e2-f4a34d1ae9fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33ed06e2c3b4daaa783d0c108537142a214040f67de82c772e322b2d94768595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"
cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nqntn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:17Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.347951 4970 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lmfjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82e1dc9-1c79-493a-96aa-8f710bb2d6c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92521dbdc6c17a0bce192b9c0f1fed0d09d94bf4e0fe88633be1801c2a7c04e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klc5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8459610be11ce0478fe59940d0c9034cb80cab890311db1cb81e2408812decb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klc5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:07:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lmfjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-09T12:07:17Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.378884 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fcdfd55-d327-447e-8f4c-7a84f6ddf65d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe038bd99b053264014f6bdcf6e552df19aeaf0eefb1cad95087668edc14e197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968ef513b108b5e6469f2b11033f3c2af6276809b44f88da6dd279ef5c917566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83e4a0f5cdc0dbab781645e3af71029f89d6e26f881957510b76c4023d49d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-0
9T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0e892d8f1c25b644b4ec6779f1c1fbf9605d352003feb593ddf48f0b2fe7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b06c1523a6e3fc383e18aabc13379866e891da9e5f59ccd8098a530e206c780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc2
23b4b68cb743f95ef13ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:17Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.396729 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:17Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.411873 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sgdqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81da4c74-d93e-4a7a-848a-c3539268368b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5c5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sgdqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:17Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.413574 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.413608 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.413624 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.413660 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.413669 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:17Z","lastTransitionTime":"2025-12-09T12:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.425357 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a283668d-a884-4d62-95e2-1f0ae672f61c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f99b54b4c68dbff89836aae041da85d469d5df28de2d23df843262af2120f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e1cad556f69de186f04494224fdf6a124eac70716bd1838622a640d8045baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rtdjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:17Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.434700 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cp4b2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e10a28a-08f5-4679-9d90-532322e9e87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9jkk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9jkk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cp4b2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:17Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.516728 4970 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.516856 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.516880 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.516903 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.516920 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:17Z","lastTransitionTime":"2025-12-09T12:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.620379 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.620435 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.620446 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.620463 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.620474 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:17Z","lastTransitionTime":"2025-12-09T12:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.726400 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.726442 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.726453 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.726468 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.726481 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:17Z","lastTransitionTime":"2025-12-09T12:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.779382 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e10a28a-08f5-4679-9d90-532322e9e87f-metrics-certs\") pod \"network-metrics-daemon-cp4b2\" (UID: \"5e10a28a-08f5-4679-9d90-532322e9e87f\") " pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:07:17 crc kubenswrapper[4970]: E1209 12:07:17.779596 4970 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 12:07:17 crc kubenswrapper[4970]: E1209 12:07:17.780032 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e10a28a-08f5-4679-9d90-532322e9e87f-metrics-certs podName:5e10a28a-08f5-4679-9d90-532322e9e87f nodeName:}" failed. No retries permitted until 2025-12-09 12:07:33.780003117 +0000 UTC m=+66.340484198 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5e10a28a-08f5-4679-9d90-532322e9e87f-metrics-certs") pod "network-metrics-daemon-cp4b2" (UID: "5e10a28a-08f5-4679-9d90-532322e9e87f") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.812381 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.812381 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:07:17 crc kubenswrapper[4970]: E1209 12:07:17.812602 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:07:17 crc kubenswrapper[4970]: E1209 12:07:17.812733 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.813058 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:07:17 crc kubenswrapper[4970]: E1209 12:07:17.813427 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.828584 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.828698 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.828770 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.828803 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.828826 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:17Z","lastTransitionTime":"2025-12-09T12:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.837231 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crco
nt/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:17Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.852373 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c99a0733d681a2255ed6650323cdb8e0759b636118f61f08f3d841d34f81ccb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174956f8c9198d5c992709045948df1e5c0f4e6b56ea0d0d37f76c078736ab4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:17Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.866843 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:17Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.880634 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:17Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.893054 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:17Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.907902 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54bd37ab38c155cbc907218582095ffb8fc9abcfdf2601d66a44d9fd1d2743b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:17Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.919068 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4gt4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c6101aab0571a1f01e38ec24c74f99ad040aca13e5c95e91b28d3fc30b11ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4gt4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:17Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.930703 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.930783 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.930792 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.930812 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.930822 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:17Z","lastTransitionTime":"2025-12-09T12:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.931812 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0916312-9225-4366-81e2-f4a34d1ae9fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33ed06e2c3b4daaa783d0c108537142a214040f67de82c772e322b2d94768595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afce63cb
59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nqntn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:17Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.943050 4970 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lmfjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82e1dc9-1c79-493a-96aa-8f710bb2d6c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92521dbdc6c17a0bce192b9c0f1fed0d09d94bf4e0fe88633be1801c2a7c04e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klc5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8459610be11ce0478fe59940d0c9034cb80cab890311db1cb81e2408812decb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klc5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:07:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lmfjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:17Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.967097 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fcdfd55-d327-447e-8f4c-7a84f6ddf65d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe038bd99b053264014f6bdcf6e552df19aeaf0eefb1cad95087668edc14e197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968ef513b108b5e6469f2b11033f3c2af6276809b44f88da6dd279ef5c917566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83e4a0f5cdc0dbab781645e3af71029f89d6e26f881957510b76c4023d49d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0e892d8f1c25b644b4ec6779f1c1fbf9605d352003feb593ddf48f0b2fe7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b06c1523a6e3fc383e18aabc13379866e891da9e5f59ccd8098a530e206c780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\
"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:17Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.981417 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:17Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:17 crc kubenswrapper[4970]: I1209 12:07:17.994414 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sgdqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81da4c74-d93e-4a7a-848a-c3539268368b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5c5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sgdqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:17Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.006841 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a283668d-a884-4d62-95e2-1f0ae672f61c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f99b54b4c68dbff89836aae041da85d469d5df28de2d23df843262af2120f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://
44e1cad556f69de186f04494224fdf6a124eac70716bd1838622a640d8045baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rtdjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:18Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.018267 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cp4b2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e10a28a-08f5-4679-9d90-532322e9e87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9jkk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9jkk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cp4b2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:18Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.030740 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:18Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.033162 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.033190 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.033203 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.033219 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.033230 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:18Z","lastTransitionTime":"2025-12-09T12:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.049516 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be6172070a88e0df4826bf5bd31e8a8ca840d5325604ea751ff2a2da37c86613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be6172070a88e0df4826bf5bd31e8a8ca840d5325604ea751ff2a2da37c86613\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:07:16Z\\\",\\\"message\\\":\\\"712973235162149816) with []\\\\nI1209 12:07:15.741219 6578 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1209 12:07:15.741236 6578 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1209 12:07:15.741327 6578 factory.go:1336] Added *v1.Node event handler 7\\\\nI1209 12:07:15.741394 6578 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1209 12:07:15.741844 6578 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1209 12:07:15.741938 6578 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1209 12:07:15.741981 6578 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 12:07:15.741991 6578 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 12:07:15.742084 6578 factory.go:656] Stopping watch factory\\\\nI1209 12:07:15.742109 6578 ovnkube.go:599] Stopped ovnkube\\\\nI1209 12:07:15.742148 6578 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 12:07:15.742168 6578 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 12:07:15.742183 6578 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1209 12:07:15.742311 6578 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:07:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sxdvn_openshift-ovn-kubernetes(515fe67d-b7b7-4edb-a2b0-6f8794e8d802)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sxdvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:18Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.064223 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8vkz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4da5831e87291bc84ec0b0a2a081543c263a74b18279efd9cb22677db3da456b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fwqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8vkz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:18Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.136062 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.136147 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.136174 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.136201 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.136219 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:18Z","lastTransitionTime":"2025-12-09T12:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.239307 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.239389 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.239406 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.239433 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.239457 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:18Z","lastTransitionTime":"2025-12-09T12:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.296271 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.307345 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.312201 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92e
daf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:18Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.330536 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:18Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.341881 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.341932 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.341948 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.341971 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.341988 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:18Z","lastTransitionTime":"2025-12-09T12:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.347121 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54bd37ab38c155cbc907218582095ffb8fc9abcfdf2601d66a44d9fd1d2743b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:18Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.363792 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4gt4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c6101aab0571a1f01e38ec24c74f99ad040aca13e5c95e91b28d3fc30b11ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4gt4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:18Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.392107 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0916312-9225-4366-81e2-f4a34d1ae9fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33ed06e2c3b4daaa783d0c108537142a214040f67de82c772e322b2d94768595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nqntn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:18Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.409029 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lmfjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82e1dc9-1c79-493a-96aa-8f710bb2d6c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92521dbdc6c17a0bce192b9c0f1fed0d09d94bf4e0fe88633be1801c2a7c04e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klc5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8459610be11ce0478fe59940d0c9034cb80cab890311db1cb81e2408812decb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klc5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:07:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lmfjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:18Z is after 2025-08-24T17:21:41Z" Dec 09 
12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.440687 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fcdfd55-d327-447e-8f4c-7a84f6ddf65d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe038bd99b053264014f6bdcf6e552df19aeaf0eefb1cad95087668edc14e197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968ef513b108b5e6469f2b11033f3c2af6276809b44f88da6dd279ef5c917566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83e4a0f5cdc0dbab781645e3af71029f89d6e26f881957510b76c4023d49d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0e892d8f1c25b644b4ec6779f1c1fbf9605d352003feb593ddf48f0b2fe7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b06c1523a6e3fc383e18aabc13379866e891da9e5f59ccd8098a530e206c780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:18Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.445778 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.445838 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.445856 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.445881 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.445898 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:18Z","lastTransitionTime":"2025-12-09T12:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.455893 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:18Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.470729 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sgdqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81da4c74-d93e-4a7a-848a-c3539268368b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5c5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sgdqg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:18Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.484132 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a283668d-a884-4d62-95e2-1f0ae672f61c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f99b54b4c68dbff89836aae041da85d469d5df28de2d23df843262af2120f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e1cad556f69de186f04494224fdf6a124eac70716bd1838622a640d8045baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-rtdjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:18Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.495494 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cp4b2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e10a28a-08f5-4679-9d90-532322e9e87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9jkk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9jkk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cp4b2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-09T12:07:18Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.508869 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:18Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.532112 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be6172070a88e0df4826bf5bd31e8a8ca840d5325604ea751ff2a2da37c86613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be6172070a88e0df4826bf5bd31e8a8ca840d5325604ea751ff2a2da37c86613\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:07:16Z\\\",\\\"message\\\":\\\"712973235162149816) with []\\\\nI1209 12:07:15.741219 6578 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1209 12:07:15.741236 6578 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1209 12:07:15.741327 6578 factory.go:1336] Added *v1.Node event handler 7\\\\nI1209 12:07:15.741394 6578 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1209 12:07:15.741844 6578 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1209 12:07:15.741938 6578 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1209 12:07:15.741981 6578 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 12:07:15.741991 6578 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 12:07:15.742084 6578 factory.go:656] Stopping watch factory\\\\nI1209 12:07:15.742109 6578 ovnkube.go:599] Stopped ovnkube\\\\nI1209 12:07:15.742148 6578 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 12:07:15.742168 6578 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 12:07:15.742183 6578 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1209 12:07:15.742311 6578 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:07:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sxdvn_openshift-ovn-kubernetes(515fe67d-b7b7-4edb-a2b0-6f8794e8d802)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sxdvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:18Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.543452 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8vkz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4da5831e87291bc84ec0b0a2a081543c263a74b18279efd9cb22677db3da456b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fwqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8vkz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:18Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.548356 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.548398 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.548411 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.548427 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.548438 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:18Z","lastTransitionTime":"2025-12-09T12:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.558784 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:18Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.573473 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c99a0733d681a2255ed6650323cdb8e0759b636118f61f08f3d841d34f81ccb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174956f8c9198d5c992709045948df1e5c0f4e6b56ea0d0d37f76c078736ab4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:18Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.586920 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:18Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.651673 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.651951 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.651987 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.652018 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.652040 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:18Z","lastTransitionTime":"2025-12-09T12:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.687776 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:07:18 crc kubenswrapper[4970]: E1209 12:07:18.688048 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:07:50.688013019 +0000 UTC m=+83.248494130 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.755748 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.755829 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.755851 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.755875 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.755892 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:18Z","lastTransitionTime":"2025-12-09T12:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.788845 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.788913 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.788996 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.789055 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:07:18 crc kubenswrapper[4970]: E1209 12:07:18.789069 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 12:07:18 crc kubenswrapper[4970]: E1209 12:07:18.789119 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 12:07:18 crc kubenswrapper[4970]: E1209 12:07:18.789144 4970 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:07:18 crc kubenswrapper[4970]: E1209 12:07:18.789206 4970 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 12:07:18 crc kubenswrapper[4970]: E1209 12:07:18.789240 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 12:07:50.789199192 +0000 UTC m=+83.349680283 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:07:18 crc kubenswrapper[4970]: E1209 12:07:18.789339 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 12:07:50.789319805 +0000 UTC m=+83.349800906 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 12:07:18 crc kubenswrapper[4970]: E1209 12:07:18.789361 4970 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 12:07:18 crc kubenswrapper[4970]: E1209 12:07:18.789433 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 12:07:50.789409997 +0000 UTC m=+83.349891088 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 12:07:18 crc kubenswrapper[4970]: E1209 12:07:18.789593 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 12:07:18 crc kubenswrapper[4970]: E1209 12:07:18.789634 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 12:07:18 crc kubenswrapper[4970]: E1209 12:07:18.789653 4970 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:07:18 crc kubenswrapper[4970]: E1209 12:07:18.789768 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 12:07:50.789742826 +0000 UTC m=+83.350223907 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.811619 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:07:18 crc kubenswrapper[4970]: E1209 12:07:18.811854 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.859240 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.859342 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.859366 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.859394 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.859418 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:18Z","lastTransitionTime":"2025-12-09T12:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.961565 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.961601 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.961612 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.961627 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:18 crc kubenswrapper[4970]: I1209 12:07:18.961637 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:18Z","lastTransitionTime":"2025-12-09T12:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.064715 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.064784 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.064806 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.064834 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.064857 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:19Z","lastTransitionTime":"2025-12-09T12:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.167049 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.167112 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.167136 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.167166 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.167188 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:19Z","lastTransitionTime":"2025-12-09T12:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.270432 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.270539 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.270559 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.270584 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.270602 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:19Z","lastTransitionTime":"2025-12-09T12:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.373474 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.373549 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.373569 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.373592 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.373609 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:19Z","lastTransitionTime":"2025-12-09T12:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.476969 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.477039 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.477062 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.477090 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.477113 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:19Z","lastTransitionTime":"2025-12-09T12:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.580190 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.580231 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.580260 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.580276 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.580286 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:19Z","lastTransitionTime":"2025-12-09T12:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.683616 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.683666 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.683682 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.683705 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.683722 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:19Z","lastTransitionTime":"2025-12-09T12:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.786584 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.786645 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.786661 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.786683 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.786698 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:19Z","lastTransitionTime":"2025-12-09T12:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.812671 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.812875 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:07:19 crc kubenswrapper[4970]: E1209 12:07:19.812886 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.812928 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:07:19 crc kubenswrapper[4970]: E1209 12:07:19.813001 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f" Dec 09 12:07:19 crc kubenswrapper[4970]: E1209 12:07:19.813063 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.890586 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.890653 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.890666 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.890692 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.890712 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:19Z","lastTransitionTime":"2025-12-09T12:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.993796 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.993866 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.993875 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.993894 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:19 crc kubenswrapper[4970]: I1209 12:07:19.993903 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:19Z","lastTransitionTime":"2025-12-09T12:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.097036 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.097092 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.097103 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.097121 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.097133 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:20Z","lastTransitionTime":"2025-12-09T12:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.199648 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.199717 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.199733 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.199761 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.199779 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:20Z","lastTransitionTime":"2025-12-09T12:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.302072 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.302110 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.302125 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.302141 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.302151 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:20Z","lastTransitionTime":"2025-12-09T12:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.404402 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.404910 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.405027 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.405125 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.405206 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:20Z","lastTransitionTime":"2025-12-09T12:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.508522 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.508860 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.508989 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.509167 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.509367 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:20Z","lastTransitionTime":"2025-12-09T12:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.612609 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.612680 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.612706 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.612736 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.612761 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:20Z","lastTransitionTime":"2025-12-09T12:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.715813 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.715904 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.715945 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.715981 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.716006 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:20Z","lastTransitionTime":"2025-12-09T12:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.811508 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:07:20 crc kubenswrapper[4970]: E1209 12:07:20.811934 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.819080 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.819220 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.819328 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.819408 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.819435 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:20Z","lastTransitionTime":"2025-12-09T12:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.922043 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.922108 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.922127 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.922151 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:20 crc kubenswrapper[4970]: I1209 12:07:20.922170 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:20Z","lastTransitionTime":"2025-12-09T12:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.025432 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.025501 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.025518 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.025547 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.025565 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:21Z","lastTransitionTime":"2025-12-09T12:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.128830 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.128902 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.128924 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.128954 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.128977 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:21Z","lastTransitionTime":"2025-12-09T12:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.232675 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.232800 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.232828 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.232904 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.232931 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:21Z","lastTransitionTime":"2025-12-09T12:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.320004 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.320055 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.320068 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.320088 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.320102 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:21Z","lastTransitionTime":"2025-12-09T12:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:21 crc kubenswrapper[4970]: E1209 12:07:21.331241 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6d3f717-4497-45c3-b697-e6823c6fdf80\\\",\\\"systemUUID\\\":\\\"44b8bf7c-aaf1-4d6f-acd6-22019bebcd7e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.334004 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.334049 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.334064 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.334081 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.334097 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:21Z","lastTransitionTime":"2025-12-09T12:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:21 crc kubenswrapper[4970]: E1209 12:07:21.345985 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6d3f717-4497-45c3-b697-e6823c6fdf80\\\",\\\"systemUUID\\\":\\\"44b8bf7c-aaf1-4d6f-acd6-22019bebcd7e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.350371 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.350417 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.350427 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.350446 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.350460 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:21Z","lastTransitionTime":"2025-12-09T12:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:21 crc kubenswrapper[4970]: E1209 12:07:21.368939 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6d3f717-4497-45c3-b697-e6823c6fdf80\\\",\\\"systemUUID\\\":\\\"44b8bf7c-aaf1-4d6f-acd6-22019bebcd7e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.372202 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.372238 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.372250 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.372275 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.372287 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:21Z","lastTransitionTime":"2025-12-09T12:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:21 crc kubenswrapper[4970]: E1209 12:07:21.383186 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6d3f717-4497-45c3-b697-e6823c6fdf80\\\",\\\"systemUUID\\\":\\\"44b8bf7c-aaf1-4d6f-acd6-22019bebcd7e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.386789 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.386840 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.386851 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.386867 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.386878 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:21Z","lastTransitionTime":"2025-12-09T12:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:21 crc kubenswrapper[4970]: E1209 12:07:21.406312 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:21Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6d3f717-4497-45c3-b697-e6823c6fdf80\\\",\\\"systemUUID\\\":\\\"44b8bf7c-aaf1-4d6f-acd6-22019bebcd7e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:21Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:21 crc kubenswrapper[4970]: E1209 12:07:21.406534 4970 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.408529 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.408588 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.408602 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.408621 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.408634 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:21Z","lastTransitionTime":"2025-12-09T12:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.510866 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.510901 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.510909 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.510922 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.510931 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:21Z","lastTransitionTime":"2025-12-09T12:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.613442 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.613480 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.613491 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.613506 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.613518 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:21Z","lastTransitionTime":"2025-12-09T12:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.716034 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.716124 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.716143 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.716166 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.716183 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:21Z","lastTransitionTime":"2025-12-09T12:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.812335 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.812352 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.812495 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:07:21 crc kubenswrapper[4970]: E1209 12:07:21.812700 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:07:21 crc kubenswrapper[4970]: E1209 12:07:21.812792 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f" Dec 09 12:07:21 crc kubenswrapper[4970]: E1209 12:07:21.812917 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.818173 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.818197 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.818205 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.818218 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.818227 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:21Z","lastTransitionTime":"2025-12-09T12:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.921307 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.921354 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.921366 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.921382 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:21 crc kubenswrapper[4970]: I1209 12:07:21.921393 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:21Z","lastTransitionTime":"2025-12-09T12:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.024706 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.024765 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.024778 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.024796 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.024809 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:22Z","lastTransitionTime":"2025-12-09T12:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.126867 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.126905 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.126915 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.126931 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.126942 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:22Z","lastTransitionTime":"2025-12-09T12:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.229583 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.229653 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.229665 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.229683 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.229694 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:22Z","lastTransitionTime":"2025-12-09T12:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.332639 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.332701 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.332724 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.332753 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.332775 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:22Z","lastTransitionTime":"2025-12-09T12:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.435936 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.436013 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.436036 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.436066 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.436091 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:22Z","lastTransitionTime":"2025-12-09T12:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.539389 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.539453 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.539470 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.539493 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.539510 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:22Z","lastTransitionTime":"2025-12-09T12:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.643445 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.643496 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.643516 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.643543 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.643609 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:22Z","lastTransitionTime":"2025-12-09T12:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.746868 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.746943 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.746967 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.746992 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.747009 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:22Z","lastTransitionTime":"2025-12-09T12:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.812209 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:07:22 crc kubenswrapper[4970]: E1209 12:07:22.812425 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.849917 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.849970 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.849986 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.850007 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.850026 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:22Z","lastTransitionTime":"2025-12-09T12:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.954171 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.954303 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.954326 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.954352 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:22 crc kubenswrapper[4970]: I1209 12:07:22.954370 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:22Z","lastTransitionTime":"2025-12-09T12:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.057666 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.057744 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.057769 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.057802 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.057827 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:23Z","lastTransitionTime":"2025-12-09T12:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.161381 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.161465 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.161478 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.161503 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.161518 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:23Z","lastTransitionTime":"2025-12-09T12:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.264776 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.264858 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.264875 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.264903 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.264922 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:23Z","lastTransitionTime":"2025-12-09T12:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.369157 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.369203 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.369219 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.369238 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.369282 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:23Z","lastTransitionTime":"2025-12-09T12:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.472501 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.472572 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.472589 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.472613 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.472637 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:23Z","lastTransitionTime":"2025-12-09T12:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.576179 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.576232 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.576266 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.576285 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.576297 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:23Z","lastTransitionTime":"2025-12-09T12:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.679887 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.679954 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.679970 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.680003 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.680018 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:23Z","lastTransitionTime":"2025-12-09T12:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.782506 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.782569 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.782587 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.782612 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.782628 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:23Z","lastTransitionTime":"2025-12-09T12:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.812716 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:07:23 crc kubenswrapper[4970]: E1209 12:07:23.812881 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.813140 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.813151 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:07:23 crc kubenswrapper[4970]: E1209 12:07:23.813254 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:07:23 crc kubenswrapper[4970]: E1209 12:07:23.813513 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.886578 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.886661 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.886680 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.886706 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.886722 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:23Z","lastTransitionTime":"2025-12-09T12:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.988984 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.989039 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.989051 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.989069 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:23 crc kubenswrapper[4970]: I1209 12:07:23.989081 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:23Z","lastTransitionTime":"2025-12-09T12:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.091734 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.091804 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.091826 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.091856 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.091879 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:24Z","lastTransitionTime":"2025-12-09T12:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.194206 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.194270 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.194279 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.194294 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.194303 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:24Z","lastTransitionTime":"2025-12-09T12:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.297542 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.297609 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.297628 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.297652 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.297668 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:24Z","lastTransitionTime":"2025-12-09T12:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.400555 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.400596 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.400605 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.400617 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.400626 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:24Z","lastTransitionTime":"2025-12-09T12:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.504031 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.504078 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.504088 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.504103 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.504142 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:24Z","lastTransitionTime":"2025-12-09T12:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.606747 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.606797 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.606805 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.606822 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.606832 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:24Z","lastTransitionTime":"2025-12-09T12:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.709591 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.709878 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.709976 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.710072 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.710091 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:24Z","lastTransitionTime":"2025-12-09T12:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.811456 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:07:24 crc kubenswrapper[4970]: E1209 12:07:24.811647 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.812914 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.812965 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.812977 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.812994 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.813006 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:24Z","lastTransitionTime":"2025-12-09T12:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.915622 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.915673 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.915682 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.915698 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:24 crc kubenswrapper[4970]: I1209 12:07:24.915708 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:24Z","lastTransitionTime":"2025-12-09T12:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.019135 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.019195 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.019213 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.019237 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.019325 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:25Z","lastTransitionTime":"2025-12-09T12:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.122744 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.122792 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.122804 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.122824 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.122836 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:25Z","lastTransitionTime":"2025-12-09T12:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.226071 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.226125 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.226144 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.226170 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.226186 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:25Z","lastTransitionTime":"2025-12-09T12:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.328731 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.328793 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.328811 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.328838 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.328855 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:25Z","lastTransitionTime":"2025-12-09T12:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.431474 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.431528 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.431538 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.431557 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.431568 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:25Z","lastTransitionTime":"2025-12-09T12:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.533885 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.533918 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.533926 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.533955 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.533965 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:25Z","lastTransitionTime":"2025-12-09T12:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.635876 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.636152 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.636222 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.636314 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.636382 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:25Z","lastTransitionTime":"2025-12-09T12:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.741366 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.741411 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.741420 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.741435 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.741443 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:25Z","lastTransitionTime":"2025-12-09T12:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.812191 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:07:25 crc kubenswrapper[4970]: E1209 12:07:25.812410 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.812507 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.812200 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:07:25 crc kubenswrapper[4970]: E1209 12:07:25.812643 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f" Dec 09 12:07:25 crc kubenswrapper[4970]: E1209 12:07:25.812727 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.844347 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.844400 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.844413 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.844426 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.844435 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:25Z","lastTransitionTime":"2025-12-09T12:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.947941 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.948018 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.948054 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.948091 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:25 crc kubenswrapper[4970]: I1209 12:07:25.948158 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:25Z","lastTransitionTime":"2025-12-09T12:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.050824 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.050875 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.050891 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.050914 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.050936 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:26Z","lastTransitionTime":"2025-12-09T12:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.153639 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.153695 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.153713 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.153738 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.153755 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:26Z","lastTransitionTime":"2025-12-09T12:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.257002 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.257074 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.257087 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.257105 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.257117 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:26Z","lastTransitionTime":"2025-12-09T12:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.359808 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.359861 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.359876 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.359895 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.359909 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:26Z","lastTransitionTime":"2025-12-09T12:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.462066 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.462108 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.462120 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.462136 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.462147 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:26Z","lastTransitionTime":"2025-12-09T12:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.565203 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.565323 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.565343 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.565367 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.565383 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:26Z","lastTransitionTime":"2025-12-09T12:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.668905 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.668967 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.668983 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.669007 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.669029 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:26Z","lastTransitionTime":"2025-12-09T12:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.772718 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.772802 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.772824 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.772849 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.772868 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:26Z","lastTransitionTime":"2025-12-09T12:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.811835 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:07:26 crc kubenswrapper[4970]: E1209 12:07:26.812092 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.875433 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.875488 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.875526 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.876022 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.876049 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:26Z","lastTransitionTime":"2025-12-09T12:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.978200 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.978267 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.978281 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.978313 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:26 crc kubenswrapper[4970]: I1209 12:07:26.978335 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:26Z","lastTransitionTime":"2025-12-09T12:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.080064 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.080108 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.080120 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.080136 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.080147 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:27Z","lastTransitionTime":"2025-12-09T12:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.182793 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.182852 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.182872 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.182897 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.182914 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:27Z","lastTransitionTime":"2025-12-09T12:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.285130 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.285181 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.285193 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.285210 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.285224 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:27Z","lastTransitionTime":"2025-12-09T12:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.388214 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.388270 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.388279 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.388297 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.388307 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:27Z","lastTransitionTime":"2025-12-09T12:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.491456 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.491530 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.491552 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.491582 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.491600 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:27Z","lastTransitionTime":"2025-12-09T12:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.593837 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.593879 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.593887 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.593902 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.593911 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:27Z","lastTransitionTime":"2025-12-09T12:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.695948 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.695983 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.695994 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.696009 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.696019 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:27Z","lastTransitionTime":"2025-12-09T12:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.797946 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.798008 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.798020 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.798039 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.798052 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:27Z","lastTransitionTime":"2025-12-09T12:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.811537 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.811566 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:07:27 crc kubenswrapper[4970]: E1209 12:07:27.811650 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.811671 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:07:27 crc kubenswrapper[4970]: E1209 12:07:27.811746 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:07:27 crc kubenswrapper[4970]: E1209 12:07:27.811918 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.824312 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13a7444e-3a4e-4f76-bc75-31d7955f81f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd30e293f9fd27aef372f8f4cba8a786a532a57b089862054f0371fff1cb3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d776ac4b03c9fe2c77bb3194ce3e407b542dfa90fd7202cd6f406af4103b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1931f2ab695ac019886e253d6ff1ea32af43da13f3b9946b9087b2386ea05361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3b52637f1f6d30bd6990b616149e829f5bce7926ee8ef9ae796ec3c52dd67e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3b52637f1f6d30bd6990b616149e829f5bce7926ee8ef9ae796ec3c52dd67e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:27Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.847925 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:27Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.866515 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c99a0733d681a2255ed6650323cdb8e0759b636118f61f08f3d841d34f81ccb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174956f8c9198d5c992709045948df1e5c0f4e6b56ea0d0d37f76c078736ab4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:27Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.882459 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:27Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.900417 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:27Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.901006 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.901035 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.901044 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.901059 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.901068 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:27Z","lastTransitionTime":"2025-12-09T12:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.917880 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:27Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.932613 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54bd37ab38c155cbc907218582095ffb8fc9abcfdf2601d66a44d9fd1d2743b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:27Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.944823 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4gt4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c6101aab0571a1f01e38ec24c74f99ad040aca13e5c95e91b28d3fc30b11ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4gt4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:27Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.963370 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0916312-9225-4366-81e2-f4a34d1ae9fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33ed06e2c3b4daaa783d0c108537142a214040f67de82c772e322b2d94768595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nqntn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:27Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:27 crc kubenswrapper[4970]: I1209 12:07:27.975864 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lmfjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82e1dc9-1c79-493a-96aa-8f710bb2d6c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92521dbdc6c17a0bce192b9c0f1fed0d09d94bf4e0fe88633be1801c2a7c04e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klc5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8459610be11ce0478fe59940d0c9034cb80cab890311db1cb81e2408812decb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klc5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:07:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lmfjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:27Z is after 2025-08-24T17:21:41Z" Dec 09 
12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.000032 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fcdfd55-d327-447e-8f4c-7a84f6ddf65d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe038bd99b053264014f6bdcf6e552df19aeaf0eefb1cad95087668edc14e197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968ef513b108b5e6469f2b11033f3c2af6276809b44f88da6dd279ef5c917566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83e4a0f5cdc0dbab781645e3af71029f89d6e26f881957510b76c4023d49d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0e892d8f1c25b644b4ec6779f1c1fbf9605d352003feb593ddf48f0b2fe7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b06c1523a6e3fc383e18aabc13379866e891da9e5f59ccd8098a530e206c780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:27Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.003704 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.003768 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.003787 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.003812 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.003832 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:28Z","lastTransitionTime":"2025-12-09T12:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.013923 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:28Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.028478 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sgdqg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81da4c74-d93e-4a7a-848a-c3539268368b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5c5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sgdqg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:28Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.040445 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a283668d-a884-4d62-95e2-1f0ae672f61c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f99b54b4c68dbff89836aae041da85d469d5df28de2d23df843262af2120f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e1cad556f69de186f04494224fdf6a124eac70716bd1838622a640d8045baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-rtdjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:28Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.055346 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cp4b2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e10a28a-08f5-4679-9d90-532322e9e87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9jkk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9jkk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cp4b2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-09T12:07:28Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.075826 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:28Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.101459 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be6172070a88e0df4826bf5bd31e8a8ca840d5325604ea751ff2a2da37c86613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be6172070a88e0df4826bf5bd31e8a8ca840d5325604ea751ff2a2da37c86613\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:07:16Z\\\",\\\"message\\\":\\\"712973235162149816) with []\\\\nI1209 12:07:15.741219 6578 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1209 12:07:15.741236 6578 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1209 12:07:15.741327 6578 factory.go:1336] Added *v1.Node event handler 7\\\\nI1209 12:07:15.741394 6578 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1209 12:07:15.741844 6578 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1209 12:07:15.741938 6578 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1209 12:07:15.741981 6578 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 12:07:15.741991 6578 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 12:07:15.742084 6578 factory.go:656] Stopping watch factory\\\\nI1209 12:07:15.742109 6578 ovnkube.go:599] Stopped ovnkube\\\\nI1209 12:07:15.742148 6578 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 12:07:15.742168 6578 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 12:07:15.742183 6578 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1209 12:07:15.742311 6578 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:07:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sxdvn_openshift-ovn-kubernetes(515fe67d-b7b7-4edb-a2b0-6f8794e8d802)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sxdvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:28Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.105558 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.105618 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.105634 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.105653 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.105670 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:28Z","lastTransitionTime":"2025-12-09T12:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.116794 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8vkz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4da5831e87291bc84ec0b0a2a081543c263a74b18279efd9cb22677db3da456b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fwqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8vkz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:28Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.208040 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.208115 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.208140 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.208166 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.208184 4970 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:28Z","lastTransitionTime":"2025-12-09T12:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.311518 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.311591 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.311614 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.311643 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.311667 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:28Z","lastTransitionTime":"2025-12-09T12:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.414759 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.414838 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.414873 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.414904 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.414927 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:28Z","lastTransitionTime":"2025-12-09T12:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.517529 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.517594 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.517606 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.517623 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.517636 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:28Z","lastTransitionTime":"2025-12-09T12:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.619806 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.620342 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.620364 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.620389 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.620407 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:28Z","lastTransitionTime":"2025-12-09T12:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.723063 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.723133 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.723147 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.723163 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.723174 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:28Z","lastTransitionTime":"2025-12-09T12:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.812147 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:07:28 crc kubenswrapper[4970]: E1209 12:07:28.812313 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.825615 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.825676 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.825694 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.825717 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.825730 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:28Z","lastTransitionTime":"2025-12-09T12:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.928159 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.928210 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.928219 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.928234 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:28 crc kubenswrapper[4970]: I1209 12:07:28.928261 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:28Z","lastTransitionTime":"2025-12-09T12:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.032098 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.032168 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.032186 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.032206 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.032278 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:29Z","lastTransitionTime":"2025-12-09T12:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.135499 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.135598 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.135621 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.135648 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.135737 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:29Z","lastTransitionTime":"2025-12-09T12:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.239017 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.239073 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.239088 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.239105 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.239115 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:29Z","lastTransitionTime":"2025-12-09T12:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.342371 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.342438 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.342461 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.342491 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.342515 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:29Z","lastTransitionTime":"2025-12-09T12:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.445986 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.446030 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.446039 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.446055 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.446066 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:29Z","lastTransitionTime":"2025-12-09T12:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.548869 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.548926 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.548942 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.548961 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.548972 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:29Z","lastTransitionTime":"2025-12-09T12:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.651386 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.651437 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.651452 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.651476 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.651492 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:29Z","lastTransitionTime":"2025-12-09T12:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.754711 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.754776 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.754793 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.754816 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.754834 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:29Z","lastTransitionTime":"2025-12-09T12:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.812686 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.812805 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.812715 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:07:29 crc kubenswrapper[4970]: E1209 12:07:29.812917 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:07:29 crc kubenswrapper[4970]: E1209 12:07:29.813054 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:07:29 crc kubenswrapper[4970]: E1209 12:07:29.813225 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.857706 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.857775 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.857797 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.857825 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.857848 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:29Z","lastTransitionTime":"2025-12-09T12:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.960090 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.960197 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.960315 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.960344 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:29 crc kubenswrapper[4970]: I1209 12:07:29.960409 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:29Z","lastTransitionTime":"2025-12-09T12:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.063061 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.063121 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.063138 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.063161 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.063177 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:30Z","lastTransitionTime":"2025-12-09T12:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.165934 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.166005 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.166029 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.166058 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.166081 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:30Z","lastTransitionTime":"2025-12-09T12:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.269602 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.269691 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.269715 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.269746 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.269767 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:30Z","lastTransitionTime":"2025-12-09T12:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.372723 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.372790 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.372813 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.372842 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.372866 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:30Z","lastTransitionTime":"2025-12-09T12:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.475451 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.475517 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.475535 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.475559 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.475576 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:30Z","lastTransitionTime":"2025-12-09T12:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.578792 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.578849 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.578867 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.578890 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.578906 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:30Z","lastTransitionTime":"2025-12-09T12:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.682071 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.682136 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.682154 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.682182 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.682199 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:30Z","lastTransitionTime":"2025-12-09T12:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.785571 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.785652 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.785671 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.785697 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.785718 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:30Z","lastTransitionTime":"2025-12-09T12:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.812274 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 12:07:30 crc kubenswrapper[4970]: E1209 12:07:30.812440 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.888973 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.889022 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.889034 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.889050 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.889061 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:30Z","lastTransitionTime":"2025-12-09T12:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.992291 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.992351 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.992365 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.992383 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:30 crc kubenswrapper[4970]: I1209 12:07:30.992396 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:30Z","lastTransitionTime":"2025-12-09T12:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.095004 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.095053 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.095069 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.095092 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.095108 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:31Z","lastTransitionTime":"2025-12-09T12:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.197823 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.197890 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.197906 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.197932 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.197953 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:31Z","lastTransitionTime":"2025-12-09T12:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.301750 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.301820 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.301837 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.301862 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.301879 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:31Z","lastTransitionTime":"2025-12-09T12:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.404941 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.405029 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.405051 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.405082 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.405105 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:31Z","lastTransitionTime":"2025-12-09T12:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.507568 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.507630 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.507649 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.507674 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.507693 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:31Z","lastTransitionTime":"2025-12-09T12:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.535684 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.535742 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.535760 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.535785 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.535803 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:31Z","lastTransitionTime":"2025-12-09T12:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
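The NodeNotReady churn above has a single cause: the kubelet's runtime-network readiness check finds no CNI configuration file. Below is a minimal Go sketch of such a check, assuming only the directory named in the log message and the config extensions libcni scans for; checkCNIConfig and its wiring are illustrative, not the kubelet's actual code.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// checkCNIConfig mirrors the gist of the network-readiness probe seen in
// the log: NetworkReady stays false until at least one CNI config file
// exists in the configured directory. Illustrative sketch only.
func checkCNIConfig(confDir string) error {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		return fmt.Errorf("reading %s: %w", confDir, err)
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions libcni looks for
			return nil
		}
	}
	return fmt.Errorf("no CNI configuration file in %s. Has your network provider started?", confDir)
}

func main() {
	// Directory taken from the log message above.
	if err := checkCNIConfig("/etc/kubernetes/cni/net.d"); err != nil {
		fmt.Println("NetworkReady=false:", err)
	} else {
		fmt.Println("NetworkReady=true")
	}
}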
Dec 09 12:07:31 crc kubenswrapper[4970]: E1209 12:07:31.552920 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6d3f717-4497-45c3-b697-e6823c6fdf80\\\",\\\"systemUUID\\\":\\\"44b8bf7c-aaf1-4d6f-acd6-22019bebcd7e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:31Z is after 2025-08-24T17:21:41Z"
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.558070 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.558276 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
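Every status patch in this stretch fails at the same point: the node.network-node-identity.openshift.io webhook on 127.0.0.1:9743 serves a certificate that expired on 2025-08-24T17:21:41Z. A hedged Go sketch for inspecting what that endpoint actually serves (the address is taken from the log; InsecureSkipVerify is deliberate so the expired chain can still be read, and this is a diagnostic sketch, not production code):

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Endpoint taken from the failing Post in the log above.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // read the cert even though it is expired
	})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	if time.Now().After(cert.NotAfter) {
		fmt.Println("certificate has expired -> matches the x509 error in the log")
	}
}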
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.558339 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.558405 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.558461 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:31Z","lastTransitionTime":"2025-12-09T12:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.580709 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.580774 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
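The condition payload in each setters.go:603 entry is plain JSON. A small Go example that decodes one of the logged conditions into a struct mirroring the shape of k8s.io/api/core/v1.NodeCondition (the struct here is hand-rolled for illustration rather than imported):

package main

import (
	"encoding/json"
	"fmt"
)

// NodeCondition holds the fields visible in the setters.go:603 entries.
// Field names follow the JSON in the log.
type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Condition payload copied from one "Node became not ready" entry above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:31Z","lastTransitionTime":"2025-12-09T12:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`

	var c NodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
}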
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.580801 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.580834 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.580858 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:31Z","lastTransitionTime":"2025-12-09T12:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.603517 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.603554 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
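The rejected payload is a strategic-merge patch; its $setElementOrder/conditions directive pins the ordering of the conditions list that the server should preserve. A Go sketch that decodes an abbreviated form of the logged patch (trimmed by hand to two conditions and three fields each; the trimming is illustrative, the keys and values come from the log):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Abbreviated form of the patch from the "failed to patch status" entries.
	patch := `{"status":{"$setElementOrder/conditions":[{"type":"MemoryPressure"},{"type":"DiskPressure"},{"type":"PIDPressure"},{"type":"Ready"}],"conditions":[{"type":"MemoryPressure","status":"False","reason":"KubeletHasSufficientMemory"},{"type":"Ready","status":"False","reason":"KubeletNotReady"}]}}`

	var doc struct {
		Status struct {
			Order []struct {
				Type string `json:"type"`
			} `json:"$setElementOrder/conditions"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
				Reason string `json:"reason"`
			} `json:"conditions"`
		} `json:"status"`
	}
	if err := json.Unmarshal([]byte(patch), &doc); err != nil {
		panic(err)
	}
	for _, o := range doc.Status.Order {
		fmt.Print(o.Type, " ") // the ordering hint for the merge
	}
	fmt.Println()
	for _, c := range doc.Status.Conditions {
		fmt.Printf("%s: status=%s reason=%s\n", c.Type, c.Status, c.Reason)
	}
}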
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.603566 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.603583 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.603596 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:31Z","lastTransitionTime":"2025-12-09T12:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.626315 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.626362 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
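The back-to-back "Error updating node status, will retry" entries at 12:07:31 are successive attempts within one kubelet sync: each patch fails on the same expired webhook certificate, is logged, and is retried. A Go sketch of that retry pattern (the helper name updateNodeStatus and the retry count are illustrative, not lifted from the kubelet source):

package main

import (
	"errors"
	"fmt"
)

// updateNodeStatus sketches the retry loop visible in the log: attempt the
// status patch several times, logging "will retry" on each failure.
func updateNodeStatus(tryUpdate func() error, retries int) error {
	for i := 0; i < retries; i++ {
		if err := tryUpdate(); err != nil {
			fmt.Println("Error updating node status, will retry:", err)
			continue
		}
		return nil
	}
	return errors.New("unable to update node status after retries")
}

func main() {
	// Stand-in for the webhook failure seen in the log.
	webhookErr := errors.New(`failed calling webhook "node.network-node-identity.openshift.io": certificate has expired`)
	_ = updateNodeStatus(func() error { return webhookErr }, 5)
}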
event="NodeHasNoDiskPressure" Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.626379 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.626399 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.626415 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:31Z","lastTransitionTime":"2025-12-09T12:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:31 crc kubenswrapper[4970]: E1209 12:07:31.642688 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:31Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6d3f717-4497-45c3-b697-e6823c6fdf80\\\",\\\"systemUUID\\\":\\\"44b8bf7c-aaf1-4d6f-acd6-22019bebcd7e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:31Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:31 crc kubenswrapper[4970]: E1209 12:07:31.642828 4970 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.644274 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.644327 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.644344 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.644365 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.644380 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:31Z","lastTransitionTime":"2025-12-09T12:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.746917 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.746976 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.746994 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.747017 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.747034 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:31Z","lastTransitionTime":"2025-12-09T12:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.811883 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.811983 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.812004 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:07:31 crc kubenswrapper[4970]: E1209 12:07:31.812041 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:07:31 crc kubenswrapper[4970]: E1209 12:07:31.812081 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f" Dec 09 12:07:31 crc kubenswrapper[4970]: E1209 12:07:31.812536 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.812657 4970 scope.go:117] "RemoveContainer" containerID="be6172070a88e0df4826bf5bd31e8a8ca840d5325604ea751ff2a2da37c86613" Dec 09 12:07:31 crc kubenswrapper[4970]: E1209 12:07:31.812790 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-sxdvn_openshift-ovn-kubernetes(515fe67d-b7b7-4edb-a2b0-6f8794e8d802)\"" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.849973 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.850042 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.850057 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.850080 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.850101 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:31Z","lastTransitionTime":"2025-12-09T12:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.951994 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.952025 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.952035 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.952047 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:31 crc kubenswrapper[4970]: I1209 12:07:31.952055 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:31Z","lastTransitionTime":"2025-12-09T12:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.055052 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.055093 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.055103 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.055117 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.055128 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:32Z","lastTransitionTime":"2025-12-09T12:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.157388 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.157444 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.157458 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.157478 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.157492 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:32Z","lastTransitionTime":"2025-12-09T12:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.260187 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.260270 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.260282 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.260297 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.260309 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:32Z","lastTransitionTime":"2025-12-09T12:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.362732 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.362863 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.362885 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.362910 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.362928 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:32Z","lastTransitionTime":"2025-12-09T12:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.465123 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.465165 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.465179 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.465225 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.465239 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:32Z","lastTransitionTime":"2025-12-09T12:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.567491 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.567532 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.567541 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.567557 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.567566 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:32Z","lastTransitionTime":"2025-12-09T12:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.669747 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.669816 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.669830 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.669846 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.669859 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:32Z","lastTransitionTime":"2025-12-09T12:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.772710 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.772748 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.772756 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.772773 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.772782 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:32Z","lastTransitionTime":"2025-12-09T12:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.812397 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:07:32 crc kubenswrapper[4970]: E1209 12:07:32.812546 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.875014 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.875061 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.875072 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.875090 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.875103 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:32Z","lastTransitionTime":"2025-12-09T12:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.978492 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.978548 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.978564 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.978591 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:32 crc kubenswrapper[4970]: I1209 12:07:32.978604 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:32Z","lastTransitionTime":"2025-12-09T12:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.082463 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.082526 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.082539 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.082557 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.082570 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:33Z","lastTransitionTime":"2025-12-09T12:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.186034 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.186083 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.186095 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.186113 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.186125 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:33Z","lastTransitionTime":"2025-12-09T12:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.289271 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.289313 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.289323 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.289340 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.289351 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:33Z","lastTransitionTime":"2025-12-09T12:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.391921 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.391974 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.391986 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.392008 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.392026 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:33Z","lastTransitionTime":"2025-12-09T12:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.494706 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.494791 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.494803 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.494830 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.494844 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:33Z","lastTransitionTime":"2025-12-09T12:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.598108 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.598160 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.598174 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.598193 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.598209 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:33Z","lastTransitionTime":"2025-12-09T12:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.700755 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.700794 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.700807 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.700828 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.700843 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:33Z","lastTransitionTime":"2025-12-09T12:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.805145 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.805196 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.805219 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.805242 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.805278 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:33Z","lastTransitionTime":"2025-12-09T12:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.812545 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.812576 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.812588 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:07:33 crc kubenswrapper[4970]: E1209 12:07:33.812721 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:07:33 crc kubenswrapper[4970]: E1209 12:07:33.812826 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:07:33 crc kubenswrapper[4970]: E1209 12:07:33.812953 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f" Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.856344 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e10a28a-08f5-4679-9d90-532322e9e87f-metrics-certs\") pod \"network-metrics-daemon-cp4b2\" (UID: \"5e10a28a-08f5-4679-9d90-532322e9e87f\") " pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:07:33 crc kubenswrapper[4970]: E1209 12:07:33.856554 4970 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 12:07:33 crc kubenswrapper[4970]: E1209 12:07:33.856641 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e10a28a-08f5-4679-9d90-532322e9e87f-metrics-certs podName:5e10a28a-08f5-4679-9d90-532322e9e87f nodeName:}" failed. No retries permitted until 2025-12-09 12:08:05.856624143 +0000 UTC m=+98.417105194 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5e10a28a-08f5-4679-9d90-532322e9e87f-metrics-certs") pod "network-metrics-daemon-cp4b2" (UID: "5e10a28a-08f5-4679-9d90-532322e9e87f") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.908547 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.908605 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.908617 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.908639 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:33 crc kubenswrapper[4970]: I1209 12:07:33.908659 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:33Z","lastTransitionTime":"2025-12-09T12:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.010728 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.010768 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.010782 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.010799 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.010809 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:34Z","lastTransitionTime":"2025-12-09T12:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.112790 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.112845 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.112861 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.112881 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.112896 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:34Z","lastTransitionTime":"2025-12-09T12:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.215467 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.215507 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.215517 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.215535 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.215543 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:34Z","lastTransitionTime":"2025-12-09T12:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.317940 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.317985 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.317995 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.318011 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.318022 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:34Z","lastTransitionTime":"2025-12-09T12:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.420050 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.420090 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.420103 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.420118 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.420141 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:34Z","lastTransitionTime":"2025-12-09T12:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.522448 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.522512 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.522528 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.522544 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.522555 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:34Z","lastTransitionTime":"2025-12-09T12:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.625461 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.625497 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.625506 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.625519 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.625555 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:34Z","lastTransitionTime":"2025-12-09T12:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.728458 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.728571 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.728596 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.728627 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.728651 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:34Z","lastTransitionTime":"2025-12-09T12:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.811964 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:07:34 crc kubenswrapper[4970]: E1209 12:07:34.812111 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.831230 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.831323 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.831349 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.831364 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.831398 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:34Z","lastTransitionTime":"2025-12-09T12:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.933360 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.933446 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.933460 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.933500 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:34 crc kubenswrapper[4970]: I1209 12:07:34.933517 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:34Z","lastTransitionTime":"2025-12-09T12:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.035528 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.035572 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.035584 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.035611 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.035626 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:35Z","lastTransitionTime":"2025-12-09T12:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.137824 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.137864 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.137877 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.137893 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.137904 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:35Z","lastTransitionTime":"2025-12-09T12:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.219299 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sgdqg_81da4c74-d93e-4a7a-848a-c3539268368b/kube-multus/0.log" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.219354 4970 generic.go:334] "Generic (PLEG): container finished" podID="81da4c74-d93e-4a7a-848a-c3539268368b" containerID="65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75" exitCode=1 Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.219399 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sgdqg" event={"ID":"81da4c74-d93e-4a7a-848a-c3539268368b","Type":"ContainerDied","Data":"65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75"} Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.219807 4970 scope.go:117] "RemoveContainer" containerID="65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.232490 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lmfjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82e1dc9-1c79-493a-96aa-8f710bb2d6c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92521dbdc6c17a0bce192b9c0f1fed0d09d94bf4e0fe88633be1801c2a7c04e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klc5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8459610be11ce0478fe59940d0c9034cb80cab890311db1cb81e2408812decb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klc5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:07:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lmfjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:35Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.240404 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.240443 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.240454 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.240469 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.240481 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:35Z","lastTransitionTime":"2025-12-09T12:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.247294 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:35Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.260016 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:35Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.272450 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54bd37ab38c155cbc907218582095ffb8fc9abcfdf2601d66a44d9fd1d2743b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:35Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.288879 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4gt4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c6101aab0571a1f01e38ec24c74f99ad040aca13e5c95e91b28d3fc30b11ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4gt4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:35Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.306980 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0916312-9225-4366-81e2-f4a34d1ae9fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33ed06e2c3b4daaa783d0c108537142a214040f67de82c772e322b2d94768595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nqntn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:35Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.329977 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fcdfd55-d327-447e-8f4c-7a84f6ddf65d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe038bd99b053264014f6bdcf6e552df19aeaf0eefb1cad95087668edc14e197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968ef513b108b5e6469f2b11033f3c2af6276809b44f88da6dd279ef5c917566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83e4a0f5cdc0dbab781645e3af71029f89d6e26f881957510b76c4023d49d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0e892d8f1c25b644b4ec6779f1c1fbf9605d3
52003feb593ddf48f0b2fe7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b06c1523a6e3fc383e18aabc13379866e891da9e5f59ccd8098a530e206c780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:35Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.342703 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:35Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.347505 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.347556 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.347598 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.347620 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.347629 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:35Z","lastTransitionTime":"2025-12-09T12:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.355771 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sgdqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81da4c74-d93e-4a7a-848a-c3539268368b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:07:34Z\\\",\\\"message\\\":\\\"2025-12-09T12:06:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b90a4c06-874d-455b-979f-46683b53967b\\\\n2025-12-09T12:06:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b90a4c06-874d-455b-979f-46683b53967b to /host/opt/cni/bin/\\\\n2025-12-09T12:06:49Z [verbose] multus-daemon started\\\\n2025-12-09T12:06:49Z [verbose] Readiness Indicator file check\\\\n2025-12-09T12:07:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5c5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sgdqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:35Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.365803 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" err="failed to patch status 
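The kube-multus termination message embedded in the entry above explains the container's exit code 1: after copying its CNI binaries, the multus daemon polled for the readiness indicator file /host/run/multus/cni/net.d/10-ovn-kubernetes.conf, which ovn-kubernetes writes once the default network is up, and gave up after roughly 45 seconds (12:06:49 to 12:07:34). The real daemon implements this in Go with wait.PollImmediate; the following is only a minimal Python sketch of the same poll-until-deadline pattern, with the path and timeout taken from the log:

import sys
import time
from pathlib import Path

INDICATOR = Path("/host/run/multus/cni/net.d/10-ovn-kubernetes.conf")
TIMEOUT_S = 45.0   # matches the 12:06:49 -> 12:07:34 window in the log
INTERVAL_S = 1.0

deadline = time.monotonic() + TIMEOUT_S
while time.monotonic() < deadline:
    if INDICATOR.exists():
        print("default network is ready:", INDICATOR)
        sys.exit(0)         # success: exit code 0
    time.sleep(INTERVAL_S)

# give up, mirroring the daemon's error and exit code 1
sys.exit("still waiting for readinessindicatorfile @ %s: timed out" % INDICATOR)

Because the webhook certificate problem keeps ovn-kubernetes from becoming healthy, the indicator file never appears, multus exits, and the kubelet restarts it (the RemoveContainer/ContainerDied entries earlier in this minute).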
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a283668d-a884-4d62-95e2-1f0ae672f61c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f99b54b4c68dbff89836aae041da85d469d5df28de2d23df843262af2120f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e1cad556f69de186f04494224fdf6a124eac70716bd1838622a640d8045baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rtdjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:35Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.377742 4970 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-cp4b2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e10a28a-08f5-4679-9d90-532322e9e87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9jkk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9jkk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cp4b2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:35Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.390794 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:35Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.411992 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be6172070a88e0df4826bf5bd31e8a8ca840d532
5604ea751ff2a2da37c86613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be6172070a88e0df4826bf5bd31e8a8ca840d5325604ea751ff2a2da37c86613\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:07:16Z\\\",\\\"message\\\":\\\"712973235162149816) with []\\\\nI1209 12:07:15.741219 6578 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1209 12:07:15.741236 6578 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1209 12:07:15.741327 6578 factory.go:1336] Added *v1.Node event handler 7\\\\nI1209 12:07:15.741394 6578 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1209 12:07:15.741844 6578 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1209 12:07:15.741938 6578 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1209 12:07:15.741981 6578 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 12:07:15.741991 6578 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 12:07:15.742084 6578 factory.go:656] Stopping watch factory\\\\nI1209 12:07:15.742109 6578 ovnkube.go:599] Stopped ovnkube\\\\nI1209 12:07:15.742148 6578 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 12:07:15.742168 6578 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 12:07:15.742183 6578 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1209 12:07:15.742311 6578 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:07:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sxdvn_openshift-ovn-kubernetes(515fe67d-b7b7-4edb-a2b0-6f8794e8d802)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sxdvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:35Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.422747 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8vkz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4da5831e87291bc84ec0b0a2a081543c263a74b18279efd9cb22677db3da456b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fwqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8vkz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:35Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.433520 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13a7444e-3a4e-4f76-bc75-31d7955f81f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd30e293f9fd27aef372f8f4cba8a786a532a57b089862054f0371fff1cb3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d776ac4b03c9fe2c77bb3194ce3e407b542dfa90fd7202cd6f406af4103b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1931f2ab695ac019886e253d6ff1ea32af43da13f3b9946b9087b2386ea05361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440
c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3b52637f1f6d30bd6990b616149e829f5bce7926ee8ef9ae796ec3c52dd67e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3b52637f1f6d30bd6990b616149e829f5bce7926ee8ef9ae796ec3c52dd67e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:35Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.447615 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:35Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.449468 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.449508 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.449519 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.449537 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.449549 4970 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:35Z","lastTransitionTime":"2025-12-09T12:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.462080 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c99a0733d681a2255ed6650323cdb8e0759b636118f61f08f3d841d34f81ccb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174956f8c9198d5c992709045948df1e5c0f4e6b56ea0d0d37f76c078736ab4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:35Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.475681 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:35Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.552090 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.552170 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.552185 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.552220 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.552233 4970 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:35Z","lastTransitionTime":"2025-12-09T12:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.654470 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.654513 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.654522 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.654537 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.654546 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:35Z","lastTransitionTime":"2025-12-09T12:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.757211 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.757622 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.757636 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.757653 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.757663 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:35Z","lastTransitionTime":"2025-12-09T12:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.812068 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.812097 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.812126 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:07:35 crc kubenswrapper[4970]: E1209 12:07:35.812208 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:07:35 crc kubenswrapper[4970]: E1209 12:07:35.812276 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f" Dec 09 12:07:35 crc kubenswrapper[4970]: E1209 12:07:35.812338 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.859713 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.859761 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.859775 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.859794 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.859807 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:35Z","lastTransitionTime":"2025-12-09T12:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.962475 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.962516 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.962526 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.962540 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:35 crc kubenswrapper[4970]: I1209 12:07:35.962549 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:35Z","lastTransitionTime":"2025-12-09T12:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.065342 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.065388 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.065402 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.065420 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.065428 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:36Z","lastTransitionTime":"2025-12-09T12:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.168152 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.168191 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.168202 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.168218 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.168228 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:36Z","lastTransitionTime":"2025-12-09T12:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.223675 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sgdqg_81da4c74-d93e-4a7a-848a-c3539268368b/kube-multus/0.log" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.223745 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sgdqg" event={"ID":"81da4c74-d93e-4a7a-848a-c3539268368b","Type":"ContainerStarted","Data":"e2ab60fae86ad3f0324f90bd9aec3bd2d65698da0aad755faa0db18178f08bee"} Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.238731 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:36Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.255787 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54bd37ab38c155cbc907218582095ffb8fc9abcfdf2601d66a44d9fd1d2743b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:36Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.269314 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4gt4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c6101aab0571a1f01e38ec24c74f99ad040aca13e5c95e91b28d3fc30b11ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4gt4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:36Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.270905 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.270945 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.270959 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.270978 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.270991 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:36Z","lastTransitionTime":"2025-12-09T12:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.286713 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0916312-9225-4366-81e2-f4a34d1ae9fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33ed06e2c3b4daaa783d0c108537142a214040f67de82c772e322b2d94768595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afce63cb
59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nqntn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:36Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.299576 4970 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lmfjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82e1dc9-1c79-493a-96aa-8f710bb2d6c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92521dbdc6c17a0bce192b9c0f1fed0d09d94bf4e0fe88633be1801c2a7c04e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klc5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8459610be11ce0478fe59940d0c9034cb80cab890311db1cb81e2408812decb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klc5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:07:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lmfjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:36Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.312905 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:36Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.328583 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:36Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.342088 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sgdqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81da4c74-d93e-4a7a-848a-c3539268368b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2ab60fae86ad3f0324f90bd9aec3bd2d65698da0aad755faa0db18178f08bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:07:34Z\\\",\\\"message\\\":\\\"2025-12-09T12:06:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b90a4c06-874d-455b-979f-46683b53967b\\\\n2025-12-09T12:06:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b90a4c06-874d-455b-979f-46683b53967b to /host/opt/cni/bin/\\\\n2025-12-09T12:06:49Z [verbose] multus-daemon started\\\\n2025-12-09T12:06:49Z [verbose] Readiness Indicator file check\\\\n2025-12-09T12:07:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5c5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sgdqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:36Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.356536 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a283668d-a884-4d62-95e2-1f0ae672f61c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f99b54b4c68dbff89836aae041da85d469d5df28de2d23df843262af2120f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e1cad556f69de186f04494224fdf6a124eac70716bd1838622a640d8045baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rtdjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:36Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.370117 4970 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-cp4b2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e10a28a-08f5-4679-9d90-532322e9e87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9jkk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9jkk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cp4b2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:36Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.373574 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.373645 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 
12:07:36.373670 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.373701 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.373722 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:36Z","lastTransitionTime":"2025-12-09T12:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.403640 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fcdfd55-d327-447e-8f4c-7a84f6ddf65d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe038bd99b053264014f6bdcf6e552df19aeaf0eefb1cad95087668edc14e197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968ef513b108b5e6469f2b11033f3c2af6276809b44f88da6dd279ef5c917566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83e4a0f5cdc0dbab781645e3af71029f89d6e26f881957510b76c4023d49d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0e892d8f1c25b644b4ec6779f1c1fbf9605d352003feb593ddf48f0b2fe7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b06c1523a6e3fc383e18aabc13379866e891da9e5f59ccd8098a530e206c780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:36Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.432013 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be6172070a88e0df4826bf5bd31e8a8ca840d5325604ea751ff2a2da37c86613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be6172070a88e0df4826bf5bd31e8a8ca840d5325604ea751ff2a2da37c86613\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:07:16Z\\\",\\\"message\\\":\\\"712973235162149816) with []\\\\nI1209 12:07:15.741219 6578 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1209 12:07:15.741236 6578 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1209 12:07:15.741327 6578 factory.go:1336] Added *v1.Node event handler 7\\\\nI1209 12:07:15.741394 6578 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1209 12:07:15.741844 6578 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1209 12:07:15.741938 6578 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1209 12:07:15.741981 6578 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 12:07:15.741991 6578 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 12:07:15.742084 6578 factory.go:656] Stopping watch factory\\\\nI1209 12:07:15.742109 6578 ovnkube.go:599] Stopped ovnkube\\\\nI1209 12:07:15.742148 6578 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 12:07:15.742168 6578 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 12:07:15.742183 6578 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1209 12:07:15.742311 6578 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:07:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sxdvn_openshift-ovn-kubernetes(515fe67d-b7b7-4edb-a2b0-6f8794e8d802)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sxdvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:36Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.445472 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8vkz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4da5831e87291bc84ec0b0a2a081543c263a74b18279efd9cb22677db3da456b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fwqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8vkz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:36Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.456868 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:36Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.470336 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:36Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.479232 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.479290 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.479301 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.479318 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.479328 4970 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:36Z","lastTransitionTime":"2025-12-09T12:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.486282 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c99a0733d681a2255ed6650323cdb8e0759b636118f61f08f3d841d34f81ccb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174956f8c9198d5c992709045948df1e5c0f4e6b56ea0d0d37f76c078736ab4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:36Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.496722 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:36Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.506604 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13a7444e-3a4e-4f76-bc75-31d7955f81f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd30e293f9fd27aef372f8f4cba8a786a532a57b089862054f0371fff1cb3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d776ac4b03c9fe2c77bb3194ce3e407b542dfa90fd7202cd6f406af4103b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1931f2ab695ac019886e253d6ff1ea32af43da13f3b9946b9087b2386ea05361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3b52637f1f6d30bd6990b616149e829f5bce7926ee8ef9ae796ec3c52dd67e4\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3b52637f1f6d30bd6990b616149e829f5bce7926ee8ef9ae796ec3c52dd67e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:36Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.581701 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.581756 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.581767 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.581781 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.581791 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:36Z","lastTransitionTime":"2025-12-09T12:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.684261 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.684300 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.684309 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.684325 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.684335 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:36Z","lastTransitionTime":"2025-12-09T12:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.786917 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.786953 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.786964 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.786979 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.786990 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:36Z","lastTransitionTime":"2025-12-09T12:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.811902 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 12:07:36 crc kubenswrapper[4970]: E1209 12:07:36.812087 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.889828 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.889894 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.889913 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.889935 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.889952 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:36Z","lastTransitionTime":"2025-12-09T12:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.992699 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.992808 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.992876 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.992901 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:36 crc kubenswrapper[4970]: I1209 12:07:36.992917 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:36Z","lastTransitionTime":"2025-12-09T12:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.095679 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.095750 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.095766 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.095789 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.095808 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:37Z","lastTransitionTime":"2025-12-09T12:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.198080 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.198125 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.198133 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.198147 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.198158 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:37Z","lastTransitionTime":"2025-12-09T12:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.300632 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.300672 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.300684 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.300699 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.300709 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:37Z","lastTransitionTime":"2025-12-09T12:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.403673 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.403730 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.403746 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.403799 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.403811 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:37Z","lastTransitionTime":"2025-12-09T12:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.506696 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.506729 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.506737 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.506750 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.506759 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:37Z","lastTransitionTime":"2025-12-09T12:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.608797 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.608839 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.608851 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.608867 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.608880 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:37Z","lastTransitionTime":"2025-12-09T12:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.710767 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.710802 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.710812 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.710826 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.710837 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:37Z","lastTransitionTime":"2025-12-09T12:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.812198 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.812381 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2"
Dec 09 12:07:37 crc kubenswrapper[4970]: E1209 12:07:37.812457 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.812522 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 12:07:37 crc kubenswrapper[4970]: E1209 12:07:37.812660 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f"
Dec 09 12:07:37 crc kubenswrapper[4970]: E1209 12:07:37.812806 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.814136 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.814169 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.814181 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.814203 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.814215 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:37Z","lastTransitionTime":"2025-12-09T12:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.827715 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c99a0733d681a2255ed6650323cdb8e0759b636118f61f08f3d841d34f81ccb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174956f8c9198d5c992709045948df1e5c0f4e6b56ea0d0d37f76c078736ab4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:37Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.844041 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:37Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.861996 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13a7444e-3a4e-4f76-bc75-31d7955f81f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd30e293f9fd27aef372f8f4cba8a786a532a57b089862054f0371fff1cb3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d776ac4b03c9fe2c77bb3194ce3e407b542dfa90fd7202cd6f406af4103b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06
:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1931f2ab695ac019886e253d6ff1ea32af43da13f3b9946b9087b2386ea05361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3b52637f1f6d30bd6990b616149e829f5bce7926ee8ef9ae796ec3c52dd67e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3b52637f1f6d30bd6990b616149e829f5bce7926ee8ef9ae796ec3c52dd67e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:37Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.882473 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:37Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.893790 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4gt4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c6101aab0571a1f01e38ec24c74f99ad040aca13e5c95e91b28d3fc30b11ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4gt4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:37Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.908311 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0916312-9225-4366-81e2-f4a34d1ae9fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33ed06e2c3b4daaa783d0c108537142a214040f67de82c772e322b2d94768595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nqntn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:37Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.916633 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.916736 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:37 crc 
kubenswrapper[4970]: I1209 12:07:37.916750 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.916765 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.916801 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:37Z","lastTransitionTime":"2025-12-09T12:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.922097 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lmfjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82e1dc9-1c79-493a-96aa-8f710bb2d6c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92521dbdc6c17a0bce192b9c0f1fed0d09d94bf4e0fe88633be1801c2a7c04e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klc5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8459610be11ce0478fe59940d0c9034cb80cab890311db1cb81e2408812decb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:0
7:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klc5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:07:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lmfjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:37Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.936340 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:37Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.948807 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:37Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.958900 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54bd37ab38c155cbc907218582095ffb8fc9abcfdf2601d66a44d9fd1d2743b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:37Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.971145 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a283668d-a884-4d62-95e2-1f0ae672f61c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f99b54b4c68dbff89836aae041da85d469d5df28de2d23df843262af2120f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e1cad556f69de186f04494224fdf6a124eac70716bd1838622a640d8045baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-rtdjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:37Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:37 crc kubenswrapper[4970]: I1209 12:07:37.982350 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cp4b2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e10a28a-08f5-4679-9d90-532322e9e87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9jkk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9jkk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cp4b2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-12-09T12:07:37Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.000059 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fcdfd55-d327-447e-8f4c-7a84f6ddf65d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe038bd99b053264014f6bdcf6e552df19aeaf0eefb1cad95087668edc14e197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968ef513b108b5e6469f2b11033f3c2af6276809b44f88da6dd279ef5c917566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83e4a0f5cdc0dbab781645e3af71029f89d6e26f881957510b76c4023d49d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-
12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0e892d8f1c25b644b4ec6779f1c1fbf9605d352003feb593ddf48f0b2fe7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b06c1523a6e3fc383e18aabc13379866e891da9e5f59ccd8098a530e206c780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89
cfc223b4b68cb743f95ef13ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:37Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.010992 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:38Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.019237 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.019301 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.019313 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.019332 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.019343 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:38Z","lastTransitionTime":"2025-12-09T12:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.024241 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sgdqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81da4c74-d93e-4a7a-848a-c3539268368b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2ab60fae86ad3f0324f90bd9aec3bd2d65698da0aad755faa0db18178f08bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:07:34Z\\\",\\\"message\\\":\\\"2025-12-09T12:06:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b90a4c06-874d-455b-979f-46683b53967b\\\\n2025-12-09T12:06:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b90a4c06-874d-455b-979f-46683b53967b to /host/opt/cni/bin/\\\\n2025-12-09T12:06:49Z [verbose] multus-daemon started\\\\n2025-12-09T12:06:49Z [verbose] Readiness Indicator file check\\\\n2025-12-09T12:07:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5c5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sgdqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:38Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.037964 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:38Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.055953 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be6172070a88e0df4826bf5bd31e8a8ca840d532
5604ea751ff2a2da37c86613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be6172070a88e0df4826bf5bd31e8a8ca840d5325604ea751ff2a2da37c86613\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:07:16Z\\\",\\\"message\\\":\\\"712973235162149816) with []\\\\nI1209 12:07:15.741219 6578 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1209 12:07:15.741236 6578 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1209 12:07:15.741327 6578 factory.go:1336] Added *v1.Node event handler 7\\\\nI1209 12:07:15.741394 6578 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1209 12:07:15.741844 6578 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1209 12:07:15.741938 6578 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1209 12:07:15.741981 6578 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 12:07:15.741991 6578 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 12:07:15.742084 6578 factory.go:656] Stopping watch factory\\\\nI1209 12:07:15.742109 6578 ovnkube.go:599] Stopped ovnkube\\\\nI1209 12:07:15.742148 6578 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 12:07:15.742168 6578 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 12:07:15.742183 6578 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1209 12:07:15.742311 6578 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:07:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sxdvn_openshift-ovn-kubernetes(515fe67d-b7b7-4edb-a2b0-6f8794e8d802)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sxdvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:38Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.071263 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8vkz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4da5831e87291bc84ec0b0a2a081543c263a74b18279efd9cb22677db3da456b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fwqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8vkz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:38Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.121315 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.121382 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.121395 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.121426 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.121437 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:38Z","lastTransitionTime":"2025-12-09T12:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.223577 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.223648 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.223660 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.223679 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.223692 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:38Z","lastTransitionTime":"2025-12-09T12:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.328450 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.328509 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.328519 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.328535 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.328546 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:38Z","lastTransitionTime":"2025-12-09T12:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.430425 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.430663 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.430727 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.430786 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.430847 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:38Z","lastTransitionTime":"2025-12-09T12:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.533715 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.533779 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.533793 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.533816 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.533830 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:38Z","lastTransitionTime":"2025-12-09T12:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.637809 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.637863 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.637879 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.637902 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.637919 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:38Z","lastTransitionTime":"2025-12-09T12:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.811576 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:07:38 crc kubenswrapper[4970]: E1209 12:07:38.811722 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.939388 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.939436 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.939444 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.939458 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:38 crc kubenswrapper[4970]: I1209 12:07:38.939469 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:38Z","lastTransitionTime":"2025-12-09T12:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.041284 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.041325 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.041334 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.041350 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.041362 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:39Z","lastTransitionTime":"2025-12-09T12:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.144575 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.144613 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.144622 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.144636 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.144646 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:39Z","lastTransitionTime":"2025-12-09T12:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.247023 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.247068 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.247080 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.247098 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.247110 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:39Z","lastTransitionTime":"2025-12-09T12:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.350406 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.350479 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.350496 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.350521 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.350540 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:39Z","lastTransitionTime":"2025-12-09T12:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.453225 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.453281 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.453291 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.453304 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.453315 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:39Z","lastTransitionTime":"2025-12-09T12:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.555752 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.555789 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.555799 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.555814 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.555824 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:39Z","lastTransitionTime":"2025-12-09T12:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.658340 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.658392 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.658404 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.658422 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.658434 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:39Z","lastTransitionTime":"2025-12-09T12:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.760995 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.761028 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.761037 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.761049 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.761059 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:39Z","lastTransitionTime":"2025-12-09T12:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.812402 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.812421 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.812519 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:07:39 crc kubenswrapper[4970]: E1209 12:07:39.812611 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:07:39 crc kubenswrapper[4970]: E1209 12:07:39.812694 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f" Dec 09 12:07:39 crc kubenswrapper[4970]: E1209 12:07:39.812916 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.863495 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.863540 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.863549 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.863564 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.863574 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:39Z","lastTransitionTime":"2025-12-09T12:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.966349 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.966389 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.966401 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.966416 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:39 crc kubenswrapper[4970]: I1209 12:07:39.966425 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:39Z","lastTransitionTime":"2025-12-09T12:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.068352 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.068408 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.068422 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.068442 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.068454 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:40Z","lastTransitionTime":"2025-12-09T12:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.170477 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.170545 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.170565 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.170590 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.170612 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:40Z","lastTransitionTime":"2025-12-09T12:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.272026 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.272073 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.272082 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.272097 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.272107 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:40Z","lastTransitionTime":"2025-12-09T12:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.375938 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.375978 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.375988 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.376005 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.376019 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:40Z","lastTransitionTime":"2025-12-09T12:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.478476 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.478524 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.478536 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.478552 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.478564 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:40Z","lastTransitionTime":"2025-12-09T12:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.581270 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.581306 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.581316 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.581330 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.581338 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:40Z","lastTransitionTime":"2025-12-09T12:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.683764 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.683809 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.683818 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.683837 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.683845 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:40Z","lastTransitionTime":"2025-12-09T12:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.787140 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.787198 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.787216 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.787277 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.787297 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:40Z","lastTransitionTime":"2025-12-09T12:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.811845 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:07:40 crc kubenswrapper[4970]: E1209 12:07:40.812016 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.890379 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.890427 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.890442 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.890458 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.890470 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:40Z","lastTransitionTime":"2025-12-09T12:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.992979 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.993026 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.993037 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.993057 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:40 crc kubenswrapper[4970]: I1209 12:07:40.993070 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:40Z","lastTransitionTime":"2025-12-09T12:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.095896 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.095944 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.095954 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.095968 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.095977 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:41Z","lastTransitionTime":"2025-12-09T12:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.198448 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.198508 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.198528 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.198552 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.198568 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:41Z","lastTransitionTime":"2025-12-09T12:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.300947 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.300991 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.301005 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.301022 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.301035 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:41Z","lastTransitionTime":"2025-12-09T12:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.403285 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.403322 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.403366 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.403383 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.403397 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:41Z","lastTransitionTime":"2025-12-09T12:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.505961 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.505995 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.506006 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.506020 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.506029 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:41Z","lastTransitionTime":"2025-12-09T12:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.608281 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.608320 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.608328 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.608343 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.608352 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:41Z","lastTransitionTime":"2025-12-09T12:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.711123 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.711189 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.711208 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.711285 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.711304 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:41Z","lastTransitionTime":"2025-12-09T12:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.812381 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.812509 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:07:41 crc kubenswrapper[4970]: E1209 12:07:41.812500 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.812542 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:07:41 crc kubenswrapper[4970]: E1209 12:07:41.812798 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:07:41 crc kubenswrapper[4970]: E1209 12:07:41.812876 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.814000 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.814083 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.814110 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.814138 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.814154 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:41Z","lastTransitionTime":"2025-12-09T12:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.916995 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.917060 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.917072 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.917084 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.917094 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:41Z","lastTransitionTime":"2025-12-09T12:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.933637 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.933715 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.933736 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.933759 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.933775 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:41Z","lastTransitionTime":"2025-12-09T12:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:41 crc kubenswrapper[4970]: E1209 12:07:41.949327 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6d3f717-4497-45c3-b697-e6823c6fdf80\\\",\\\"systemUUID\\\":\\\"44b8bf7c-aaf1-4d6f-acd6-22019bebcd7e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:41Z is after 2025-08-24T17:21:41Z"
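Here the failure moves past the missing CNI config: the kubelet's node-status patch (a strategic merge patch carrying the four conditions, the allocatable/capacity figures and the cached image list above) is rejected because the API server has to call the node.network-node-identity.openshift.io admission webhook on 127.0.0.1:9743, and that webhook's serving certificate expired on 2025-08-24T17:21:41Z, months before the node clock of 2025-12-09T12:07:41Z. The check that fails during the TLS handshake is the standard x509 validity-window test; the Go sketch below reproduces it against a PEM file (the path is hypothetical, to be pointed at whatever certificate the webhook serves):

// certcheck.go: sketch of the validity-window test behind the x509 error
// above; parses one PEM certificate and compares the clock with its
// NotBefore/NotAfter bounds.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/tmp/webhook-serving-cert.pem") // hypothetical path
	if err != nil {
		fmt.Println("read:", err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		os.Exit(1)
	}
	now := time.Now().UTC()
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid until %s\n", cert.NotBefore)
	case now.After(cert.NotAfter):
		// The handshake in the log hit this branch:
		// "current time 2025-12-09T12:07:41Z is after 2025-08-24T17:21:41Z".
		fmt.Printf("expired at %s (now %s)\n", cert.NotAfter, now)
	default:
		fmt.Printf("valid until %s\n", cert.NotAfter)
	}
}

The kubelet keeps retrying the patch with an identical payload, which is why the same error recurs at 12:07:42 below; the condition clears only once the webhook's certificate is rotated (on CRC this pattern is typically seen when the cluster has been offline past its certificates' validity, and rotation normally happens during cluster start-up).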
event="NodeHasNoDiskPressure" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.974401 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.974421 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.974433 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:41Z","lastTransitionTime":"2025-12-09T12:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:41 crc kubenswrapper[4970]: E1209 12:07:41.988961 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6d3f717-4497-45c3-b697-e6823c6fdf80\\\",\\\"systemUUID\\\":\\\"44b8bf7c-aaf1-4d6f-acd6-22019bebcd7e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:41Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.992319 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.992370 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.992381 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.992398 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:41 crc kubenswrapper[4970]: I1209 12:07:41.992409 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:41Z","lastTransitionTime":"2025-12-09T12:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:42 crc kubenswrapper[4970]: E1209 12:07:42.006291 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:41Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6d3f717-4497-45c3-b697-e6823c6fdf80\\\",\\\"systemUUID\\\":\\\"44b8bf7c-aaf1-4d6f-acd6-22019bebcd7e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:42Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.010776 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.010819 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.010830 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.010848 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.010861 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:42Z","lastTransitionTime":"2025-12-09T12:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:42 crc kubenswrapper[4970]: E1209 12:07:42.023637 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T12:07:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[... image list elided; byte-identical to the 12:07:42.006291 attempt above ...],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e6d3f717-4497-45c3-b697-e6823c6fdf80\\\",\\\"systemUUID\\\":\\\"44b8bf7c-aaf1-4d6f-acd6-22019bebcd7e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:42Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:42 crc kubenswrapper[4970]: E1209 12:07:42.023859 4970 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.025586 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.025634 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.025650 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.025673 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.025691 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:42Z","lastTransitionTime":"2025-12-09T12:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.128111 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.128154 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.128168 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.128185 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.128196 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:42Z","lastTransitionTime":"2025-12-09T12:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.230558 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.230616 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.230638 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.230665 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.230681 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:42Z","lastTransitionTime":"2025-12-09T12:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.333302 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.333338 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.333349 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.333366 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.333378 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:42Z","lastTransitionTime":"2025-12-09T12:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.436267 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.436302 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.436314 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.436330 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.436341 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:42Z","lastTransitionTime":"2025-12-09T12:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.539094 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.539130 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.539141 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.539157 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.539168 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:42Z","lastTransitionTime":"2025-12-09T12:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.641527 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.641579 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.641595 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.641620 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.641637 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:42Z","lastTransitionTime":"2025-12-09T12:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.744464 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.744509 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.744525 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.744546 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.744562 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:42Z","lastTransitionTime":"2025-12-09T12:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.812377 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:07:42 crc kubenswrapper[4970]: E1209 12:07:42.812587 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.847856 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.847901 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.847909 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.847924 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.847933 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:42Z","lastTransitionTime":"2025-12-09T12:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.950404 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.950474 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.950508 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.950538 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:42 crc kubenswrapper[4970]: I1209 12:07:42.950560 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:42Z","lastTransitionTime":"2025-12-09T12:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.052728 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.052770 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.052784 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.052804 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.052819 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:43Z","lastTransitionTime":"2025-12-09T12:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.155280 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.155342 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.155359 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.155383 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.155401 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:43Z","lastTransitionTime":"2025-12-09T12:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.257461 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.257522 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.257539 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.257566 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.257584 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:43Z","lastTransitionTime":"2025-12-09T12:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.360971 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.360998 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.361005 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.361034 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.361045 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:43Z","lastTransitionTime":"2025-12-09T12:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.464106 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.464192 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.464217 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.464282 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.464312 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:43Z","lastTransitionTime":"2025-12-09T12:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.568117 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.568190 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.568211 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.568236 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.568282 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:43Z","lastTransitionTime":"2025-12-09T12:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.671626 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.671694 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.671712 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.671733 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.671749 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:43Z","lastTransitionTime":"2025-12-09T12:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.774352 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.774398 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.774406 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.774420 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.774429 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:43Z","lastTransitionTime":"2025-12-09T12:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.812029 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.812178 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.812209 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:07:43 crc kubenswrapper[4970]: E1209 12:07:43.812329 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f" Dec 09 12:07:43 crc kubenswrapper[4970]: E1209 12:07:43.812523 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:07:43 crc kubenswrapper[4970]: E1209 12:07:43.813041 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.813363 4970 scope.go:117] "RemoveContainer" containerID="be6172070a88e0df4826bf5bd31e8a8ca840d5325604ea751ff2a2da37c86613" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.876977 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.877030 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.877052 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.877073 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.877088 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:43Z","lastTransitionTime":"2025-12-09T12:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.979768 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.979819 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.979831 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.979850 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:43 crc kubenswrapper[4970]: I1209 12:07:43.979862 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:43Z","lastTransitionTime":"2025-12-09T12:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.083022 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.083065 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.083075 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.083090 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.083103 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:44Z","lastTransitionTime":"2025-12-09T12:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.185049 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.185085 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.185096 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.185112 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.185121 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:44Z","lastTransitionTime":"2025-12-09T12:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.286940 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.286987 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.286998 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.287015 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.287027 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:44Z","lastTransitionTime":"2025-12-09T12:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.389972 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.390016 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.390028 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.390046 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.390057 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:44Z","lastTransitionTime":"2025-12-09T12:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.492790 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.492841 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.492852 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.492869 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.492881 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:44Z","lastTransitionTime":"2025-12-09T12:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.594806 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.594843 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.594855 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.594872 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.594882 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:44Z","lastTransitionTime":"2025-12-09T12:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.697103 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.697160 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.697178 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.697199 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.697215 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:44Z","lastTransitionTime":"2025-12-09T12:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.799448 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.799506 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.799523 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.799543 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.799558 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:44Z","lastTransitionTime":"2025-12-09T12:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.811937 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:07:44 crc kubenswrapper[4970]: E1209 12:07:44.812070 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.902664 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.902702 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.902711 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.902726 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:44 crc kubenswrapper[4970]: I1209 12:07:44.902735 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:44Z","lastTransitionTime":"2025-12-09T12:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.005861 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.005901 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.005911 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.005934 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.005944 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:45Z","lastTransitionTime":"2025-12-09T12:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.109028 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.109079 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.109093 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.109111 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.109124 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:45Z","lastTransitionTime":"2025-12-09T12:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.211494 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.211564 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.211590 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.211621 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.211643 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:45Z","lastTransitionTime":"2025-12-09T12:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.251502 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sxdvn_515fe67d-b7b7-4edb-a2b0-6f8794e8d802/ovnkube-controller/2.log" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.254226 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" event={"ID":"515fe67d-b7b7-4edb-a2b0-6f8794e8d802","Type":"ContainerStarted","Data":"444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b"} Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.254759 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.268603 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:45Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.289441 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sgdqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81da4c74-d93e-4a7a-848a-c3539268368b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2ab60fae86ad3f0324f90bd9aec3bd2d65698da0aad755faa0db18178f08bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:07:34Z\\\",\\\"message\\\":\\\"2025-12-09T12:06:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b90a4c06-874d-455b-979f-46683b53967b\\\\n2025-12-09T12:06:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b90a4c06-874d-455b-979f-46683b53967b to /host/opt/cni/bin/\\\\n2025-12-09T12:06:49Z [verbose] multus-daemon started\\\\n2025-12-09T12:06:49Z [verbose] Readiness Indicator file check\\\\n2025-12-09T12:07:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5c5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sgdqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:45Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.308780 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a283668d-a884-4d62-95e2-1f0ae672f61c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f99b54b4c68dbff89836aae041da85d469d5df28de2d23df843262af2120f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e1cad556f69de186f04494224fdf6a124eac70716bd1838622a640d8045baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rtdjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:45Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.314195 4970 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.314284 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.314304 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.314325 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.314341 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:45Z","lastTransitionTime":"2025-12-09T12:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.326464 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cp4b2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e10a28a-08f5-4679-9d90-532322e9e87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9jkk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9jkk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cp4b2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:45Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.353836 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fcdfd55-d327-447e-8f4c-7a84f6ddf65d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe038bd99b053264014f6bdcf6e552df19aeaf0eefb1cad95087668edc14e197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968ef513b108b5e6469f2b11033f3c2af6276809b44f88da6dd279ef5c917566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83e4a0f5cdc0dbab781645e3af71029f89d6e26f881957510b76c4023d49d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0e892d8f1c25b644b4ec6779f1c1fbf9605d3
52003feb593ddf48f0b2fe7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b06c1523a6e3fc383e18aabc13379866e891da9e5f59ccd8098a530e206c780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:45Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.381546 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://444b5f77be105a9b34a1f339e750641ecf51dc33
674ae2f1f57b3d99301d720b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be6172070a88e0df4826bf5bd31e8a8ca840d5325604ea751ff2a2da37c86613\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:07:16Z\\\",\\\"message\\\":\\\"712973235162149816) with []\\\\nI1209 12:07:15.741219 6578 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1209 12:07:15.741236 6578 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1209 12:07:15.741327 6578 factory.go:1336] Added *v1.Node event handler 7\\\\nI1209 12:07:15.741394 6578 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1209 12:07:15.741844 6578 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1209 12:07:15.741938 6578 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1209 12:07:15.741981 6578 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 12:07:15.741991 6578 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 12:07:15.742084 6578 factory.go:656] Stopping watch factory\\\\nI1209 12:07:15.742109 6578 ovnkube.go:599] Stopped ovnkube\\\\nI1209 12:07:15.742148 6578 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 12:07:15.742168 6578 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 12:07:15.742183 6578 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1209 12:07:15.742311 6578 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:07:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:07:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sxdvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:45Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.393111 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8vkz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4da5831e87291bc84ec0b0a2a081543c263a74b18279efd9cb22677db3da456b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fwqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8vkz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:45Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.408020 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:45Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.417118 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.417162 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.417173 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.417193 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.417206 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:45Z","lastTransitionTime":"2025-12-09T12:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.426227 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:45Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.443516 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c99a0733d681a2255ed6650323cdb8e0759b636118f61f08f3d841d34f81ccb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174956f8c9198d5c992709045948df1e5c0f4e6b56ea0d0d37f76c078736ab4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:45Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.457221 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:45Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.473292 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13a7444e-3a4e-4f76-bc75-31d7955f81f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd30e293f9fd27aef372f8f4cba8a786a532a57b089862054f0371fff1cb3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d776ac4b03c9fe2c77bb3194ce3e407b542dfa90fd7202cd6f406af4103b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1931f2ab695ac019886e253d6ff1ea32af43da13f3b9946b9087b2386ea05361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3b52637f1f6d30bd6990b616149e829f5bce7926ee8ef9ae796ec3c52dd67e4\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3b52637f1f6d30bd6990b616149e829f5bce7926ee8ef9ae796ec3c52dd67e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:45Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.487733 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:45Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.526444 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.526502 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.526514 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.526532 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.526547 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:45Z","lastTransitionTime":"2025-12-09T12:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.530576 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54bd37ab38c155cbc907218582095ffb8fc9abcfdf2601d66a44d9fd1d2743b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:45Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.548749 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4gt4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c6101aab0571a1f01e38ec24c74f99ad040aca13e5c95e91b28d3fc30b11ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4gt4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:45Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.575643 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0916312-9225-4366-81e2-f4a34d1ae9fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33ed06e2c3b4daaa783d0c108537142a214040f67de82c772e322b2d94768595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nqntn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:45Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.586700 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lmfjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82e1dc9-1c79-493a-96aa-8f710bb2d6c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92521dbdc6c17a0bce192b9c0f1fed0d09d94bf4e0fe88633be1801c2a7c04e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klc5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8459610be11ce0478fe59940d0c9034cb80cab890311db1cb81e2408812decb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klc5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:07:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lmfjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:45Z is after 2025-08-24T17:21:41Z" Dec 09 
12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.598297 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:45Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.628811 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.628861 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.628877 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.628899 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.628916 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:45Z","lastTransitionTime":"2025-12-09T12:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.731802 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.731862 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.731879 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.731902 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.731922 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:45Z","lastTransitionTime":"2025-12-09T12:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.812084 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:07:45 crc kubenswrapper[4970]: E1209 12:07:45.812521 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.812111 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:07:45 crc kubenswrapper[4970]: E1209 12:07:45.812635 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.812084 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:07:45 crc kubenswrapper[4970]: E1209 12:07:45.812891 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.834590 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.834652 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.834671 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.834695 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.834712 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:45Z","lastTransitionTime":"2025-12-09T12:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.937331 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.937375 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.937400 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.937417 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:45 crc kubenswrapper[4970]: I1209 12:07:45.937426 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:45Z","lastTransitionTime":"2025-12-09T12:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.039616 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.039644 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.039653 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.039665 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.039674 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:46Z","lastTransitionTime":"2025-12-09T12:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.142704 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.142749 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.142764 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.142786 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.142801 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:46Z","lastTransitionTime":"2025-12-09T12:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.245867 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.245918 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.245934 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.245957 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.245974 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:46Z","lastTransitionTime":"2025-12-09T12:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.260553 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sxdvn_515fe67d-b7b7-4edb-a2b0-6f8794e8d802/ovnkube-controller/3.log" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.261123 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sxdvn_515fe67d-b7b7-4edb-a2b0-6f8794e8d802/ovnkube-controller/2.log" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.264431 4970 generic.go:334] "Generic (PLEG): container finished" podID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerID="444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b" exitCode=1 Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.264474 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" event={"ID":"515fe67d-b7b7-4edb-a2b0-6f8794e8d802","Type":"ContainerDied","Data":"444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b"} Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.264512 4970 scope.go:117] "RemoveContainer" containerID="be6172070a88e0df4826bf5bd31e8a8ca840d5325604ea751ff2a2da37c86613" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.269106 4970 scope.go:117] "RemoveContainer" containerID="444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b" Dec 09 12:07:46 crc kubenswrapper[4970]: E1209 12:07:46.269399 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-sxdvn_openshift-ovn-kubernetes(515fe67d-b7b7-4edb-a2b0-6f8794e8d802)\"" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.293771 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fcdfd55-d327-447e-8f4c-7a84f6ddf65d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe038bd99b053264014f6bdcf6e552df19aeaf0eefb1cad95087668edc14e197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968ef513b108b5e6469f2b11033f3c2af6276809b44f88da6dd279ef5c917566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83e4a0f5cdc0dbab781645e3af71029f89d6e26f881957510b76c4023d49d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0e892d8f1c25b644b4ec6779f1c1fbf9605d3
52003feb593ddf48f0b2fe7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b06c1523a6e3fc383e18aabc13379866e891da9e5f59ccd8098a530e206c780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.307663 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.322193 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sgdqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81da4c74-d93e-4a7a-848a-c3539268368b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2ab60fae86ad3f0324f90bd9aec3bd2d65698da0aad755faa0db18178f08bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:07:34Z\\\",\\\"message\\\":\\\"2025-12-09T12:06:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b90a4c06-874d-455b-979f-46683b53967b\\\\n2025-12-09T12:06:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b90a4c06-874d-455b-979f-46683b53967b to /host/opt/cni/bin/\\\\n2025-12-09T12:06:49Z [verbose] multus-daemon started\\\\n2025-12-09T12:06:49Z [verbose] Readiness Indicator file check\\\\n2025-12-09T12:07:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5c5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sgdqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.334140 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" err="failed to patch status 
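
The multus-sgdqg record above explains its own restart: the first kube-multus run copied the CNI plugins at 12:06:49, then waited 45 seconds for the readiness indicator file /host/run/multus/cni/net.d/10-ovn-kubernetes.conf before PollImmediate timed out and the container exited 1; the retry that started at 12:07:35 went Ready once the file appeared. The real daemon performs this wait in Go with apimachinery's polling helpers; the Python sketch below only mirrors the shape of that loop, with the path and the 45-second budget taken from this log.

import os
import time

def wait_for_file(path, timeout=45.0, interval=1.0):
    """Poll until `path` exists or `timeout` elapses (check first, then sleep)."""
    deadline = time.monotonic() + timeout
    while True:
        if os.path.exists(path):
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)

INDICATOR = '/host/run/multus/cni/net.d/10-ovn-kubernetes.conf'
if not wait_for_file(INDICATOR):
    raise SystemExit(f'timed out waiting for readiness indicator file {INDICATOR}')
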
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a283668d-a884-4d62-95e2-1f0ae672f61c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f99b54b4c68dbff89836aae041da85d469d5df28de2d23df843262af2120f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e1cad556f69de186f04494224fdf6a124eac70716bd1838622a640d8045baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rtdjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.345953 4970 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-cp4b2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e10a28a-08f5-4679-9d90-532322e9e87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9jkk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9jkk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cp4b2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.348359 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.348520 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 
12:07:46.348604 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.348689 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.348807 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:46Z","lastTransitionTime":"2025-12-09T12:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.362078 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.385897 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" err="failed to patch status 
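
The "Node became not ready" record above names the other half of the problem: Ready goes False with reason KubeletNotReady because no CNI configuration file exists in /etc/kubernetes/cni/net.d/. The chain is consistent with everything else in this window: the expired certificate belongs to network-node-identity, ovnkube-controller keeps crashing (its status record follows below), the OVN CNI config therefore never lands on disk, and the kubelet keeps the node NotReady. A quick on-node check of the directory the condition message names, sketched with that exact path:

from pathlib import Path

# Path exactly as reported in the NodeNotReady condition on this node.
cni_dir = Path('/etc/kubernetes/cni/net.d')

confs = sorted(p.name for p in cni_dir.glob('*.conf*')) if cni_dir.is_dir() else []
if confs:
    print('CNI configs present:', ', '.join(confs))
else:
    print('no CNI configuration file; container runtime network stays NotReady')
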
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be6172070a88e0df4826bf5bd31e8a8ca840d5325604ea751ff2a2da37c86613\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:07:16Z\\\",\\\"message\\\":\\\"712973235162149816) with []\\\\nI1209 12:07:15.741219 6578 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI1209 12:07:15.741236 6578 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI1209 12:07:15.741327 6578 factory.go:1336] Added *v1.Node event handler 7\\\\nI1209 12:07:15.741394 6578 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI1209 12:07:15.741844 6578 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI1209 12:07:15.741938 6578 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI1209 12:07:15.741981 6578 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 12:07:15.741991 6578 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 12:07:15.742084 6578 factory.go:656] Stopping watch factory\\\\nI1209 12:07:15.742109 6578 ovnkube.go:599] Stopped ovnkube\\\\nI1209 12:07:15.742148 6578 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 12:07:15.742168 6578 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 12:07:15.742183 6578 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1209 12:07:15.742311 6578 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:07:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:07:45Z\\\",\\\"message\\\":\\\"12:07:45.006866 7006 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 12:07:45.007987 7006 handler.go:190] Sending *v1.EgressIP event handler 8 for 
removal\\\\nI1209 12:07:45.008053 7006 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1209 12:07:45.008102 7006 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1209 12:07:45.008129 7006 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 12:07:45.008156 7006 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 12:07:45.008167 7006 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 12:07:45.008186 7006 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1209 12:07:45.008199 7006 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 12:07:45.008203 7006 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 12:07:45.008220 7006 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1209 12:07:45.008231 7006 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1209 12:07:45.008290 7006 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1209 12:07:45.008325 7006 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 12:07:45.008336 7006 factory.go:656] Stopping watch factory\\\\nI1209 12:07:45.008360 7006 handler.go:208] Removed *v1.Pod ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:07:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sxdvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.401198 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8vkz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4da5831e87291bc84ec0b0a2a081543c263a74b18279efd9cb22677db3da456b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fwqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8vkz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.417842 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"13a7444e-3a4e-4f76-bc75-31d7955f81f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd30e293f9fd27aef372f8f4cba8a786a532a57b089862054f0371fff1cb3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d776ac4b03c9fe2c77bb3194ce3e407b542dfa90fd7202cd6f406af4103b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1931f2ab695ac019886e253d6ff1ea32af43da13f3b9946b9087b2386ea05361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3b52637f1f6d30bd6990b616149e829f5bce7926ee8ef9ae796ec3c52dd67e4\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3b52637f1f6d30bd6990b616149e829f5bce7926ee8ef9ae796ec3c52dd67e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.434073 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operato
r@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.449589 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
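
The kube-apiserver-crc record just above documents a recovery rather than a loop: kube-apiserver-check-endpoints died once at 12:06:46 with exit code 255, its last-gasp log ending in pods "kube-apiserver-crc" not found after TLS handshake timeouts while the apiserver was still coming up, and it has been Running since 12:06:47. The insecure-cipher warnings in that log tail are routine startup noise from its serving configuration, not the cause of the exit. Reading such embedded last-gasp logs is much easier once the patch is recovered as real JSON. In this capture the patch sits behind two escaping layers inside the err="..." string, so two unicode_escape passes decode it; the number of passes depends on how a journal was saved, so treat this as a sketch rather than a robust parser.

import codecs
import json
import re

# One record per line, as journalctl normally emits; the non-greedy group keeps
# the match to the first embedded patch on the line.
PATCH = re.compile(r'failed to patch status \\"(.*?)\\" for pod', re.DOTALL)

def decode_patch(line):
    """Recover the embedded status patch from one journal line as a dict."""
    raw = PATCH.search(line).group(1)
    for _ in range(2):          # one unicode_escape pass per escaping layer here
        raw = codecs.decode(raw, 'unicode_escape')
    return json.loads(raw)

# Example use: print each container's previous termination log, if any.
# for cs in decode_patch(line)['status'].get('containerStatuses', []):
#     term = cs.get('lastState', {}).get('terminated', {})
#     if term.get('message'):
#         print(cs['name'], '->', term['message'])
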
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c99a0733d681a2255ed6650323cdb8e0759b636118f61f08f3d841d34f81ccb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174956f8c9198d5c992709045948df1e5c0f4e6b56ea0d0d37f76c078736ab4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.451883 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.452020 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.452235 4970 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.452375 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.452471 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:46Z","lastTransitionTime":"2025-12-09T12:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.466215 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.480169 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lmfjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82e1dc9-1c79-493a-96aa-8f710bb2d6c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92521dbdc6c17a0bce192b9c0f1fed0d09d94bf4e0fe88633be1801c2a7c04e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klc5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8459610be11ce0478fe59940d0c9034cb80cab890311db1cb81e2408812decb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2
099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klc5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:07:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lmfjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.492602 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8
b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.505475 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.517790 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54bd37ab38c155cbc907218582095ffb8fc9abcfdf2601d66a44d9fd1d2743b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.528584 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4gt4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c6101aab0571a1f01e38ec24c74f99ad040aca13e5c95e91b28d3fc30b11ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4gt4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.544134 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0916312-9225-4366-81e2-f4a34d1ae9fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33ed06e2c3b4daaa783d0c108537142a214040f67de82c772e322b2d94768595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nqntn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:46Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.555224 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.555820 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:46 crc 
kubenswrapper[4970]: I1209 12:07:46.555982 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.556371 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.556697 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:46Z","lastTransitionTime":"2025-12-09T12:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.659448 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.659776 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.659910 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.660026 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.660144 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:46Z","lastTransitionTime":"2025-12-09T12:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.763308 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.763377 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.763395 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.763420 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.763437 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:46Z","lastTransitionTime":"2025-12-09T12:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.812189 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:07:46 crc kubenswrapper[4970]: E1209 12:07:46.812727 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.866850 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.866969 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.866989 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.867015 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.867035 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:46Z","lastTransitionTime":"2025-12-09T12:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.969401 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.969803 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.969820 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.969846 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:46 crc kubenswrapper[4970]: I1209 12:07:46.969864 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:46Z","lastTransitionTime":"2025-12-09T12:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.072859 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.072924 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.072946 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.072976 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.073001 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:47Z","lastTransitionTime":"2025-12-09T12:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.177470 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.177535 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.177556 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.177584 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.177606 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:47Z","lastTransitionTime":"2025-12-09T12:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.271764 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sxdvn_515fe67d-b7b7-4edb-a2b0-6f8794e8d802/ovnkube-controller/3.log" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.277651 4970 scope.go:117] "RemoveContainer" containerID="444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b" Dec 09 12:07:47 crc kubenswrapper[4970]: E1209 12:07:47.278105 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-sxdvn_openshift-ovn-kubernetes(515fe67d-b7b7-4edb-a2b0-6f8794e8d802)\"" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.280291 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.280339 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.280359 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.280387 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.280409 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:47Z","lastTransitionTime":"2025-12-09T12:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.302308 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fcdfd55-d327-447e-8f4c-7a84f6ddf65d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe038bd99b053264014f6bdcf6e552df19aeaf0eefb1cad95087668edc14e197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968ef513b108b5e6469f2b11033f3c2af6276809b44f88da6dd279ef5c917566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83e4a0f5cdc0dbab781645e3af71029f89d6e26f881957510b76c4023d49d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0e892d8f1c25b644b4ec6779f1c1fbf9605d352003feb593ddf48f0b2fe7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b06c1523a6e3fc383e18aabc13379866e891da9e5f59ccd8098a530e206c780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.319391 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.338367 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sgdqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81da4c74-d93e-4a7a-848a-c3539268368b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2ab60fae86ad3f0324f90bd9aec3bd2d65698da0aad755faa0db18178f08bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:07:34Z\\\",\\\"message\\\":\\\"2025-12-09T12:06:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b90a4c06-874d-455b-979f-46683b53967b\\\\n2025-12-09T12:06:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b90a4c06-874d-455b-979f-46683b53967b to /host/opt/cni/bin/\\\\n2025-12-09T12:06:49Z [verbose] multus-daemon started\\\\n2025-12-09T12:06:49Z [verbose] Readiness Indicator file check\\\\n2025-12-09T12:07:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5c5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sgdqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.360073 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a283668d-a884-4d62-95e2-1f0ae672f61c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f99b54b4c68dbff89836aae041da85d469d5df28de2d23df843262af2120f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e1cad556f69de186f04494224fdf6a124eac70716bd1838622a640d8045baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rtdjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.377099 4970 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-cp4b2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e10a28a-08f5-4679-9d90-532322e9e87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9jkk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9jkk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cp4b2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.383641 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.383709 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 
12:07:47.383735 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.383768 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.383804 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:47Z","lastTransitionTime":"2025-12-09T12:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.398067 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.423834 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:07:45Z\\\",\\\"message\\\":\\\"12:07:45.006866 7006 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 12:07:45.007987 7006 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 12:07:45.008053 7006 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1209 12:07:45.008102 7006 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1209 12:07:45.008129 7006 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 12:07:45.008156 7006 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 12:07:45.008167 7006 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 12:07:45.008186 7006 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1209 12:07:45.008199 7006 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 12:07:45.008203 7006 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 12:07:45.008220 7006 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1209 12:07:45.008231 7006 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1209 12:07:45.008290 7006 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1209 12:07:45.008325 7006 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 12:07:45.008336 7006 factory.go:656] Stopping watch factory\\\\nI1209 12:07:45.008360 7006 handler.go:208] Removed *v1.Pod ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:07:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sxdvn_openshift-ovn-kubernetes(515fe67d-b7b7-4edb-a2b0-6f8794e8d802)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sxdvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.437668 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8vkz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4da5831e87291bc84ec0b0a2a081543c263a74b18279efd9cb22677db3da456b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fwqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8vkz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.453458 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13a7444e-3a4e-4f76-bc75-31d7955f81f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd30e293f9fd27aef372f8f4cba8a786a532a57b089862054f0371fff1cb3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d776ac4b03c9fe2c77bb3194ce3e407b542dfa90fd7202cd6f406af4103b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1931f2ab695ac019886e253d6ff1ea32af43da13f3b9946b9087b2386ea05361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440
c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3b52637f1f6d30bd6990b616149e829f5bce7926ee8ef9ae796ec3c52dd67e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3b52637f1f6d30bd6990b616149e829f5bce7926ee8ef9ae796ec3c52dd67e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.471003 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.486141 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.486182 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.486192 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.486208 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.486219 4970 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:47Z","lastTransitionTime":"2025-12-09T12:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.489658 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c99a0733d681a2255ed6650323cdb8e0759b636118f61f08f3d841d34f81ccb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174956f8c9198d5c992709045948df1e5c0f4e6b56ea0d0d37f76c078736ab4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.505791 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.522218 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.538500 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.555912 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54bd37ab38c155cbc907218582095ffb8fc9abcfdf2601d66a44d9fd1d2743b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.566920 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4gt4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c6101aab0571a1f01e38ec24c74f99ad040aca13e5c95e91b28d3fc30b11ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4gt4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.589998 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.590048 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.590058 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.590073 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.590083 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:47Z","lastTransitionTime":"2025-12-09T12:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.590572 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0916312-9225-4366-81e2-f4a34d1ae9fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33ed06e2c3b4daaa783d0c108537142a214040f67de82c772e322b2d94768595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afce63cb
59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nqntn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.607639 4970 
status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lmfjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82e1dc9-1c79-493a-96aa-8f710bb2d6c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92521dbdc6c17a0bce192b9c0f1fed0d09d94bf4e0fe88633be1801c2a7c04e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klc5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8459610be11ce0478fe59940d0c9034cb80cab890311db1cb81e2408812decb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klc5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:07:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lmfjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.692154 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.692201 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.692211 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.692228 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.692241 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:47Z","lastTransitionTime":"2025-12-09T12:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.795607 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.795709 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.795726 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.795751 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.795764 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:47Z","lastTransitionTime":"2025-12-09T12:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.811589 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.811587 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.811809 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:07:47 crc kubenswrapper[4970]: E1209 12:07:47.811909 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:07:47 crc kubenswrapper[4970]: E1209 12:07:47.812110 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f" Dec 09 12:07:47 crc kubenswrapper[4970]: E1209 12:07:47.812177 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.833608 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.851022 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-sgdqg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81da4c74-d93e-4a7a-848a-c3539268368b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e2ab60fae86ad3f0324f90bd9aec3bd2d65698da0aad755faa0db18178f08bee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:07:34Z\\\",\\\"message\\\":\\\"2025-12-09T12:06:49+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_b90a4c06-874d-455b-979f-46683b53967b\\\\n2025-12-09T12:06:49+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_b90a4c06-874d-455b-979f-46683b53967b to /host/opt/cni/bin/\\\\n2025-12-09T12:06:49Z [verbose] multus-daemon started\\\\n2025-12-09T12:06:49Z [verbose] Readiness Indicator file check\\\\n2025-12-09T12:07:34Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:07:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s5c5t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-sgdqg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.867558 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a283668d-a884-4d62-95e2-1f0ae672f61c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e0f99b54b4c68dbff89836aae041da85d469d5df28de2d23df843262af2120f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://44e1cad556f69de186f04494224fdf6a124eac70716bd1838622a640d8045baf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spbcn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rtdjh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.883850 4970 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-cp4b2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e10a28a-08f5-4679-9d90-532322e9e87f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:01Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9jkk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f9jkk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:07:01Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cp4b2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.899059 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.899130 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 
12:07:47.899147 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.899171 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.899189 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:47Z","lastTransitionTime":"2025-12-09T12:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.905586 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fcdfd55-d327-447e-8f4c-7a84f6ddf65d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe038bd99b053264014f6bdcf6e552df19aeaf0eefb1cad95087668edc14e197\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://968ef513b108b5e6469f2b11033f3c2af6276809b44f88da6dd279ef5c917566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f83e4a0f5cdc0dbab781645e3af71029f89d6e26f881957510b76c4023d49d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a0e892d8f1c25b644b4ec6779f1c1fbf9605d352003feb593ddf48f0b2fe7d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b06c1523a6e3fc383e18aabc13379866e891da9e5f59ccd8098a530e206c780\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f21cf8d759294af934c31eff7010a349e72fe25cf82d68cebc92fec2560fe66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44afc38cd50287cb5e191a23830b83179ff5d89cfc223b4b68cb743f95ef13ed\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd95906d4b225eae59a44e3f0d9e5d3810662c7de1ec832be4c252e924a1483a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.929559 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T12:07:45Z\\\",\\\"message\\\":\\\"12:07:45.006866 7006 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 12:07:45.007987 7006 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1209 12:07:45.008053 7006 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1209 12:07:45.008102 7006 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1209 12:07:45.008129 7006 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1209 12:07:45.008156 7006 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 12:07:45.008167 7006 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1209 12:07:45.008186 7006 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1209 12:07:45.008199 7006 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 12:07:45.008203 7006 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 12:07:45.008220 7006 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1209 12:07:45.008231 7006 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1209 12:07:45.008290 7006 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1209 12:07:45.008325 7006 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 12:07:45.008336 7006 factory.go:656] Stopping watch factory\\\\nI1209 12:07:45.008360 7006 handler.go:208] Removed *v1.Pod ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:07:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-sxdvn_openshift-ovn-kubernetes(515fe67d-b7b7-4edb-a2b0-6f8794e8d802)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdl75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-sxdvn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.944327 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8vkz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4bd9c275-1bfb-4080-8279-3ad903c7fd2f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4da5831e87291bc84ec0b0a2a081543c263a74b18279efd9cb22677db3da456b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9fwqp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:50Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8vkz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.964516 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac79de36387088bb770408fa79e512528cdc6a419390962313b33f82ddf35170\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:47 crc kubenswrapper[4970]: I1209 12:07:47.986134 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"860af16f-74fe-4d2e-95fd-dc63a1975528\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1209 12:06:40.519741 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 12:06:40.522097 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2538622982/tls.crt::/tmp/serving-cert-2538622982/tls.key\\\\\\\"\\\\nI1209 12:06:46.810107 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 12:06:46.818040 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 12:06:46.818087 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 12:06:46.818170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 12:06:46.818192 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 12:06:46.832263 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 12:06:46.832299 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832306 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 12:06:46.832315 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 12:06:46.832320 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 12:06:46.832332 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 12:06:46.832338 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1209 12:06:46.832584 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1209 12:06:46.843031 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.002468 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:47Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c99a0733d681a2255ed6650323cdb8e0759b636118f61f08f3d841d34f81ccb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://174956f8c9198d5c992709045948df1e5c0f4e6b56ea0d0d37f76c078736ab4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:47Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.002823 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.002857 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.002874 4970 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.002896 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.002913 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:48Z","lastTransitionTime":"2025-12-09T12:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.017549 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.038284 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"13a7444e-3a4e-4f76-bc75-31d7955f81f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4dd30e293f9fd27aef372f8f4cba8a786a532a57b089862054f0371fff1cb3b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56d776ac4b03c9fe2c77bb3194ce3e407b542dfa90fd7202cd6f406af4103b9d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06
:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1931f2ab695ac019886e253d6ff1ea32af43da13f3b9946b9087b2386ea05361\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3b52637f1f6d30bd6990b616149e829f5bce7926ee8ef9ae796ec3c52dd67e4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d3b52637f1f6d30bd6990b616149e829f5bce7926ee8ef9ae796ec3c52dd67e4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.053025 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.067307 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54bd37ab38c155cbc907218582095ffb8fc9abcfdf2601d66a44d9fd1d2743b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.088474 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-4gt4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6d4b880d-64b3-496f-9a0c-1d2a26b0e33c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01c6101aab0571a1f01e38ec24c74f99ad040aca13e5c95e91b28d3fc30b11ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rsrdf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-4gt4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.106861 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.106915 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.106929 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.106961 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:48 crc 
kubenswrapper[4970]: I1209 12:07:48.106975 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:48Z","lastTransitionTime":"2025-12-09T12:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.106485 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-nqntn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0916312-9225-4366-81e2-f4a34d1ae9fe\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://33ed06e2c3b4daaa783d0c108537142a214040f67de82c772e322b2d94768595\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2826a2b92fd74b2874f7fbb44e1ba08187e2c6ad28ca225dc5a7ef27080cfde2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\
\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afce63cb59cf8d999aa21367ed141d74e9b143d1d475b470f4edef9ab429db60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d75be85fb08de971f963998444e8655380a53ffbbc510cf94ae7d6a452c8650e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://614d67fd76fffd6010a7938aedf2eb3dc22e302ea496ee323c467439b6c3dbef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d056a3d2eee664131a5e4066e92495a063133d1ad275480730c6aa8e5b65d1cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b68885cc1b173cbfc471f24b1e3a21b3849fcb4bd8124c691428ef22629352d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T12:06:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T12:06:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fwns8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:48Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-nqntn\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.121113 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lmfjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82e1dc9-1c79-493a-96aa-8f710bb2d6c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:07:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92521dbdc6c17a0bce192b9c0f1fed0d09d94bf4e0fe88633be1801c2a7c04e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klc5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8459610be11ce0478fe59940d0c9034cb80cab890311db1cb81e2408812decb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:07:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-klc5v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podI
P\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:07:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lmfjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.133326 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f84c045-ff54-41be-b35b-e388a3a6fc99\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T12:06:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://003be2d17eb165e929892363af64e522d110de3c0d16019db4e94908d89c1dce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc77d93f1e39aa0cc812e48444d7de5e72816b09c4643837db3588b772bf22e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e8e52bf68b4eed7e88969291ed4a25797e278c5cf6416171c035b4b1d3f2a2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T12:06:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T12:06:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T12:07:48Z is after 2025-08-24T17:21:41Z" Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.210769 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.210821 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.210836 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.210856 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.210868 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:48Z","lastTransitionTime":"2025-12-09T12:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.314124 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.314169 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.314179 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.314195 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.314209 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:48Z","lastTransitionTime":"2025-12-09T12:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.416775 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.416825 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.416839 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.416856 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.416902 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:48Z","lastTransitionTime":"2025-12-09T12:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.519733 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.519782 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.519793 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.519808 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.519820 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:48Z","lastTransitionTime":"2025-12-09T12:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
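Every status-patch failure above has the same root cause: the serving certificate for the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, months before the node's current clock of 2025-12-09. A minimal sketch of the validity check kubelet's TLS handshake is failing, assuming Python with the third-party cryptography package is available on the node (host, port, and dates come from the log; nothing else does):

# Sketch: inspect the webhook serving certificate without validating it.
# Assumes: run on the node; 'cryptography' package installed.
import socket, ssl
from datetime import datetime, timezone
from cryptography import x509

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE          # inspect, don't validate

with socket.create_connection(("127.0.0.1", 9743), timeout=5) as sock:
    with ctx.wrap_socket(sock) as tls:
        der = tls.getpeercert(binary_form=True)

cert = x509.load_der_x509_certificate(der)
now = datetime.now(timezone.utc)
print("notAfter:", cert.not_valid_after_utc)   # cryptography >= 42; older: not_valid_after
print("expired: ", now > cert.not_valid_after_utc)

On a CRC VM this pattern typically shows up after the machine has been powered off past a certificate rotation window, and it clears once the cluster rotates the affected certificates.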
Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.621959 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.622032 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.622048 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.622076 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.622096 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:48Z","lastTransitionTime":"2025-12-09T12:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.724733 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.724804 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.724826 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.724860 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.724882 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:48Z","lastTransitionTime":"2025-12-09T12:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.811966 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 12:07:48 crc kubenswrapper[4970]: E1209 12:07:48.812644 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
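The NotReady heartbeats and the skipped pod sync share one message: kubelet sees no CNI configuration under /etc/kubernetes/cni/net.d/, because the network provider (OVN-Kubernetes here) has not written one yet. The readiness gate reduces to a directory check; a stdlib-only sketch, with the accepted extensions taken from libcni's defaults (an assumption, not something the log states):

# Sketch: the network-readiness check kubelet keeps failing, reduced to its core.
# The directory path comes straight from the log message above.
from pathlib import Path

CNI_CONF_DIR = Path("/etc/kubernetes/cni/net.d")

def network_ready() -> bool:
    # libcni's default accepted extensions for CNI config files
    if not CNI_CONF_DIR.is_dir():
        return False
    return any(p.suffix in {".conf", ".conflist", ".json"}
               for p in CNI_CONF_DIR.iterdir())

print("NetworkReady:", network_ready())

Until that directory gains a config file, every pod needing a sandbox (the networking-console-plugin, network-check-* and network-metrics-daemon pods below) will keep being skipped with this same error.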
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.827409 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.828722 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.828767 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.828786 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.828813 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.828829 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:48Z","lastTransitionTime":"2025-12-09T12:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.930809 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.930852 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.930862 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.930878 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:48 crc kubenswrapper[4970]: I1209 12:07:48.930889 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:48Z","lastTransitionTime":"2025-12-09T12:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.033296 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.033349 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.033366 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.033388 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.033404 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:49Z","lastTransitionTime":"2025-12-09T12:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.135893 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.135942 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.135954 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.135973 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.135985 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:49Z","lastTransitionTime":"2025-12-09T12:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.240798 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.241074 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.241099 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.241130 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.241146 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:49Z","lastTransitionTime":"2025-12-09T12:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.344566 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.344635 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.344658 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.344690 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.344712 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:49Z","lastTransitionTime":"2025-12-09T12:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.447580 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.447637 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.447652 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.447672 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.447687 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:49Z","lastTransitionTime":"2025-12-09T12:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.550921 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.550975 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.550987 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.551017 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.551030 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:49Z","lastTransitionTime":"2025-12-09T12:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.653590 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.653647 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.653658 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.653670 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.653679 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:49Z","lastTransitionTime":"2025-12-09T12:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.757083 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.757161 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.757183 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.757214 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.757238 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:49Z","lastTransitionTime":"2025-12-09T12:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.812038 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.812044 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.812171 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:07:49 crc kubenswrapper[4970]: E1209 12:07:49.812352 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:07:49 crc kubenswrapper[4970]: E1209 12:07:49.812444 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:07:49 crc kubenswrapper[4970]: E1209 12:07:49.812524 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.859780 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.859859 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.859878 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.859905 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.859922 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:49Z","lastTransitionTime":"2025-12-09T12:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.962749 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.962798 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.962809 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.962828 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:49 crc kubenswrapper[4970]: I1209 12:07:49.962843 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:49Z","lastTransitionTime":"2025-12-09T12:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.066288 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.066402 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.066414 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.066456 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.066471 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:50Z","lastTransitionTime":"2025-12-09T12:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.169973 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.170041 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.170058 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.170086 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.170103 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:50Z","lastTransitionTime":"2025-12-09T12:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.274316 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.274435 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.274454 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.274489 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.274509 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:50Z","lastTransitionTime":"2025-12-09T12:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.378294 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.378357 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.378431 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.378468 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.378494 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:50Z","lastTransitionTime":"2025-12-09T12:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.482335 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.482483 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.482501 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.482542 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.482555 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:50Z","lastTransitionTime":"2025-12-09T12:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.587320 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.587431 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.587444 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.587495 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.587516 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:50Z","lastTransitionTime":"2025-12-09T12:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.690899 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.690948 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.690960 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.690974 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.690983 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:50Z","lastTransitionTime":"2025-12-09T12:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.745914 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:07:50 crc kubenswrapper[4970]: E1209 12:07:50.746102 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:54.746087076 +0000 UTC m=+147.306568117 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.794167 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.794231 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.794295 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.794326 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.794409 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:50Z","lastTransitionTime":"2025-12-09T12:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.812143 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 12:07:50 crc kubenswrapper[4970]: E1209 12:07:50.812320 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
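The UnmountVolume.TearDown failure is a different symptom of the same cold start: the kubevirt.io.hostpath-provisioner CSI driver has not yet re-registered with kubelet, so the lookup by driver name fails and the operation is backed off for 1m4s. Drivers register by placing a socket under kubelet's plugin registry; a sketch of the equivalent check (the registry path is kubelet's conventional default, assumed here rather than read from the log):

# Sketch: approximate the driver lookup that failed above. Until a
# kubevirt.io.hostpath-provisioner registration socket reappears in
# plugins_registry/, TearDownAt cannot obtain a CSI client.
from pathlib import Path

REGISTRY = Path("/var/lib/kubelet/plugins_registry")
DRIVER = "kubevirt.io.hostpath-provisioner"

sockets = [p.name for p in REGISTRY.glob("*.sock")] if REGISTRY.is_dir() else []
print("registered plugin sockets:", sockets)
print("driver present:", any(DRIVER in name for name in sockets))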
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.847038 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.847129 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.847157 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.847187 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:07:50 crc kubenswrapper[4970]: E1209 12:07:50.847157 4970 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 12:07:50 crc kubenswrapper[4970]: E1209 12:07:50.847233 4970 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 12:07:50 crc kubenswrapper[4970]: E1209 12:07:50.847294 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 12:07:50 crc kubenswrapper[4970]: E1209 12:07:50.847312 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 12:07:50 crc kubenswrapper[4970]: E1209 12:07:50.847324 4970 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:07:50 crc kubenswrapper[4970]: E1209 12:07:50.847334 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-09 12:08:54.847316841 +0000 UTC m=+147.407797892 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 12:07:50 crc kubenswrapper[4970]: E1209 12:07:50.847234 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 12:07:50 crc kubenswrapper[4970]: E1209 12:07:50.847360 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 12:07:50 crc kubenswrapper[4970]: E1209 12:07:50.847370 4970 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:07:50 crc kubenswrapper[4970]: E1209 12:07:50.847370 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 12:08:54.847342722 +0000 UTC m=+147.407823813 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 12:07:50 crc kubenswrapper[4970]: E1209 12:07:50.847394 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 12:08:54.847385023 +0000 UTC m=+147.407866074 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 12:07:50 crc kubenswrapper[4970]: E1209 12:07:50.847410 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 12:08:54.847402984 +0000 UTC m=+147.407884035 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.899174 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.899513 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.899529 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.899544 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:50 crc kubenswrapper[4970]: I1209 12:07:50.899557 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:50Z","lastTransitionTime":"2025-12-09T12:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.002651 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.002721 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.002744 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.002776 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.002798 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:51Z","lastTransitionTime":"2025-12-09T12:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
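Note the arithmetic in the nestedpendingoperations lines: failed mount and unmount operations are retried with exponential backoff, and every operation in this stretch has already reached a 1m4s delay, so the next attempts all land at 12:08:54 (m=+147s). A sketch of that schedule, assuming the usual volume-manager parameters of a 500 ms initial delay, factor 2, and a cap of roughly two minutes (under those assumptions 1m4s is the eighth attempt):

# Sketch: reproduce the durationBeforeRetry progression implied by the log.
# Assumed parameters: initial 500 ms, factor 2, cap ~2m2s.
from datetime import timedelta

delay = timedelta(milliseconds=500)
cap = timedelta(minutes=2, seconds=2)
for attempt in range(1, 10):
    print(f"attempt {attempt}: retry after {delay}")
    delay = min(delay * 2, cap)

The "not registered" wording itself is kubelet's object cache saying it is not yet watching those secrets and configmaps for the affected pods; once the pods are re-admitted after the network comes up, the mounts retry and succeed.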
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.105781 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.105849 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.105866 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.105906 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.105923 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:51Z","lastTransitionTime":"2025-12-09T12:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.207551 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.207593 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.207606 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.207624 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.207636 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:51Z","lastTransitionTime":"2025-12-09T12:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.310078 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.310146 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.310164 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.310188 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.310204 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:51Z","lastTransitionTime":"2025-12-09T12:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.412874 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.412919 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.412930 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.412946 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.412958 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:51Z","lastTransitionTime":"2025-12-09T12:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.515807 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.515853 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.515864 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.515881 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.515894 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:51Z","lastTransitionTime":"2025-12-09T12:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.619121 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.619191 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.619204 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.619221 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.619233 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:51Z","lastTransitionTime":"2025-12-09T12:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.721855 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.721913 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.721927 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.721950 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.721965 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:51Z","lastTransitionTime":"2025-12-09T12:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.811918 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.811983 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.812053 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 12:07:51 crc kubenswrapper[4970]: E1209 12:07:51.812167 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 12:07:51 crc kubenswrapper[4970]: E1209 12:07:51.812233 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f"
Dec 09 12:07:51 crc kubenswrapper[4970]: E1209 12:07:51.812799 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.824868 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.824910 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.824922 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.824939 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.824950 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:51Z","lastTransitionTime":"2025-12-09T12:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.927628 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.927665 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.927677 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.927692 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:51 crc kubenswrapper[4970]: I1209 12:07:51.927702 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:51Z","lastTransitionTime":"2025-12-09T12:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.030019 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.030055 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.030067 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.030086 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.030101 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:52Z","lastTransitionTime":"2025-12-09T12:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.132776 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.132834 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.132850 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.132869 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.132883 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:52Z","lastTransitionTime":"2025-12-09T12:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.234885 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.234914 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.234923 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.234937 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.234947 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:52Z","lastTransitionTime":"2025-12-09T12:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.337483 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.337512 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.337519 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.337531 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.337540 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:52Z","lastTransitionTime":"2025-12-09T12:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.376381 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.376448 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.376464 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.376863 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.376916 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T12:07:52Z","lastTransitionTime":"2025-12-09T12:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.425968 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkwth"]
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.426532 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkwth"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.430130 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.430919 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.431435 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.431598 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.460997 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=34.460980176 podStartE2EDuration="34.460980176s" podCreationTimestamp="2025-12-09 12:07:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:52.449469582 +0000 UTC m=+85.009950633" watchObservedRunningTime="2025-12-09 12:07:52.460980176 +0000 UTC m=+85.021461227"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.465887 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/074b0889-eb15-48da-af7f-c0e3b73c39f1-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qkwth\" (UID: \"074b0889-eb15-48da-af7f-c0e3b73c39f1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkwth"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.465993 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/074b0889-eb15-48da-af7f-c0e3b73c39f1-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qkwth\" (UID: \"074b0889-eb15-48da-af7f-c0e3b73c39f1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkwth"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.466043 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/074b0889-eb15-48da-af7f-c0e3b73c39f1-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qkwth\" (UID: \"074b0889-eb15-48da-af7f-c0e3b73c39f1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkwth"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.466156 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/074b0889-eb15-48da-af7f-c0e3b73c39f1-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qkwth\" (UID: \"074b0889-eb15-48da-af7f-c0e3b73c39f1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkwth"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.466217 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/074b0889-eb15-48da-af7f-c0e3b73c39f1-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qkwth\" (UID: \"074b0889-eb15-48da-af7f-c0e3b73c39f1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkwth"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.476976 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=4.476959435 podStartE2EDuration="4.476959435s" podCreationTimestamp="2025-12-09 12:07:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:52.460826351 +0000 UTC m=+85.021307402" watchObservedRunningTime="2025-12-09 12:07:52.476959435 +0000 UTC m=+85.037440486"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.493439 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=66.493416038 podStartE2EDuration="1m6.493416038s" podCreationTimestamp="2025-12-09 12:06:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:52.477457409 +0000 UTC m=+85.037938480" watchObservedRunningTime="2025-12-09 12:07:52.493416038 +0000 UTC m=+85.053897089"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.537424 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lmfjv" podStartSLOduration=65.537403845 podStartE2EDuration="1m5.537403845s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:52.523877434 +0000 UTC m=+85.084358505" watchObservedRunningTime="2025-12-09 12:07:52.537403845 +0000 UTC m=+85.097884896"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.552341 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=66.552323004 podStartE2EDuration="1m6.552323004s" podCreationTimestamp="2025-12-09 12:06:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:52.552032896 +0000 UTC m=+85.112513947" watchObservedRunningTime="2025-12-09 12:07:52.552323004 +0000 UTC m=+85.112804045"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.566642 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/074b0889-eb15-48da-af7f-c0e3b73c39f1-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qkwth\" (UID: \"074b0889-eb15-48da-af7f-c0e3b73c39f1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkwth"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.566681 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/074b0889-eb15-48da-af7f-c0e3b73c39f1-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qkwth\" (UID: \"074b0889-eb15-48da-af7f-c0e3b73c39f1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkwth"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.566716 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/074b0889-eb15-48da-af7f-c0e3b73c39f1-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qkwth\" (UID: \"074b0889-eb15-48da-af7f-c0e3b73c39f1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkwth"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.566738 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/074b0889-eb15-48da-af7f-c0e3b73c39f1-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qkwth\" (UID: \"074b0889-eb15-48da-af7f-c0e3b73c39f1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkwth"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.566759 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/074b0889-eb15-48da-af7f-c0e3b73c39f1-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qkwth\" (UID: \"074b0889-eb15-48da-af7f-c0e3b73c39f1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkwth"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.567465 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/074b0889-eb15-48da-af7f-c0e3b73c39f1-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-qkwth\" (UID: \"074b0889-eb15-48da-af7f-c0e3b73c39f1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkwth"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.567572 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/074b0889-eb15-48da-af7f-c0e3b73c39f1-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-qkwth\" (UID: \"074b0889-eb15-48da-af7f-c0e3b73c39f1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkwth"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.568654 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/074b0889-eb15-48da-af7f-c0e3b73c39f1-service-ca\") pod \"cluster-version-operator-5c965bbfc6-qkwth\" (UID: \"074b0889-eb15-48da-af7f-c0e3b73c39f1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkwth"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.573876 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/074b0889-eb15-48da-af7f-c0e3b73c39f1-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-qkwth\" (UID: \"074b0889-eb15-48da-af7f-c0e3b73c39f1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkwth"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.593119 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-4gt4z" podStartSLOduration=65.593099831 podStartE2EDuration="1m5.593099831s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:52.592906575 +0000 UTC m=+85.153387636" watchObservedRunningTime="2025-12-09 12:07:52.593099831 +0000 UTC m=+85.153580882"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.596908 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/074b0889-eb15-48da-af7f-c0e3b73c39f1-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-qkwth\" (UID: \"074b0889-eb15-48da-af7f-c0e3b73c39f1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkwth"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.646634 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-nqntn" podStartSLOduration=65.646614686 podStartE2EDuration="1m5.646614686s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:52.618465304 +0000 UTC m=+85.178946355" watchObservedRunningTime="2025-12-09 12:07:52.646614686 +0000 UTC m=+85.207095747"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.662115 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=64.662089501 podStartE2EDuration="1m4.662089501s" podCreationTimestamp="2025-12-09 12:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:52.648602861 +0000 UTC m=+85.209083952" watchObservedRunningTime="2025-12-09 12:07:52.662089501 +0000 UTC m=+85.222570592"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.679311 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-sgdqg" podStartSLOduration=65.679289354 podStartE2EDuration="1m5.679289354s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:52.678821891 +0000 UTC m=+85.239302972" watchObservedRunningTime="2025-12-09 12:07:52.679289354 +0000 UTC m=+85.239770435"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.716917 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podStartSLOduration=65.716894622 podStartE2EDuration="1m5.716894622s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:52.704292487 +0000 UTC m=+85.264773548" watchObservedRunningTime="2025-12-09 12:07:52.716894622 +0000 UTC m=+85.277375693"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.745171 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkwth"
Dec 09 12:07:52 crc kubenswrapper[4970]: W1209 12:07:52.759674 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod074b0889_eb15_48da_af7f_c0e3b73c39f1.slice/crio-5a9d1c51c44e8780d620f4b484e7e322822eb0b2b17ec2b203b501d467318673 WatchSource:0}: Error finding container 5a9d1c51c44e8780d620f4b484e7e322822eb0b2b17ec2b203b501d467318673: Status 404 returned error can't find the container with id 5a9d1c51c44e8780d620f4b484e7e322822eb0b2b17ec2b203b501d467318673
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.778049 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-8vkz2" podStartSLOduration=65.778031481 podStartE2EDuration="1m5.778031481s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:52.777708882 +0000 UTC m=+85.338189943" watchObservedRunningTime="2025-12-09 12:07:52.778031481 +0000 UTC m=+85.338512522"
Dec 09 12:07:52 crc kubenswrapper[4970]: I1209 12:07:52.811803 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 12:07:52 crc kubenswrapper[4970]: E1209 12:07:52.811920 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 12:07:53 crc kubenswrapper[4970]: I1209 12:07:53.298920 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkwth" event={"ID":"074b0889-eb15-48da-af7f-c0e3b73c39f1","Type":"ContainerStarted","Data":"697eb23586083271047a9a15339d21cab6bfbcca7f46f372b55e65c8945091c8"}
Dec 09 12:07:53 crc kubenswrapper[4970]: I1209 12:07:53.298968 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkwth" event={"ID":"074b0889-eb15-48da-af7f-c0e3b73c39f1","Type":"ContainerStarted","Data":"5a9d1c51c44e8780d620f4b484e7e322822eb0b2b17ec2b203b501d467318673"}
Dec 09 12:07:53 crc kubenswrapper[4970]: I1209 12:07:53.313062 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-qkwth" podStartSLOduration=66.313043065 podStartE2EDuration="1m6.313043065s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:07:53.31288803 +0000 UTC m=+85.873369101" watchObservedRunningTime="2025-12-09 12:07:53.313043065 +0000 UTC m=+85.873524116"
Dec 09 12:07:53 crc kubenswrapper[4970]: I1209 12:07:53.811967 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 12:07:53 crc kubenswrapper[4970]: I1209 12:07:53.812062 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 12:07:53 crc kubenswrapper[4970]: I1209 12:07:53.811976 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2"
Dec 09 12:07:53 crc kubenswrapper[4970]: E1209 12:07:53.812091 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 12:07:53 crc kubenswrapper[4970]: E1209 12:07:53.812206 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 12:07:53 crc kubenswrapper[4970]: E1209 12:07:53.812372 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f"
Dec 09 12:07:54 crc kubenswrapper[4970]: I1209 12:07:54.812189 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 12:07:54 crc kubenswrapper[4970]: E1209 12:07:54.812643 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 12:07:55 crc kubenswrapper[4970]: I1209 12:07:55.811750 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 12:07:55 crc kubenswrapper[4970]: E1209 12:07:55.811892 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 12:07:55 crc kubenswrapper[4970]: I1209 12:07:55.812144 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 12:07:55 crc kubenswrapper[4970]: I1209 12:07:55.812157 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2"
Dec 09 12:07:55 crc kubenswrapper[4970]: E1209 12:07:55.812431 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f"
Dec 09 12:07:55 crc kubenswrapper[4970]: E1209 12:07:55.812512 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 12:07:56 crc kubenswrapper[4970]: I1209 12:07:56.812373 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 12:07:56 crc kubenswrapper[4970]: E1209 12:07:56.812478 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 12:07:57 crc kubenswrapper[4970]: I1209 12:07:57.811987 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 12:07:57 crc kubenswrapper[4970]: I1209 12:07:57.812086 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 12:07:57 crc kubenswrapper[4970]: I1209 12:07:57.812142 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2"
Dec 09 12:07:57 crc kubenswrapper[4970]: E1209 12:07:57.814033 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 12:07:57 crc kubenswrapper[4970]: E1209 12:07:57.814156 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 12:07:57 crc kubenswrapper[4970]: E1209 12:07:57.814374 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f"
Dec 09 12:07:58 crc kubenswrapper[4970]: I1209 12:07:58.812485 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 12:07:58 crc kubenswrapper[4970]: E1209 12:07:58.812983 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 12:07:59 crc kubenswrapper[4970]: I1209 12:07:59.812477 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 12:07:59 crc kubenswrapper[4970]: I1209 12:07:59.812579 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 12:07:59 crc kubenswrapper[4970]: I1209 12:07:59.812589 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2"
Dec 09 12:07:59 crc kubenswrapper[4970]: E1209 12:07:59.812686 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 12:07:59 crc kubenswrapper[4970]: E1209 12:07:59.812832 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f"
Dec 09 12:07:59 crc kubenswrapper[4970]: E1209 12:07:59.812888 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 12:08:00 crc kubenswrapper[4970]: I1209 12:08:00.812486 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 12:08:00 crc kubenswrapper[4970]: E1209 12:08:00.812853 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 12:08:01 crc kubenswrapper[4970]: I1209 12:08:01.812481 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 12:08:01 crc kubenswrapper[4970]: E1209 12:08:01.812637 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 12:08:01 crc kubenswrapper[4970]: I1209 12:08:01.812715 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2"
Dec 09 12:08:01 crc kubenswrapper[4970]: E1209 12:08:01.813068 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f"
Dec 09 12:08:01 crc kubenswrapper[4970]: I1209 12:08:01.813319 4970 scope.go:117] "RemoveContainer" containerID="444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b"
Dec 09 12:08:01 crc kubenswrapper[4970]: E1209 12:08:01.813453 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-sxdvn_openshift-ovn-kubernetes(515fe67d-b7b7-4edb-a2b0-6f8794e8d802)\"" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802"
Dec 09 12:08:01 crc kubenswrapper[4970]: I1209 12:08:01.813557 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 12:08:01 crc kubenswrapper[4970]: E1209 12:08:01.813787 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 12:08:02 crc kubenswrapper[4970]: I1209 12:08:02.812309 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 12:08:02 crc kubenswrapper[4970]: E1209 12:08:02.812462 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 12:08:03 crc kubenswrapper[4970]: I1209 12:08:03.812378 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 12:08:03 crc kubenswrapper[4970]: I1209 12:08:03.812441 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2"
Dec 09 12:08:03 crc kubenswrapper[4970]: I1209 12:08:03.812472 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 12:08:03 crc kubenswrapper[4970]: E1209 12:08:03.812899 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 12:08:03 crc kubenswrapper[4970]: E1209 12:08:03.813333 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f"
Dec 09 12:08:03 crc kubenswrapper[4970]: E1209 12:08:03.813434 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 12:08:04 crc kubenswrapper[4970]: I1209 12:08:04.812681 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 12:08:04 crc kubenswrapper[4970]: E1209 12:08:04.812909 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 12:08:05 crc kubenswrapper[4970]: I1209 12:08:05.811680 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 12:08:05 crc kubenswrapper[4970]: I1209 12:08:05.811681 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 12:08:05 crc kubenswrapper[4970]: E1209 12:08:05.811806 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 12:08:05 crc kubenswrapper[4970]: E1209 12:08:05.811978 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 12:08:05 crc kubenswrapper[4970]: I1209 12:08:05.812096 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2"
Dec 09 12:08:05 crc kubenswrapper[4970]: E1209 12:08:05.812223 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f"
Dec 09 12:08:05 crc kubenswrapper[4970]: I1209 12:08:05.910695 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e10a28a-08f5-4679-9d90-532322e9e87f-metrics-certs\") pod \"network-metrics-daemon-cp4b2\" (UID: \"5e10a28a-08f5-4679-9d90-532322e9e87f\") " pod="openshift-multus/network-metrics-daemon-cp4b2"
Dec 09 12:08:05 crc kubenswrapper[4970]: E1209 12:08:05.910828 4970 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 09 12:08:05 crc kubenswrapper[4970]: E1209 12:08:05.910963 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e10a28a-08f5-4679-9d90-532322e9e87f-metrics-certs podName:5e10a28a-08f5-4679-9d90-532322e9e87f nodeName:}" failed. No retries permitted until 2025-12-09 12:09:09.910946121 +0000 UTC m=+162.471427182 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5e10a28a-08f5-4679-9d90-532322e9e87f-metrics-certs") pod "network-metrics-daemon-cp4b2" (UID: "5e10a28a-08f5-4679-9d90-532322e9e87f") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 09 12:08:06 crc kubenswrapper[4970]: I1209 12:08:06.811529 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 12:08:06 crc kubenswrapper[4970]: E1209 12:08:06.811654 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 12:08:07 crc kubenswrapper[4970]: I1209 12:08:07.811474 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2"
Dec 09 12:08:07 crc kubenswrapper[4970]: I1209 12:08:07.812415 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 12:08:07 crc kubenswrapper[4970]: I1209 12:08:07.811559 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 12:08:07 crc kubenswrapper[4970]: E1209 12:08:07.813738 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f"
Dec 09 12:08:07 crc kubenswrapper[4970]: E1209 12:08:07.813939 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 12:08:07 crc kubenswrapper[4970]: E1209 12:08:07.814111 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 12:08:08 crc kubenswrapper[4970]: I1209 12:08:08.811752 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 12:08:08 crc kubenswrapper[4970]: E1209 12:08:08.811964 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 12:08:09 crc kubenswrapper[4970]: I1209 12:08:09.811849 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2"
Dec 09 12:08:09 crc kubenswrapper[4970]: I1209 12:08:09.811870 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 12:08:09 crc kubenswrapper[4970]: I1209 12:08:09.812001 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 12:08:09 crc kubenswrapper[4970]: E1209 12:08:09.812209 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f"
Dec 09 12:08:09 crc kubenswrapper[4970]: E1209 12:08:09.812328 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 12:08:09 crc kubenswrapper[4970]: E1209 12:08:09.812402 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 12:08:10 crc kubenswrapper[4970]: I1209 12:08:10.811999 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 12:08:10 crc kubenswrapper[4970]: E1209 12:08:10.812502 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 12:08:11 crc kubenswrapper[4970]: I1209 12:08:11.811906 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 12:08:11 crc kubenswrapper[4970]: E1209 12:08:11.812007 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 12:08:11 crc kubenswrapper[4970]: I1209 12:08:11.812009 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2"
Dec 09 12:08:11 crc kubenswrapper[4970]: I1209 12:08:11.811907 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 12:08:11 crc kubenswrapper[4970]: E1209 12:08:11.812221 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f"
Dec 09 12:08:11 crc kubenswrapper[4970]: E1209 12:08:11.812289 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 12:08:12 crc kubenswrapper[4970]: I1209 12:08:12.811730 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 12:08:12 crc kubenswrapper[4970]: E1209 12:08:12.811899 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 12:08:13 crc kubenswrapper[4970]: I1209 12:08:13.811815 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 12:08:13 crc kubenswrapper[4970]: I1209 12:08:13.812059 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 12:08:13 crc kubenswrapper[4970]: I1209 12:08:13.812212 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2"
Dec 09 12:08:13 crc kubenswrapper[4970]: E1209 12:08:13.812204 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 12:08:13 crc kubenswrapper[4970]: E1209 12:08:13.812724 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f"
Dec 09 12:08:13 crc kubenswrapper[4970]: E1209 12:08:13.812862 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 12:08:13 crc kubenswrapper[4970]: I1209 12:08:13.813396 4970 scope.go:117] "RemoveContainer" containerID="444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b"
Dec 09 12:08:13 crc kubenswrapper[4970]: E1209 12:08:13.813974 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-sxdvn_openshift-ovn-kubernetes(515fe67d-b7b7-4edb-a2b0-6f8794e8d802)\"" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802"
Dec 09 12:08:14 crc kubenswrapper[4970]: I1209 12:08:14.812283 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 12:08:14 crc kubenswrapper[4970]: E1209 12:08:14.812398 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 12:08:15 crc kubenswrapper[4970]: I1209 12:08:15.812593 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 12:08:15 crc kubenswrapper[4970]: I1209 12:08:15.812623 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 12:08:15 crc kubenswrapper[4970]: E1209 12:08:15.812800 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 12:08:15 crc kubenswrapper[4970]: I1209 12:08:15.812618 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2"
Dec 09 12:08:15 crc kubenswrapper[4970]: E1209 12:08:15.812895 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 12:08:15 crc kubenswrapper[4970]: E1209 12:08:15.812966 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f"
Dec 09 12:08:16 crc kubenswrapper[4970]: I1209 12:08:16.812021 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 12:08:16 crc kubenswrapper[4970]: E1209 12:08:16.812953 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 12:08:17 crc kubenswrapper[4970]: I1209 12:08:17.811728 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 12:08:17 crc kubenswrapper[4970]: I1209 12:08:17.812348 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2"
Dec 09 12:08:17 crc kubenswrapper[4970]: E1209 12:08:17.814272 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 12:08:17 crc kubenswrapper[4970]: I1209 12:08:17.814320 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 12:08:17 crc kubenswrapper[4970]: E1209 12:08:17.814911 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 12:08:17 crc kubenswrapper[4970]: E1209 12:08:17.814769 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f"
Dec 09 12:08:18 crc kubenswrapper[4970]: I1209 12:08:18.811656 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 12:08:18 crc kubenswrapper[4970]: E1209 12:08:18.811840 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 12:08:19 crc kubenswrapper[4970]: I1209 12:08:19.811724 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 12:08:19 crc kubenswrapper[4970]: E1209 12:08:19.812217 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 12:08:19 crc kubenswrapper[4970]: I1209 12:08:19.811941 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 12:08:19 crc kubenswrapper[4970]: E1209 12:08:19.812359 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 12:08:19 crc kubenswrapper[4970]: I1209 12:08:19.811809 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2"
Dec 09 12:08:19 crc kubenswrapper[4970]: E1209 12:08:19.812432 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f"
Dec 09 12:08:20 crc kubenswrapper[4970]: I1209 12:08:20.812479 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 12:08:20 crc kubenswrapper[4970]: E1209 12:08:20.812668 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:08:21 crc kubenswrapper[4970]: I1209 12:08:21.393110 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sgdqg_81da4c74-d93e-4a7a-848a-c3539268368b/kube-multus/1.log" Dec 09 12:08:21 crc kubenswrapper[4970]: I1209 12:08:21.393515 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sgdqg_81da4c74-d93e-4a7a-848a-c3539268368b/kube-multus/0.log" Dec 09 12:08:21 crc kubenswrapper[4970]: I1209 12:08:21.393557 4970 generic.go:334] "Generic (PLEG): container finished" podID="81da4c74-d93e-4a7a-848a-c3539268368b" containerID="e2ab60fae86ad3f0324f90bd9aec3bd2d65698da0aad755faa0db18178f08bee" exitCode=1 Dec 09 12:08:21 crc kubenswrapper[4970]: I1209 12:08:21.393584 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sgdqg" event={"ID":"81da4c74-d93e-4a7a-848a-c3539268368b","Type":"ContainerDied","Data":"e2ab60fae86ad3f0324f90bd9aec3bd2d65698da0aad755faa0db18178f08bee"} Dec 09 12:08:21 crc kubenswrapper[4970]: I1209 12:08:21.393622 4970 scope.go:117] "RemoveContainer" containerID="65c007868143da53e25d9b86e4021f26f73f1702e0c1e771fd9f6485a1edda75" Dec 09 12:08:21 crc kubenswrapper[4970]: I1209 12:08:21.394157 4970 scope.go:117] "RemoveContainer" containerID="e2ab60fae86ad3f0324f90bd9aec3bd2d65698da0aad755faa0db18178f08bee" Dec 09 12:08:21 crc kubenswrapper[4970]: E1209 12:08:21.394493 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-sgdqg_openshift-multus(81da4c74-d93e-4a7a-848a-c3539268368b)\"" pod="openshift-multus/multus-sgdqg" podUID="81da4c74-d93e-4a7a-848a-c3539268368b" Dec 09 12:08:21 crc kubenswrapper[4970]: I1209 12:08:21.812325 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:08:21 crc kubenswrapper[4970]: I1209 12:08:21.812382 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:08:21 crc kubenswrapper[4970]: E1209 12:08:21.812444 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f" Dec 09 12:08:21 crc kubenswrapper[4970]: I1209 12:08:21.812315 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:08:21 crc kubenswrapper[4970]: E1209 12:08:21.812636 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:08:21 crc kubenswrapper[4970]: E1209 12:08:21.812709 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:08:22 crc kubenswrapper[4970]: I1209 12:08:22.400553 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sgdqg_81da4c74-d93e-4a7a-848a-c3539268368b/kube-multus/1.log" Dec 09 12:08:22 crc kubenswrapper[4970]: I1209 12:08:22.811855 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:08:22 crc kubenswrapper[4970]: E1209 12:08:22.812016 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:08:23 crc kubenswrapper[4970]: I1209 12:08:23.812369 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:08:23 crc kubenswrapper[4970]: I1209 12:08:23.812420 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:08:23 crc kubenswrapper[4970]: E1209 12:08:23.812526 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:08:23 crc kubenswrapper[4970]: I1209 12:08:23.812385 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:08:23 crc kubenswrapper[4970]: E1209 12:08:23.812691 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f" Dec 09 12:08:23 crc kubenswrapper[4970]: E1209 12:08:23.812784 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:08:24 crc kubenswrapper[4970]: I1209 12:08:24.812340 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:08:24 crc kubenswrapper[4970]: E1209 12:08:24.812721 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:08:25 crc kubenswrapper[4970]: I1209 12:08:25.812156 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:08:25 crc kubenswrapper[4970]: I1209 12:08:25.812725 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:08:25 crc kubenswrapper[4970]: E1209 12:08:25.812842 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:08:25 crc kubenswrapper[4970]: I1209 12:08:25.812882 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:08:25 crc kubenswrapper[4970]: E1209 12:08:25.813286 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:08:25 crc kubenswrapper[4970]: E1209 12:08:25.813924 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f" Dec 09 12:08:26 crc kubenswrapper[4970]: I1209 12:08:26.812439 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:08:26 crc kubenswrapper[4970]: E1209 12:08:26.812559 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:08:27 crc kubenswrapper[4970]: E1209 12:08:27.780857 4970 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 09 12:08:27 crc kubenswrapper[4970]: I1209 12:08:27.811757 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:08:27 crc kubenswrapper[4970]: I1209 12:08:27.811901 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:08:27 crc kubenswrapper[4970]: I1209 12:08:27.812141 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:08:27 crc kubenswrapper[4970]: E1209 12:08:27.815199 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:08:27 crc kubenswrapper[4970]: E1209 12:08:27.815308 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f" Dec 09 12:08:27 crc kubenswrapper[4970]: E1209 12:08:27.815639 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:08:27 crc kubenswrapper[4970]: E1209 12:08:27.914814 4970 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 09 12:08:28 crc kubenswrapper[4970]: I1209 12:08:28.812046 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:08:28 crc kubenswrapper[4970]: I1209 12:08:28.812773 4970 scope.go:117] "RemoveContainer" containerID="444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b" Dec 09 12:08:28 crc kubenswrapper[4970]: E1209 12:08:28.812788 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:08:29 crc kubenswrapper[4970]: I1209 12:08:29.424726 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sxdvn_515fe67d-b7b7-4edb-a2b0-6f8794e8d802/ovnkube-controller/3.log" Dec 09 12:08:29 crc kubenswrapper[4970]: I1209 12:08:29.427225 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" event={"ID":"515fe67d-b7b7-4edb-a2b0-6f8794e8d802","Type":"ContainerStarted","Data":"6aa5d6628357fda1bcdb5cf923e740278d39c3a4e8eed248727e8db456cf0bc6"} Dec 09 12:08:29 crc kubenswrapper[4970]: I1209 12:08:29.428059 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:08:29 crc kubenswrapper[4970]: I1209 12:08:29.481587 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" podStartSLOduration=102.481574264 podStartE2EDuration="1m42.481574264s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:29.479926298 +0000 UTC m=+122.040407349" watchObservedRunningTime="2025-12-09 12:08:29.481574264 +0000 UTC m=+122.042055305" Dec 09 12:08:29 crc kubenswrapper[4970]: I1209 12:08:29.811899 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:08:29 crc kubenswrapper[4970]: I1209 12:08:29.812013 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:08:29 crc kubenswrapper[4970]: I1209 12:08:29.811899 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:08:29 crc kubenswrapper[4970]: E1209 12:08:29.812092 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:08:29 crc kubenswrapper[4970]: E1209 12:08:29.812015 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:08:29 crc kubenswrapper[4970]: E1209 12:08:29.812170 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f" Dec 09 12:08:29 crc kubenswrapper[4970]: I1209 12:08:29.861289 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-cp4b2"] Dec 09 12:08:30 crc kubenswrapper[4970]: I1209 12:08:30.429959 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:08:30 crc kubenswrapper[4970]: E1209 12:08:30.430123 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f" Dec 09 12:08:30 crc kubenswrapper[4970]: I1209 12:08:30.811865 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:08:30 crc kubenswrapper[4970]: E1209 12:08:30.812061 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:08:31 crc kubenswrapper[4970]: I1209 12:08:31.812375 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:08:31 crc kubenswrapper[4970]: I1209 12:08:31.812428 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:08:31 crc kubenswrapper[4970]: E1209 12:08:31.812613 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f" Dec 09 12:08:31 crc kubenswrapper[4970]: I1209 12:08:31.812675 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:08:31 crc kubenswrapper[4970]: I1209 12:08:31.813045 4970 scope.go:117] "RemoveContainer" containerID="e2ab60fae86ad3f0324f90bd9aec3bd2d65698da0aad755faa0db18178f08bee" Dec 09 12:08:31 crc kubenswrapper[4970]: E1209 12:08:31.813172 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:08:31 crc kubenswrapper[4970]: E1209 12:08:31.813033 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:08:32 crc kubenswrapper[4970]: I1209 12:08:32.811615 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:08:32 crc kubenswrapper[4970]: E1209 12:08:32.812480 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:08:32 crc kubenswrapper[4970]: E1209 12:08:32.916436 4970 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 09 12:08:33 crc kubenswrapper[4970]: I1209 12:08:33.443080 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sgdqg_81da4c74-d93e-4a7a-848a-c3539268368b/kube-multus/1.log" Dec 09 12:08:33 crc kubenswrapper[4970]: I1209 12:08:33.812327 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:08:33 crc kubenswrapper[4970]: I1209 12:08:33.812402 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:08:33 crc kubenswrapper[4970]: I1209 12:08:33.812454 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:08:33 crc kubenswrapper[4970]: E1209 12:08:33.812447 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:08:33 crc kubenswrapper[4970]: E1209 12:08:33.812536 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f" Dec 09 12:08:33 crc kubenswrapper[4970]: E1209 12:08:33.812703 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:08:34 crc kubenswrapper[4970]: I1209 12:08:34.448489 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sgdqg_81da4c74-d93e-4a7a-848a-c3539268368b/kube-multus/1.log" Dec 09 12:08:34 crc kubenswrapper[4970]: I1209 12:08:34.448539 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sgdqg" event={"ID":"81da4c74-d93e-4a7a-848a-c3539268368b","Type":"ContainerStarted","Data":"102b9e9e6ec75dba5b2b5ece21c19f866f26af414d0c0543cb6313adbab44221"} Dec 09 12:08:34 crc kubenswrapper[4970]: I1209 12:08:34.811564 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:08:34 crc kubenswrapper[4970]: E1209 12:08:34.811767 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:08:35 crc kubenswrapper[4970]: I1209 12:08:35.811894 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:08:35 crc kubenswrapper[4970]: I1209 12:08:35.811917 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:08:35 crc kubenswrapper[4970]: I1209 12:08:35.811917 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:08:35 crc kubenswrapper[4970]: E1209 12:08:35.812311 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:08:35 crc kubenswrapper[4970]: E1209 12:08:35.812048 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f" Dec 09 12:08:35 crc kubenswrapper[4970]: E1209 12:08:35.812422 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:08:36 crc kubenswrapper[4970]: I1209 12:08:36.812395 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:08:36 crc kubenswrapper[4970]: E1209 12:08:36.812540 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 12:08:37 crc kubenswrapper[4970]: I1209 12:08:37.812120 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:08:37 crc kubenswrapper[4970]: I1209 12:08:37.812222 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:08:37 crc kubenswrapper[4970]: I1209 12:08:37.814241 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:08:37 crc kubenswrapper[4970]: E1209 12:08:37.814311 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 12:08:37 crc kubenswrapper[4970]: E1209 12:08:37.814386 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 12:08:37 crc kubenswrapper[4970]: E1209 12:08:37.814437 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cp4b2" podUID="5e10a28a-08f5-4679-9d90-532322e9e87f" Dec 09 12:08:38 crc kubenswrapper[4970]: I1209 12:08:38.811876 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 12:08:38 crc kubenswrapper[4970]: I1209 12:08:38.814990 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 09 12:08:38 crc kubenswrapper[4970]: I1209 12:08:38.815427 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 09 12:08:39 crc kubenswrapper[4970]: I1209 12:08:39.811622 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:08:39 crc kubenswrapper[4970]: I1209 12:08:39.811648 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 12:08:39 crc kubenswrapper[4970]: I1209 12:08:39.811853 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:08:39 crc kubenswrapper[4970]: I1209 12:08:39.813877 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 09 12:08:39 crc kubenswrapper[4970]: I1209 12:08:39.814288 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 09 12:08:39 crc kubenswrapper[4970]: I1209 12:08:39.815357 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 09 12:08:39 crc kubenswrapper[4970]: I1209 12:08:39.816272 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.390369 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.454195 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-g2x6q"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.455894 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.457321 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4bx5d"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.464712 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4bx5d" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.465617 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hhcfv"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.466107 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hhcfv" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.466870 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgssk"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.467526 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgssk" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.469234 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9vrft"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.469754 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.470057 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8txkd"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.470814 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.470915 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.471148 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9vrft" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.471148 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.471155 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.471201 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.471399 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.475644 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.475704 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.475737 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.475822 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.475924 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.475942 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.475993 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.476050 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.476085 4970 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.476144 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.476151 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s58cx"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.476159 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.476232 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.476642 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s58cx" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.477227 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-74sms"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.477758 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.478070 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.478094 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.478200 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.478237 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.478340 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.478397 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.478513 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.478700 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.479032 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-8qz75"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.479307 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.479827 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-74sms" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.479348 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.479387 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.479427 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.479645 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.484316 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-tgr6h"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.484625 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9vrft"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.484642 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4bx5d"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.484713 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-dfqm8"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.484787 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8qz75" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.484882 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tgr6h" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.485054 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hhcfv"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.485070 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgssk"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.485113 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-dfqm8" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.492593 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.492651 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.492594 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.492940 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.494517 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-frbcj"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.494839 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.495021 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-frbcj" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.496458 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.500173 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.501019 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.501295 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.501487 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.501766 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.503450 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lv8jl"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.503705 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.503790 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.503718 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.503998 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.504139 4970 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"authentication-operator-config" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.504302 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.504469 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.504557 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.504480 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.504708 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.504937 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.505074 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.506458 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.507553 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.507732 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.507902 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.507988 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.508086 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.509239 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.509710 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-g2x6q"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.509915 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.521282 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.521373 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.521620 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.521779 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.521924 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.521948 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.521967 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.522218 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.524271 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.524783 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.525874 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.540611 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.547435 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.551208 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s58cx"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.559201 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.559550 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.559643 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.559859 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.562523 4970 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r6fwb"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.563238 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r6fwb" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.563499 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.563752 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.563960 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.564300 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wzw99"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.564975 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wzw99" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.574747 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qnl9t"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.575423 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jl2n2"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.575653 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5b99"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.575910 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-qnl9t" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.575941 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5b99" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.575986 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jl2n2" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.576538 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-zwlc4"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.576969 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-zwlc4" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.587319 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.589563 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fctsp"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.591926 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.593598 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.593772 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.594536 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-84hbp"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.594988 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-84hbp" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.595228 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-fctsp" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.595745 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.595770 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2d6zv"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.595939 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.596149 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2d6zv" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.596779 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.598271 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.598388 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.598553 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.611569 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.612041 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.612824 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.613109 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.614023 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9w89v"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.614505 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9w89v" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.616169 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.616390 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-wqfw9"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.630311 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-87shv"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.630385 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.632358 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-87shv" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.632756 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-wqfw9" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.650764 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.650772 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca-image-import-ca\") pod \"apiserver-76f77b778f-g2x6q\" (UID: \"2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca\") " pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.650921 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8wbb\" (UniqueName: \"kubernetes.io/projected/95531044-b9d4-4231-ac7c-be2850f2cbfd-kube-api-access-m8wbb\") pod \"route-controller-manager-6576b87f9c-qgssk\" (UID: \"95531044-b9d4-4231-ac7c-be2850f2cbfd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgssk" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.650947 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9vrft\" (UID: \"08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9vrft" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.650975 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca-config\") pod \"apiserver-76f77b778f-g2x6q\" (UID: \"2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca\") " pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.651005 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.651028 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99jm7\" (UniqueName: \"kubernetes.io/projected/1dced0a5-18f4-4cf6-b497-1d8dad926744-kube-api-access-99jm7\") pod \"controller-manager-879f6c89f-4bx5d\" (UID: \"1dced0a5-18f4-4cf6-b497-1d8dad926744\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4bx5d" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.651135 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6049a17c-c44f-4d71-a42b-7be7a525fa90-config\") pod \"authentication-operator-69f744f599-74sms\" (UID: \"6049a17c-c44f-4d71-a42b-7be7a525fa90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-74sms" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.651180 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm9j2\" (UniqueName: 
\"kubernetes.io/projected/6049a17c-c44f-4d71-a42b-7be7a525fa90-kube-api-access-xm9j2\") pod \"authentication-operator-69f744f599-74sms\" (UID: \"6049a17c-c44f-4d71-a42b-7be7a525fa90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-74sms" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.651213 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5397d37-40af-40bf-9bd4-b8a267c3c7b1-config\") pod \"openshift-apiserver-operator-796bbdcf4f-s58cx\" (UID: \"c5397d37-40af-40bf-9bd4-b8a267c3c7b1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s58cx" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.651232 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2cc0b59-1828-4d02-b898-a260712479fa-serving-cert\") pod \"console-operator-58897d9998-dfqm8\" (UID: \"d2cc0b59-1828-4d02-b898-a260712479fa\") " pod="openshift-console-operator/console-operator-58897d9998-dfqm8" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.651266 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6-serving-cert\") pod \"apiserver-7bbb656c7d-9vrft\" (UID: \"08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9vrft" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.651303 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca-audit-dir\") pod \"apiserver-76f77b778f-g2x6q\" (UID: \"2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca\") " pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.651317 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6-etcd-client\") pod \"apiserver-7bbb656c7d-9vrft\" (UID: \"08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9vrft" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.651338 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gznwz\" (UniqueName: \"kubernetes.io/projected/d2cc0b59-1828-4d02-b898-a260712479fa-kube-api-access-gznwz\") pod \"console-operator-58897d9998-dfqm8\" (UID: \"d2cc0b59-1828-4d02-b898-a260712479fa\") " pod="openshift-console-operator/console-operator-58897d9998-dfqm8" Dec 09 12:08:43 crc kubenswrapper[4970]: E1209 12:08:43.651364 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:44.151351247 +0000 UTC m=+136.711832368 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.651388 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95531044-b9d4-4231-ac7c-be2850f2cbfd-config\") pod \"route-controller-manager-6576b87f9c-qgssk\" (UID: \"95531044-b9d4-4231-ac7c-be2850f2cbfd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgssk" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.651413 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/051b0190-5f8c-42e4-af6c-8a5dd401ef52-registry-tls\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.651435 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.651458 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/aa8c9620-293f-4658-9c69-0b843396613b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-8qz75\" (UID: \"aa8c9620-293f-4658-9c69-0b843396613b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8qz75" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.651477 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skr27\" (UniqueName: \"kubernetes.io/projected/aa8c9620-293f-4658-9c69-0b843396613b-kube-api-access-skr27\") pod \"openshift-config-operator-7777fb866f-8qz75\" (UID: \"aa8c9620-293f-4658-9c69-0b843396613b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8qz75" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.651498 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca-serving-cert\") pod \"apiserver-76f77b778f-g2x6q\" (UID: \"2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca\") " pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.651517 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95531044-b9d4-4231-ac7c-be2850f2cbfd-client-ca\") pod \"route-controller-manager-6576b87f9c-qgssk\" (UID: \"95531044-b9d4-4231-ac7c-be2850f2cbfd\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgssk" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.651542 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzbms\" (UniqueName: \"kubernetes.io/projected/a96798ca-3ac0-4957-887c-763502663335-kube-api-access-tzbms\") pod \"downloads-7954f5f757-frbcj\" (UID: \"a96798ca-3ac0-4957-887c-763502663335\") " pod="openshift-console/downloads-7954f5f757-frbcj" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.651564 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/051b0190-5f8c-42e4-af6c-8a5dd401ef52-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.651616 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/051b0190-5f8c-42e4-af6c-8a5dd401ef52-ca-trust-extracted\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.651640 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dced0a5-18f4-4cf6-b497-1d8dad926744-config\") pod \"controller-manager-879f6c89f-4bx5d\" (UID: \"1dced0a5-18f4-4cf6-b497-1d8dad926744\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4bx5d" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.651659 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.651716 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.651736 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-gdjbp"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.651753 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95531044-b9d4-4231-ac7c-be2850f2cbfd-serving-cert\") pod \"route-controller-manager-6576b87f9c-qgssk\" (UID: \"95531044-b9d4-4231-ac7c-be2850f2cbfd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgssk" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.651777 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/932056a2-9503-407e-9502-6c90b5ebe586-machine-approver-tls\") pod \"machine-approver-56656f9798-tgr6h\" (UID: \"932056a2-9503-407e-9502-6c90b5ebe586\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tgr6h" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.651809 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2cc0b59-1828-4d02-b898-a260712479fa-config\") pod \"console-operator-58897d9998-dfqm8\" (UID: \"d2cc0b59-1828-4d02-b898-a260712479fa\") " pod="openshift-console-operator/console-operator-58897d9998-dfqm8" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.651832 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6-audit-policies\") pod \"apiserver-7bbb656c7d-9vrft\" (UID: \"08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9vrft" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.651870 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/71161079-7313-4f68-b716-a4650e0af898-audit-policies\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.651894 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca-audit\") pod \"apiserver-76f77b778f-g2x6q\" (UID: \"2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca\") " pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.651923 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lgj6\" (UniqueName: \"kubernetes.io/projected/08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6-kube-api-access-9lgj6\") pod \"apiserver-7bbb656c7d-9vrft\" (UID: \"08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9vrft" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.651948 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/932056a2-9503-407e-9502-6c90b5ebe586-config\") pod \"machine-approver-56656f9798-tgr6h\" (UID: \"932056a2-9503-407e-9502-6c90b5ebe586\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tgr6h" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.651980 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca-trusted-ca-bundle\") pod \"apiserver-76f77b778f-g2x6q\" (UID: \"2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca\") " pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.652005 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-trusted-ca-bundle\") 
pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.652040 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca-etcd-client\") pod \"apiserver-76f77b778f-g2x6q\" (UID: \"2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca\") " pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.652066 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a51c80aa-c8f8-4ddf-89e7-fae8e5a7ecc8-config\") pod \"machine-api-operator-5694c8668f-hhcfv\" (UID: \"a51c80aa-c8f8-4ddf-89e7-fae8e5a7ecc8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hhcfv" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.652085 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71161079-7313-4f68-b716-a4650e0af898-audit-dir\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.652120 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/051b0190-5f8c-42e4-af6c-8a5dd401ef52-trusted-ca\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.652146 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.652184 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca-etcd-serving-ca\") pod \"apiserver-76f77b778f-g2x6q\" (UID: \"2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca\") " pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.652209 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmmx6\" (UniqueName: \"kubernetes.io/projected/2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca-kube-api-access-fmmx6\") pod \"apiserver-76f77b778f-g2x6q\" (UID: \"2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca\") " pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.652229 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcrv2\" (UniqueName: \"kubernetes.io/projected/a51c80aa-c8f8-4ddf-89e7-fae8e5a7ecc8-kube-api-access-jcrv2\") pod \"machine-api-operator-5694c8668f-hhcfv\" (UID: \"a51c80aa-c8f8-4ddf-89e7-fae8e5a7ecc8\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-hhcfv" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.652274 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6049a17c-c44f-4d71-a42b-7be7a525fa90-service-ca-bundle\") pod \"authentication-operator-69f744f599-74sms\" (UID: \"6049a17c-c44f-4d71-a42b-7be7a525fa90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-74sms" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.652327 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6-encryption-config\") pod \"apiserver-7bbb656c7d-9vrft\" (UID: \"08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9vrft" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.652371 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6049a17c-c44f-4d71-a42b-7be7a525fa90-serving-cert\") pod \"authentication-operator-69f744f599-74sms\" (UID: \"6049a17c-c44f-4d71-a42b-7be7a525fa90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-74sms" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.652395 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-l27ln"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.652410 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca-node-pullsecrets\") pod \"apiserver-76f77b778f-g2x6q\" (UID: \"2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca\") " pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.652436 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a51c80aa-c8f8-4ddf-89e7-fae8e5a7ecc8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hhcfv\" (UID: \"a51c80aa-c8f8-4ddf-89e7-fae8e5a7ecc8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hhcfv" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.652469 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.652493 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/932056a2-9503-407e-9502-6c90b5ebe586-auth-proxy-config\") pod \"machine-approver-56656f9798-tgr6h\" (UID: \"932056a2-9503-407e-9502-6c90b5ebe586\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tgr6h" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.652522 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" 
(UniqueName: \"kubernetes.io/configmap/a51c80aa-c8f8-4ddf-89e7-fae8e5a7ecc8-images\") pod \"machine-api-operator-5694c8668f-hhcfv\" (UID: \"a51c80aa-c8f8-4ddf-89e7-fae8e5a7ecc8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hhcfv" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.652547 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d2cc0b59-1828-4d02-b898-a260712479fa-trusted-ca\") pod \"console-operator-58897d9998-dfqm8\" (UID: \"d2cc0b59-1828-4d02-b898-a260712479fa\") " pod="openshift-console-operator/console-operator-58897d9998-dfqm8" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.652574 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca-encryption-config\") pod \"apiserver-76f77b778f-g2x6q\" (UID: \"2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca\") " pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.652599 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6-audit-dir\") pod \"apiserver-7bbb656c7d-9vrft\" (UID: \"08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9vrft" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.652633 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1dced0a5-18f4-4cf6-b497-1d8dad926744-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-4bx5d\" (UID: \"1dced0a5-18f4-4cf6-b497-1d8dad926744\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4bx5d" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.652665 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/051b0190-5f8c-42e4-af6c-8a5dd401ef52-registry-certificates\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.652689 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.652711 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.652742 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68hvb\" (UniqueName: 
\"kubernetes.io/projected/051b0190-5f8c-42e4-af6c-8a5dd401ef52-kube-api-access-68hvb\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.652764 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dced0a5-18f4-4cf6-b497-1d8dad926744-serving-cert\") pod \"controller-manager-879f6c89f-4bx5d\" (UID: \"1dced0a5-18f4-4cf6-b497-1d8dad926744\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4bx5d" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.652804 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5397d37-40af-40bf-9bd4-b8a267c3c7b1-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-s58cx\" (UID: \"c5397d37-40af-40bf-9bd4-b8a267c3c7b1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s58cx" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.652829 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lmf2\" (UniqueName: \"kubernetes.io/projected/71161079-7313-4f68-b716-a4650e0af898-kube-api-access-5lmf2\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.652914 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.652944 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.652969 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.653012 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa8c9620-293f-4658-9c69-0b843396613b-serving-cert\") pod \"openshift-config-operator-7777fb866f-8qz75\" (UID: \"aa8c9620-293f-4658-9c69-0b843396613b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8qz75" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.653033 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l27ln" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.653040 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9vrft\" (UID: \"08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9vrft" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.653065 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gx84\" (UniqueName: \"kubernetes.io/projected/c5397d37-40af-40bf-9bd4-b8a267c3c7b1-kube-api-access-2gx84\") pod \"openshift-apiserver-operator-796bbdcf4f-s58cx\" (UID: \"c5397d37-40af-40bf-9bd4-b8a267c3c7b1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s58cx" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.653091 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qmvms"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.653098 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6049a17c-c44f-4d71-a42b-7be7a525fa90-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-74sms\" (UID: \"6049a17c-c44f-4d71-a42b-7be7a525fa90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-74sms" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.653150 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/051b0190-5f8c-42e4-af6c-8a5dd401ef52-bound-sa-token\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.653164 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gdjbp" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.653176 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1dced0a5-18f4-4cf6-b497-1d8dad926744-client-ca\") pod \"controller-manager-879f6c89f-4bx5d\" (UID: \"1dced0a5-18f4-4cf6-b497-1d8dad926744\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4bx5d" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.653200 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ds95d\" (UniqueName: \"kubernetes.io/projected/932056a2-9503-407e-9502-6c90b5ebe586-kube-api-access-ds95d\") pod \"machine-approver-56656f9798-tgr6h\" (UID: \"932056a2-9503-407e-9502-6c90b5ebe586\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tgr6h" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.653522 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qmvms" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.655301 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.655807 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.657712 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.657841 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.658591 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.664155 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.664308 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-62724"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.664492 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.664833 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-62724" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.665339 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-n9dr5"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.666193 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-n9dr5" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.672222 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-cjsvb"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.672806 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-cjsvb" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.672881 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-qmwx4"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.674907 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421360-vvgqb"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.675319 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-vvgqb" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.675583 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8lbhv"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.675642 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qmwx4" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.676461 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8lbhv" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.679406 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-25m9v"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.679892 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jjr45"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.681512 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.683781 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-25m9v" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.683943 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kxbq2"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.684459 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jjr45" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.684615 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6g6xm"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.684943 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xgfp5"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.685330 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-8qz75"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.685349 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-74sms"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.685390 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xgfp5" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.685435 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6g6xm" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.685561 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kxbq2" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.689407 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jl2n2"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.692670 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-dfqm8"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.695667 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8txkd"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.698440 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.700072 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wzw99"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.702323 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lv8jl"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.702385 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421360-vvgqb"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.702402 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r6fwb"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.709567 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-87shv"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.711436 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-frbcj"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.712009 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-25m9v"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.712978 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-l27ln"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.717151 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qnl9t"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.719363 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5b99"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.719598 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.721650 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fctsp"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.722053 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-gdjbp"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.734828 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-84hbp"] Dec 09 12:08:43 crc 
kubenswrapper[4970]: I1209 12:08:43.739415 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.739535 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qmvms"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.740440 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-xnzx7"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.741433 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-xnzx7" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.743021 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6g6xm"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.745049 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-n9dr5"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.746223 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-cjsvb"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.748868 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9w89v"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.750167 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jjr45"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.751187 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kxbq2"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.752233 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xgfp5"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.753463 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2d6zv"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.754898 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-qmwx4"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.755771 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-wqfw9"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.757097 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8lbhv"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.757608 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:43 crc kubenswrapper[4970]: E1209 12:08:43.757707 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-09 12:08:44.257686327 +0000 UTC m=+136.818167378 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.757815 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9vrft\" (UID: \"08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9vrft" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.757851 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/051b0190-5f8c-42e4-af6c-8a5dd401ef52-bound-sa-token\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.757875 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1dced0a5-18f4-4cf6-b497-1d8dad926744-client-ca\") pod \"controller-manager-879f6c89f-4bx5d\" (UID: \"1dced0a5-18f4-4cf6-b497-1d8dad926744\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4bx5d" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.757898 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca-image-import-ca\") pod \"apiserver-76f77b778f-g2x6q\" (UID: \"2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca\") " pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.757923 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwxrq\" (UniqueName: \"kubernetes.io/projected/f7b83f2f-f485-4b2f-9fc8-87581c835972-kube-api-access-jwxrq\") pod \"cluster-samples-operator-665b6dd947-r6fwb\" (UID: \"f7b83f2f-f485-4b2f-9fc8-87581c835972\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r6fwb" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.757953 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.757974 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99jm7\" (UniqueName: \"kubernetes.io/projected/1dced0a5-18f4-4cf6-b497-1d8dad926744-kube-api-access-99jm7\") pod \"controller-manager-879f6c89f-4bx5d\" (UID: \"1dced0a5-18f4-4cf6-b497-1d8dad926744\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4bx5d" Dec 09 
Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.758017 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca-config\") pod \"apiserver-76f77b778f-g2x6q\" (UID: \"2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca\") " pod="openshift-apiserver/apiserver-76f77b778f-g2x6q"
Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.758045 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5156ea95-86b1-44f3-979a-87fd807945c7-config\") pod \"kube-controller-manager-operator-78b949d7b-jl2n2\" (UID: \"5156ea95-86b1-44f3-979a-87fd807945c7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jl2n2"
Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.758077 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2becf945-4a01-49fd-a2c5-788632898a32-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xgfp5\" (UID: \"2becf945-4a01-49fd-a2c5-788632898a32\") " pod="openshift-marketplace/marketplace-operator-79b997595-xgfp5"
Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.758107 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg864\" (UniqueName: \"kubernetes.io/projected/efd00913-5fd9-4268-b753-a449b7d25a16-kube-api-access-xg864\") pod \"etcd-operator-b45778765-n9dr5\" (UID: \"efd00913-5fd9-4268-b753-a449b7d25a16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n9dr5"
Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.758153 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tpzp\" (UniqueName: \"kubernetes.io/projected/5503d403-7960-40ee-86be-586e5c03a682-kube-api-access-7tpzp\") pod \"machine-config-operator-74547568cd-l27ln\" (UID: \"5503d403-7960-40ee-86be-586e5c03a682\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l27ln"
Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.758188 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6049a17c-c44f-4d71-a42b-7be7a525fa90-config\") pod \"authentication-operator-69f744f599-74sms\" (UID: \"6049a17c-c44f-4d71-a42b-7be7a525fa90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-74sms"
Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.758211 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5397d37-40af-40bf-9bd4-b8a267c3c7b1-config\") pod \"openshift-apiserver-operator-796bbdcf4f-s58cx\" (UID: \"c5397d37-40af-40bf-9bd4-b8a267c3c7b1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s58cx"
Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.758292 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/30c5a311-d7ec-49f9-9de2-535aed0af228-metrics-tls\") pod \"ingress-operator-5b745b69d9-gdjbp\" (UID: \"30c5a311-d7ec-49f9-9de2-535aed0af228\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gdjbp"
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/30c5a311-d7ec-49f9-9de2-535aed0af228-trusted-ca\") pod \"ingress-operator-5b745b69d9-gdjbp\" (UID: \"30c5a311-d7ec-49f9-9de2-535aed0af228\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gdjbp" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.758369 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6-etcd-client\") pod \"apiserver-7bbb656c7d-9vrft\" (UID: \"08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9vrft" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.758391 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.758399 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b5af52df-627e-4b20-a29e-3f66fbafe5b6-tmpfs\") pod \"packageserver-d55dfcdfc-d5b99\" (UID: \"b5af52df-627e-4b20-a29e-3f66fbafe5b6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5b99" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.758437 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca-audit-dir\") pod \"apiserver-76f77b778f-g2x6q\" (UID: \"2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca\") " pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.758470 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c115cc0e-4491-4b7b-83f3-349433ce3b08-default-certificate\") pod \"router-default-5444994796-zwlc4\" (UID: \"c115cc0e-4491-4b7b-83f3-349433ce3b08\") " pod="openshift-ingress/router-default-5444994796-zwlc4" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.758503 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2537d42d-31de-48b8-ae7a-afaba0d36376-console-oauth-config\") pod \"console-f9d7485db-wqfw9\" (UID: \"2537d42d-31de-48b8-ae7a-afaba0d36376\") " pod="openshift-console/console-f9d7485db-wqfw9" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.758528 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-9gq7t"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.758538 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95531044-b9d4-4231-ac7c-be2850f2cbfd-config\") pod \"route-controller-manager-6576b87f9c-qgssk\" (UID: \"95531044-b9d4-4231-ac7c-be2850f2cbfd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgssk" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.758575 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: 
\"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.758609 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85dwm\" (UniqueName: \"kubernetes.io/projected/cd84ab53-f698-428b-b831-cf6605d01965-kube-api-access-85dwm\") pod \"service-ca-operator-777779d784-87shv\" (UID: \"cd84ab53-f698-428b-b831-cf6605d01965\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-87shv" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.758638 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/aa8c9620-293f-4658-9c69-0b843396613b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-8qz75\" (UID: \"aa8c9620-293f-4658-9c69-0b843396613b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8qz75" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.758663 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skr27\" (UniqueName: \"kubernetes.io/projected/aa8c9620-293f-4658-9c69-0b843396613b-kube-api-access-skr27\") pod \"openshift-config-operator-7777fb866f-8qz75\" (UID: \"aa8c9620-293f-4658-9c69-0b843396613b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8qz75" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.758689 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb8wl\" (UniqueName: \"kubernetes.io/projected/8f239ab2-506b-4621-a2a4-0f0bd4b4b6d7-kube-api-access-zb8wl\") pod \"catalog-operator-68c6474976-jjr45\" (UID: \"8f239ab2-506b-4621-a2a4-0f0bd4b4b6d7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jjr45" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.758712 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzbms\" (UniqueName: \"kubernetes.io/projected/a96798ca-3ac0-4957-887c-763502663335-kube-api-access-tzbms\") pod \"downloads-7954f5f757-frbcj\" (UID: \"a96798ca-3ac0-4957-887c-763502663335\") " pod="openshift-console/downloads-7954f5f757-frbcj" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.758738 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/051b0190-5f8c-42e4-af6c-8a5dd401ef52-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.758759 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca-serving-cert\") pod \"apiserver-76f77b778f-g2x6q\" (UID: \"2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca\") " pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.758793 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5503d403-7960-40ee-86be-586e5c03a682-auth-proxy-config\") pod \"machine-config-operator-74547568cd-l27ln\" (UID: \"5503d403-7960-40ee-86be-586e5c03a682\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l27ln" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.758829 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm25f\" (UniqueName: \"kubernetes.io/projected/8709261f-420d-4fee-908c-a7e1074959cb-kube-api-access-fm25f\") pod \"control-plane-machine-set-operator-78cbb6b69f-8lbhv\" (UID: \"8709261f-420d-4fee-908c-a7e1074959cb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8lbhv" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.758850 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcbeabb7-793d-476c-98f3-e559f7342f1f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-25m9v\" (UID: \"fcbeabb7-793d-476c-98f3-e559f7342f1f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-25m9v" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.758874 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fcbeabb7-793d-476c-98f3-e559f7342f1f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-25m9v\" (UID: \"fcbeabb7-793d-476c-98f3-e559f7342f1f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-25m9v" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.758899 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c115cc0e-4491-4b7b-83f3-349433ce3b08-metrics-certs\") pod \"router-default-5444994796-zwlc4\" (UID: \"c115cc0e-4491-4b7b-83f3-349433ce3b08\") " pod="openshift-ingress/router-default-5444994796-zwlc4" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.758923 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2cc0b59-1828-4d02-b898-a260712479fa-config\") pod \"console-operator-58897d9998-dfqm8\" (UID: \"d2cc0b59-1828-4d02-b898-a260712479fa\") " pod="openshift-console-operator/console-operator-58897d9998-dfqm8" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.758945 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6-audit-policies\") pod \"apiserver-7bbb656c7d-9vrft\" (UID: \"08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9vrft" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.758967 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5156ea95-86b1-44f3-979a-87fd807945c7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-jl2n2\" (UID: \"5156ea95-86b1-44f3-979a-87fd807945c7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jl2n2" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.758988 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/96aedd83-c6c1-4b08-8d47-43cd63aaae68-secret-volume\") pod \"collect-profiles-29421360-vvgqb\" (UID: 
\"96aedd83-c6c1-4b08-8d47-43cd63aaae68\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-vvgqb" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759011 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-9gq7t" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759009 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2becf945-4a01-49fd-a2c5-788632898a32-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xgfp5\" (UID: \"2becf945-4a01-49fd-a2c5-788632898a32\") " pod="openshift-marketplace/marketplace-operator-79b997595-xgfp5" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759192 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5503d403-7960-40ee-86be-586e5c03a682-proxy-tls\") pod \"machine-config-operator-74547568cd-l27ln\" (UID: \"5503d403-7960-40ee-86be-586e5c03a682\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l27ln" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759211 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/85e77654-6af7-4c35-bf92-f7c67d49dd2d-metrics-tls\") pod \"dns-operator-744455d44c-qnl9t\" (UID: \"85e77654-6af7-4c35-bf92-f7c67d49dd2d\") " pod="openshift-dns-operator/dns-operator-744455d44c-qnl9t" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759231 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8709261f-420d-4fee-908c-a7e1074959cb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-8lbhv\" (UID: \"8709261f-420d-4fee-908c-a7e1074959cb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8lbhv" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759269 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/932056a2-9503-407e-9502-6c90b5ebe586-config\") pod \"machine-approver-56656f9798-tgr6h\" (UID: \"932056a2-9503-407e-9502-6c90b5ebe586\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tgr6h" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759288 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8225c927-d82e-41f7-a335-f65c50f6f4ce-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fctsp\" (UID: \"8225c927-d82e-41f7-a335-f65c50f6f4ce\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fctsp" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759306 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lgj6\" (UniqueName: \"kubernetes.io/projected/08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6-kube-api-access-9lgj6\") pod \"apiserver-7bbb656c7d-9vrft\" (UID: \"08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9vrft" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759314 4970 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["hostpath-provisioner/csi-hostpathplugin-rm2zp"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759325 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca-trusted-ca-bundle\") pod \"apiserver-76f77b778f-g2x6q\" (UID: \"2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca\") " pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759345 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71161079-7313-4f68-b716-a4650e0af898-audit-dir\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759360 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b5af52df-627e-4b20-a29e-3f66fbafe5b6-webhook-cert\") pod \"packageserver-d55dfcdfc-d5b99\" (UID: \"b5af52df-627e-4b20-a29e-3f66fbafe5b6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5b99" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759377 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgpx6\" (UniqueName: \"kubernetes.io/projected/94df2ee0-4918-4bb9-b11f-4da797c1376f-kube-api-access-kgpx6\") pod \"kube-storage-version-migrator-operator-b67b599dd-6g6xm\" (UID: \"94df2ee0-4918-4bb9-b11f-4da797c1376f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6g6xm" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759395 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/051b0190-5f8c-42e4-af6c-8a5dd401ef52-trusted-ca\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759412 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca-etcd-serving-ca\") pod \"apiserver-76f77b778f-g2x6q\" (UID: \"2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca\") " pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759429 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcrv2\" (UniqueName: \"kubernetes.io/projected/a51c80aa-c8f8-4ddf-89e7-fae8e5a7ecc8-kube-api-access-jcrv2\") pod \"machine-api-operator-5694c8668f-hhcfv\" (UID: \"a51c80aa-c8f8-4ddf-89e7-fae8e5a7ecc8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hhcfv" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759447 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6-encryption-config\") pod \"apiserver-7bbb656c7d-9vrft\" (UID: \"08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9vrft" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759463 4970 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2537d42d-31de-48b8-ae7a-afaba0d36376-console-serving-cert\") pod \"console-f9d7485db-wqfw9\" (UID: \"2537d42d-31de-48b8-ae7a-afaba0d36376\") " pod="openshift-console/console-f9d7485db-wqfw9" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759478 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5156ea95-86b1-44f3-979a-87fd807945c7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-jl2n2\" (UID: \"5156ea95-86b1-44f3-979a-87fd807945c7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jl2n2" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759494 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b5af52df-627e-4b20-a29e-3f66fbafe5b6-apiservice-cert\") pod \"packageserver-d55dfcdfc-d5b99\" (UID: \"b5af52df-627e-4b20-a29e-3f66fbafe5b6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5b99" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759516 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a51c80aa-c8f8-4ddf-89e7-fae8e5a7ecc8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hhcfv\" (UID: \"a51c80aa-c8f8-4ddf-89e7-fae8e5a7ecc8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hhcfv" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759531 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/de05fec3-6b27-4c6f-9d72-ed1a35954ba1-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qmvms\" (UID: \"de05fec3-6b27-4c6f-9d72-ed1a35954ba1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qmvms" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759550 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca-node-pullsecrets\") pod \"apiserver-76f77b778f-g2x6q\" (UID: \"2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca\") " pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759567 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759586 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2537d42d-31de-48b8-ae7a-afaba0d36376-service-ca\") pod \"console-f9d7485db-wqfw9\" (UID: \"2537d42d-31de-48b8-ae7a-afaba0d36376\") " pod="openshift-console/console-f9d7485db-wqfw9" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759604 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca-encryption-config\") pod \"apiserver-76f77b778f-g2x6q\" (UID: \"2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca\") " pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759623 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1dced0a5-18f4-4cf6-b497-1d8dad926744-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-4bx5d\" (UID: \"1dced0a5-18f4-4cf6-b497-1d8dad926744\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4bx5d" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759640 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759655 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68hvb\" (UniqueName: \"kubernetes.io/projected/051b0190-5f8c-42e4-af6c-8a5dd401ef52-kube-api-access-68hvb\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759676 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5397d37-40af-40bf-9bd4-b8a267c3c7b1-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-s58cx\" (UID: \"c5397d37-40af-40bf-9bd4-b8a267c3c7b1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s58cx" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759698 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lmf2\" (UniqueName: \"kubernetes.io/projected/71161079-7313-4f68-b716-a4650e0af898-kube-api-access-5lmf2\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759727 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759746 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759762 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efd00913-5fd9-4268-b753-a449b7d25a16-serving-cert\") 
pod \"etcd-operator-b45778765-n9dr5\" (UID: \"efd00913-5fd9-4268-b753-a449b7d25a16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n9dr5" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759780 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa8c9620-293f-4658-9c69-0b843396613b-serving-cert\") pod \"openshift-config-operator-7777fb866f-8qz75\" (UID: \"aa8c9620-293f-4658-9c69-0b843396613b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8qz75" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759796 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gx84\" (UniqueName: \"kubernetes.io/projected/c5397d37-40af-40bf-9bd4-b8a267c3c7b1-kube-api-access-2gx84\") pod \"openshift-apiserver-operator-796bbdcf4f-s58cx\" (UID: \"c5397d37-40af-40bf-9bd4-b8a267c3c7b1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s58cx" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759813 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6049a17c-c44f-4d71-a42b-7be7a525fa90-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-74sms\" (UID: \"6049a17c-c44f-4d71-a42b-7be7a525fa90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-74sms" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759831 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8f239ab2-506b-4621-a2a4-0f0bd4b4b6d7-profile-collector-cert\") pod \"catalog-operator-68c6474976-jjr45\" (UID: \"8f239ab2-506b-4621-a2a4-0f0bd4b4b6d7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jjr45" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759848 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f7b83f2f-f485-4b2f-9fc8-87581c835972-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-r6fwb\" (UID: \"f7b83f2f-f485-4b2f-9fc8-87581c835972\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r6fwb" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759864 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-9vrft\" (UID: \"08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9vrft" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759874 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ds95d\" (UniqueName: \"kubernetes.io/projected/932056a2-9503-407e-9502-6c90b5ebe586-kube-api-access-ds95d\") pod \"machine-approver-56656f9798-tgr6h\" (UID: \"932056a2-9503-407e-9502-6c90b5ebe586\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tgr6h" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759890 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/efd00913-5fd9-4268-b753-a449b7d25a16-etcd-service-ca\") pod \"etcd-operator-b45778765-n9dr5\" (UID: 
\"efd00913-5fd9-4268-b753-a449b7d25a16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n9dr5" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759908 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8wbb\" (UniqueName: \"kubernetes.io/projected/95531044-b9d4-4231-ac7c-be2850f2cbfd-kube-api-access-m8wbb\") pod \"route-controller-manager-6576b87f9c-qgssk\" (UID: \"95531044-b9d4-4231-ac7c-be2850f2cbfd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgssk" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759925 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9vrft\" (UID: \"08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9vrft" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759941 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfrjg\" (UniqueName: \"kubernetes.io/projected/8225c927-d82e-41f7-a335-f65c50f6f4ce-kube-api-access-sfrjg\") pod \"multus-admission-controller-857f4d67dd-fctsp\" (UID: \"8225c927-d82e-41f7-a335-f65c50f6f4ce\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fctsp" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759960 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm9j2\" (UniqueName: \"kubernetes.io/projected/6049a17c-c44f-4d71-a42b-7be7a525fa90-kube-api-access-xm9j2\") pod \"authentication-operator-69f744f599-74sms\" (UID: \"6049a17c-c44f-4d71-a42b-7be7a525fa90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-74sms" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.759981 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2537d42d-31de-48b8-ae7a-afaba0d36376-oauth-serving-cert\") pod \"console-f9d7485db-wqfw9\" (UID: \"2537d42d-31de-48b8-ae7a-afaba0d36376\") " pod="openshift-console/console-f9d7485db-wqfw9" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760011 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9a65f95f-2f95-4750-a78a-08354df01f6d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-84hbp\" (UID: \"9a65f95f-2f95-4750-a78a-08354df01f6d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-84hbp" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760038 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt5kb\" (UniqueName: \"kubernetes.io/projected/2537d42d-31de-48b8-ae7a-afaba0d36376-kube-api-access-dt5kb\") pod \"console-f9d7485db-wqfw9\" (UID: \"2537d42d-31de-48b8-ae7a-afaba0d36376\") " pod="openshift-console/console-f9d7485db-wqfw9" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760058 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6-serving-cert\") pod \"apiserver-7bbb656c7d-9vrft\" (UID: \"08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9vrft" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760075 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2cc0b59-1828-4d02-b898-a260712479fa-serving-cert\") pod \"console-operator-58897d9998-dfqm8\" (UID: \"d2cc0b59-1828-4d02-b898-a260712479fa\") " pod="openshift-console-operator/console-operator-58897d9998-dfqm8" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760091 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2537d42d-31de-48b8-ae7a-afaba0d36376-trusted-ca-bundle\") pod \"console-f9d7485db-wqfw9\" (UID: \"2537d42d-31de-48b8-ae7a-afaba0d36376\") " pod="openshift-console/console-f9d7485db-wqfw9" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760109 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgjlq\" (UniqueName: \"kubernetes.io/projected/0f48b902-332e-4fa2-9724-a947a6c8ad3a-kube-api-access-pgjlq\") pod \"openshift-controller-manager-operator-756b6f6bc6-9w89v\" (UID: \"0f48b902-332e-4fa2-9724-a947a6c8ad3a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9w89v" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760125 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42q7s\" (UniqueName: \"kubernetes.io/projected/de05fec3-6b27-4c6f-9d72-ed1a35954ba1-kube-api-access-42q7s\") pod \"package-server-manager-789f6589d5-qmvms\" (UID: \"de05fec3-6b27-4c6f-9d72-ed1a35954ba1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qmvms" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760142 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gznwz\" (UniqueName: \"kubernetes.io/projected/d2cc0b59-1828-4d02-b898-a260712479fa-kube-api-access-gznwz\") pod \"console-operator-58897d9998-dfqm8\" (UID: \"d2cc0b59-1828-4d02-b898-a260712479fa\") " pod="openshift-console-operator/console-operator-58897d9998-dfqm8" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760160 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c115cc0e-4491-4b7b-83f3-349433ce3b08-service-ca-bundle\") pod \"router-default-5444994796-zwlc4\" (UID: \"c115cc0e-4491-4b7b-83f3-349433ce3b08\") " pod="openshift-ingress/router-default-5444994796-zwlc4" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760180 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/051b0190-5f8c-42e4-af6c-8a5dd401ef52-registry-tls\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760196 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/efd00913-5fd9-4268-b753-a449b7d25a16-etcd-client\") pod \"etcd-operator-b45778765-n9dr5\" (UID: \"efd00913-5fd9-4268-b753-a449b7d25a16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n9dr5" Dec 09 12:08:43 crc 
Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760211 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gdkc\" (UniqueName: \"kubernetes.io/projected/9a65f95f-2f95-4750-a78a-08354df01f6d-kube-api-access-6gdkc\") pod \"cluster-image-registry-operator-dc59b4c8b-84hbp\" (UID: \"9a65f95f-2f95-4750-a78a-08354df01f6d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-84hbp"
Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760232 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95531044-b9d4-4231-ac7c-be2850f2cbfd-client-ca\") pod \"route-controller-manager-6576b87f9c-qgssk\" (UID: \"95531044-b9d4-4231-ac7c-be2850f2cbfd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgssk"
Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760266 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/051b0190-5f8c-42e4-af6c-8a5dd401ef52-ca-trust-extracted\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl"
Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760282 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dced0a5-18f4-4cf6-b497-1d8dad926744-config\") pod \"controller-manager-879f6c89f-4bx5d\" (UID: \"1dced0a5-18f4-4cf6-b497-1d8dad926744\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4bx5d"
Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760297 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd"
Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760314 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8f239ab2-506b-4621-a2a4-0f0bd4b4b6d7-srv-cert\") pod \"catalog-operator-68c6474976-jjr45\" (UID: \"8f239ab2-506b-4621-a2a4-0f0bd4b4b6d7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jjr45"
Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760334 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd"
Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760349 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-rm2zp"
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-rm2zp" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760354 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcbeabb7-793d-476c-98f3-e559f7342f1f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-25m9v\" (UID: \"fcbeabb7-793d-476c-98f3-e559f7342f1f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-25m9v" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760373 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5503d403-7960-40ee-86be-586e5c03a682-images\") pod \"machine-config-operator-74547568cd-l27ln\" (UID: \"5503d403-7960-40ee-86be-586e5c03a682\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l27ln" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760389 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd84ab53-f698-428b-b831-cf6605d01965-serving-cert\") pod \"service-ca-operator-777779d784-87shv\" (UID: \"cd84ab53-f698-428b-b831-cf6605d01965\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-87shv" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760407 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rzx2\" (UniqueName: \"kubernetes.io/projected/c115cc0e-4491-4b7b-83f3-349433ce3b08-kube-api-access-4rzx2\") pod \"router-default-5444994796-zwlc4\" (UID: \"c115cc0e-4491-4b7b-83f3-349433ce3b08\") " pod="openshift-ingress/router-default-5444994796-zwlc4" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760425 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/932056a2-9503-407e-9502-6c90b5ebe586-machine-approver-tls\") pod \"machine-approver-56656f9798-tgr6h\" (UID: \"932056a2-9503-407e-9502-6c90b5ebe586\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tgr6h" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760441 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95531044-b9d4-4231-ac7c-be2850f2cbfd-serving-cert\") pod \"route-controller-manager-6576b87f9c-qgssk\" (UID: \"95531044-b9d4-4231-ac7c-be2850f2cbfd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgssk" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760458 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/71161079-7313-4f68-b716-a4650e0af898-audit-policies\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760478 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca-audit\") pod \"apiserver-76f77b778f-g2x6q\" (UID: \"2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca\") " pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 
Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760499 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/30c5a311-d7ec-49f9-9de2-535aed0af228-bound-sa-token\") pod \"ingress-operator-5b745b69d9-gdjbp\" (UID: \"30c5a311-d7ec-49f9-9de2-535aed0af228\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gdjbp"
Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760515 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f48b902-332e-4fa2-9724-a947a6c8ad3a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-9w89v\" (UID: \"0f48b902-332e-4fa2-9724-a947a6c8ad3a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9w89v"
Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760532 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c115cc0e-4491-4b7b-83f3-349433ce3b08-stats-auth\") pod \"router-default-5444994796-zwlc4\" (UID: \"c115cc0e-4491-4b7b-83f3-349433ce3b08\") " pod="openshift-ingress/router-default-5444994796-zwlc4"
Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760549 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f48b902-332e-4fa2-9724-a947a6c8ad3a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-9w89v\" (UID: \"0f48b902-332e-4fa2-9724-a947a6c8ad3a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9w89v"
Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760567 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd"
Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760584 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a51c80aa-c8f8-4ddf-89e7-fae8e5a7ecc8-config\") pod \"machine-api-operator-5694c8668f-hhcfv\" (UID: \"a51c80aa-c8f8-4ddf-89e7-fae8e5a7ecc8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hhcfv"
Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760598 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2537d42d-31de-48b8-ae7a-afaba0d36376-console-config\") pod \"console-f9d7485db-wqfw9\" (UID: \"2537d42d-31de-48b8-ae7a-afaba0d36376\") " pod="openshift-console/console-f9d7485db-wqfw9"
Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760616 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca-etcd-client\") pod \"apiserver-76f77b778f-g2x6q\" (UID: \"2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca\") " pod="openshift-apiserver/apiserver-76f77b778f-g2x6q"
Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760634 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd"
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760684 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg2zq\" (UniqueName: \"kubernetes.io/projected/85e77654-6af7-4c35-bf92-f7c67d49dd2d-kube-api-access-sg2zq\") pod \"dns-operator-744455d44c-qnl9t\" (UID: \"85e77654-6af7-4c35-bf92-f7c67d49dd2d\") " pod="openshift-dns-operator/dns-operator-744455d44c-qnl9t" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760731 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmmx6\" (UniqueName: \"kubernetes.io/projected/2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca-kube-api-access-fmmx6\") pod \"apiserver-76f77b778f-g2x6q\" (UID: \"2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca\") " pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760750 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6049a17c-c44f-4d71-a42b-7be7a525fa90-service-ca-bundle\") pod \"authentication-operator-69f744f599-74sms\" (UID: \"6049a17c-c44f-4d71-a42b-7be7a525fa90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-74sms" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760780 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6049a17c-c44f-4d71-a42b-7be7a525fa90-serving-cert\") pod \"authentication-operator-69f744f599-74sms\" (UID: \"6049a17c-c44f-4d71-a42b-7be7a525fa90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-74sms" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760824 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96aedd83-c6c1-4b08-8d47-43cd63aaae68-config-volume\") pod \"collect-profiles-29421360-vvgqb\" (UID: \"96aedd83-c6c1-4b08-8d47-43cd63aaae68\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-vvgqb" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760883 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/932056a2-9503-407e-9502-6c90b5ebe586-auth-proxy-config\") pod \"machine-approver-56656f9798-tgr6h\" (UID: \"932056a2-9503-407e-9502-6c90b5ebe586\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tgr6h" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760956 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd84ab53-f698-428b-b831-cf6605d01965-config\") pod \"service-ca-operator-777779d784-87shv\" (UID: \"cd84ab53-f698-428b-b831-cf6605d01965\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-87shv" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.760981 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/efd00913-5fd9-4268-b753-a449b7d25a16-config\") pod \"etcd-operator-b45778765-n9dr5\" (UID: \"efd00913-5fd9-4268-b753-a449b7d25a16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n9dr5" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.761005 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a51c80aa-c8f8-4ddf-89e7-fae8e5a7ecc8-images\") pod \"machine-api-operator-5694c8668f-hhcfv\" (UID: \"a51c80aa-c8f8-4ddf-89e7-fae8e5a7ecc8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hhcfv" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.761015 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1dced0a5-18f4-4cf6-b497-1d8dad926744-client-ca\") pod \"controller-manager-879f6c89f-4bx5d\" (UID: \"1dced0a5-18f4-4cf6-b497-1d8dad926744\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4bx5d" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.761029 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6-audit-dir\") pod \"apiserver-7bbb656c7d-9vrft\" (UID: \"08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9vrft" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.761056 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt4nm\" (UniqueName: \"kubernetes.io/projected/b5af52df-627e-4b20-a29e-3f66fbafe5b6-kube-api-access-mt4nm\") pod \"packageserver-d55dfcdfc-d5b99\" (UID: \"b5af52df-627e-4b20-a29e-3f66fbafe5b6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5b99" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.761081 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d2cc0b59-1828-4d02-b898-a260712479fa-trusted-ca\") pod \"console-operator-58897d9998-dfqm8\" (UID: \"d2cc0b59-1828-4d02-b898-a260712479fa\") " pod="openshift-console-operator/console-operator-58897d9998-dfqm8" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.761106 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9a65f95f-2f95-4750-a78a-08354df01f6d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-84hbp\" (UID: \"9a65f95f-2f95-4750-a78a-08354df01f6d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-84hbp" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.761131 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94df2ee0-4918-4bb9-b11f-4da797c1376f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6g6xm\" (UID: \"94df2ee0-4918-4bb9-b11f-4da797c1376f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6g6xm" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.761159 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/051b0190-5f8c-42e4-af6c-8a5dd401ef52-registry-certificates\") pod 
\"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.761183 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.761200 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5swxf\" (UniqueName: \"kubernetes.io/projected/96aedd83-c6c1-4b08-8d47-43cd63aaae68-kube-api-access-5swxf\") pod \"collect-profiles-29421360-vvgqb\" (UID: \"96aedd83-c6c1-4b08-8d47-43cd63aaae68\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-vvgqb" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.761217 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4b7v\" (UniqueName: \"kubernetes.io/projected/30c5a311-d7ec-49f9-9de2-535aed0af228-kube-api-access-x4b7v\") pod \"ingress-operator-5b745b69d9-gdjbp\" (UID: \"30c5a311-d7ec-49f9-9de2-535aed0af228\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gdjbp" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.761234 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9a65f95f-2f95-4750-a78a-08354df01f6d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-84hbp\" (UID: \"9a65f95f-2f95-4750-a78a-08354df01f6d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-84hbp" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.761267 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/efd00913-5fd9-4268-b753-a449b7d25a16-etcd-ca\") pod \"etcd-operator-b45778765-n9dr5\" (UID: \"efd00913-5fd9-4268-b753-a449b7d25a16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n9dr5" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.761289 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dced0a5-18f4-4cf6-b497-1d8dad926744-serving-cert\") pod \"controller-manager-879f6c89f-4bx5d\" (UID: \"1dced0a5-18f4-4cf6-b497-1d8dad926744\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4bx5d" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.761315 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cskbh\" (UniqueName: \"kubernetes.io/projected/c8828226-d424-4df9-b753-d3af05d4f07e-kube-api-access-cskbh\") pod \"migrator-59844c95c7-wzw99\" (UID: \"c8828226-d424-4df9-b753-d3af05d4f07e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wzw99" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.761339 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csc7n\" (UniqueName: \"kubernetes.io/projected/2becf945-4a01-49fd-a2c5-788632898a32-kube-api-access-csc7n\") pod \"marketplace-operator-79b997595-xgfp5\" 
(UID: \"2becf945-4a01-49fd-a2c5-788632898a32\") " pod="openshift-marketplace/marketplace-operator-79b997595-xgfp5" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.761356 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94df2ee0-4918-4bb9-b11f-4da797c1376f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6g6xm\" (UID: \"94df2ee0-4918-4bb9-b11f-4da797c1376f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6g6xm" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.761374 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.761883 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-xnzx7"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.762508 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca-image-import-ca\") pod \"apiserver-76f77b778f-g2x6q\" (UID: \"2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca\") " pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" Dec 09 12:08:43 crc kubenswrapper[4970]: E1209 12:08:43.762868 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:44.262851602 +0000 UTC m=+136.823332663 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.763229 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/932056a2-9503-407e-9502-6c90b5ebe586-config\") pod \"machine-approver-56656f9798-tgr6h\" (UID: \"932056a2-9503-407e-9502-6c90b5ebe586\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tgr6h" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.763632 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/aa8c9620-293f-4658-9c69-0b843396613b-available-featuregates\") pod \"openshift-config-operator-7777fb866f-8qz75\" (UID: \"aa8c9620-293f-4658-9c69-0b843396613b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8qz75" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.763704 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5397d37-40af-40bf-9bd4-b8a267c3c7b1-config\") pod \"openshift-apiserver-operator-796bbdcf4f-s58cx\" (UID: \"c5397d37-40af-40bf-9bd4-b8a267c3c7b1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s58cx" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.763903 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca-config\") pod \"apiserver-76f77b778f-g2x6q\" (UID: \"2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca\") " pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.764217 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2cc0b59-1828-4d02-b898-a260712479fa-config\") pod \"console-operator-58897d9998-dfqm8\" (UID: \"d2cc0b59-1828-4d02-b898-a260712479fa\") " pod="openshift-console-operator/console-operator-58897d9998-dfqm8" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.764324 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6049a17c-c44f-4d71-a42b-7be7a525fa90-config\") pod \"authentication-operator-69f744f599-74sms\" (UID: \"6049a17c-c44f-4d71-a42b-7be7a525fa90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-74sms" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.764550 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca-trusted-ca-bundle\") pod \"apiserver-76f77b778f-g2x6q\" (UID: \"2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca\") " pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.764920 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6-audit-policies\") pod 
\"apiserver-7bbb656c7d-9vrft\" (UID: \"08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9vrft" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.765513 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d2cc0b59-1828-4d02-b898-a260712479fa-trusted-ca\") pod \"console-operator-58897d9998-dfqm8\" (UID: \"d2cc0b59-1828-4d02-b898-a260712479fa\") " pod="openshift-console-operator/console-operator-58897d9998-dfqm8" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.765635 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/051b0190-5f8c-42e4-af6c-8a5dd401ef52-trusted-ca\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.765970 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71161079-7313-4f68-b716-a4650e0af898-audit-dir\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.766679 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.766716 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca-etcd-serving-ca\") pod \"apiserver-76f77b778f-g2x6q\" (UID: \"2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca\") " pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.766749 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca-node-pullsecrets\") pod \"apiserver-76f77b778f-g2x6q\" (UID: \"2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca\") " pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.767106 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.767521 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/932056a2-9503-407e-9502-6c90b5ebe586-auth-proxy-config\") pod \"machine-approver-56656f9798-tgr6h\" (UID: \"932056a2-9503-407e-9502-6c90b5ebe586\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tgr6h" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.767663 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/051b0190-5f8c-42e4-af6c-8a5dd401ef52-registry-certificates\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.767682 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/71161079-7313-4f68-b716-a4650e0af898-audit-policies\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.767724 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.767944 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/051b0190-5f8c-42e4-af6c-8a5dd401ef52-ca-trust-extracted\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.768061 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95531044-b9d4-4231-ac7c-be2850f2cbfd-client-ca\") pod \"route-controller-manager-6576b87f9c-qgssk\" (UID: \"95531044-b9d4-4231-ac7c-be2850f2cbfd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgssk" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.768510 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca-audit\") pod \"apiserver-76f77b778f-g2x6q\" (UID: \"2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca\") " pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.768951 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/051b0190-5f8c-42e4-af6c-8a5dd401ef52-registry-tls\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.769043 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.769115 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6049a17c-c44f-4d71-a42b-7be7a525fa90-service-ca-bundle\") pod \"authentication-operator-69f744f599-74sms\" (UID: \"6049a17c-c44f-4d71-a42b-7be7a525fa90\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-74sms" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.769531 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-9vrft\" (UID: \"08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9vrft" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.769693 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dced0a5-18f4-4cf6-b497-1d8dad926744-config\") pod \"controller-manager-879f6c89f-4bx5d\" (UID: \"1dced0a5-18f4-4cf6-b497-1d8dad926744\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4bx5d" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.769759 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca-audit-dir\") pod \"apiserver-76f77b778f-g2x6q\" (UID: \"2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca\") " pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.769791 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6-audit-dir\") pod \"apiserver-7bbb656c7d-9vrft\" (UID: \"08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9vrft" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.769868 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a51c80aa-c8f8-4ddf-89e7-fae8e5a7ecc8-config\") pod \"machine-api-operator-5694c8668f-hhcfv\" (UID: \"a51c80aa-c8f8-4ddf-89e7-fae8e5a7ecc8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hhcfv" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.770608 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a51c80aa-c8f8-4ddf-89e7-fae8e5a7ecc8-images\") pod \"machine-api-operator-5694c8668f-hhcfv\" (UID: \"a51c80aa-c8f8-4ddf-89e7-fae8e5a7ecc8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hhcfv" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.770680 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.770691 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca-serving-cert\") pod \"apiserver-76f77b778f-g2x6q\" (UID: \"2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca\") " pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.771046 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-9gq7t"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.771103 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/95531044-b9d4-4231-ac7c-be2850f2cbfd-config\") pod \"route-controller-manager-6576b87f9c-qgssk\" (UID: \"95531044-b9d4-4231-ac7c-be2850f2cbfd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgssk" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.771117 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rm2zp"] Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.771121 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a51c80aa-c8f8-4ddf-89e7-fae8e5a7ecc8-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hhcfv\" (UID: \"a51c80aa-c8f8-4ddf-89e7-fae8e5a7ecc8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hhcfv" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.771081 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/051b0190-5f8c-42e4-af6c-8a5dd401ef52-installation-pull-secrets\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.771457 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6049a17c-c44f-4d71-a42b-7be7a525fa90-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-74sms\" (UID: \"6049a17c-c44f-4d71-a42b-7be7a525fa90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-74sms" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.771746 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1dced0a5-18f4-4cf6-b497-1d8dad926744-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-4bx5d\" (UID: \"1dced0a5-18f4-4cf6-b497-1d8dad926744\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4bx5d" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.772191 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.772316 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95531044-b9d4-4231-ac7c-be2850f2cbfd-serving-cert\") pod \"route-controller-manager-6576b87f9c-qgssk\" (UID: \"95531044-b9d4-4231-ac7c-be2850f2cbfd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgssk" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.772327 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.772883 4970 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.773416 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.773514 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa8c9620-293f-4658-9c69-0b843396613b-serving-cert\") pod \"openshift-config-operator-7777fb866f-8qz75\" (UID: \"aa8c9620-293f-4658-9c69-0b843396613b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8qz75" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.773598 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca-etcd-client\") pod \"apiserver-76f77b778f-g2x6q\" (UID: \"2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca\") " pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.773630 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dced0a5-18f4-4cf6-b497-1d8dad926744-serving-cert\") pod \"controller-manager-879f6c89f-4bx5d\" (UID: \"1dced0a5-18f4-4cf6-b497-1d8dad926744\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4bx5d" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.773798 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6-encryption-config\") pod \"apiserver-7bbb656c7d-9vrft\" (UID: \"08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9vrft" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.773863 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6-serving-cert\") pod \"apiserver-7bbb656c7d-9vrft\" (UID: \"08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9vrft" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.774036 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6-etcd-client\") pod \"apiserver-7bbb656c7d-9vrft\" (UID: \"08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9vrft" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.774197 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.774210 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/932056a2-9503-407e-9502-6c90b5ebe586-machine-approver-tls\") pod \"machine-approver-56656f9798-tgr6h\" (UID: \"932056a2-9503-407e-9502-6c90b5ebe586\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tgr6h" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.774198 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.775028 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2cc0b59-1828-4d02-b898-a260712479fa-serving-cert\") pod \"console-operator-58897d9998-dfqm8\" (UID: \"d2cc0b59-1828-4d02-b898-a260712479fa\") " pod="openshift-console-operator/console-operator-58897d9998-dfqm8" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.775064 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6049a17c-c44f-4d71-a42b-7be7a525fa90-serving-cert\") pod \"authentication-operator-69f744f599-74sms\" (UID: \"6049a17c-c44f-4d71-a42b-7be7a525fa90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-74sms" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.775233 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca-encryption-config\") pod \"apiserver-76f77b778f-g2x6q\" (UID: \"2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca\") " pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.776874 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5397d37-40af-40bf-9bd4-b8a267c3c7b1-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-s58cx\" (UID: \"c5397d37-40af-40bf-9bd4-b8a267c3c7b1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s58cx" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.778558 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.798787 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.818382 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.839304 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.858819 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.861835 4970 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.861949 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/30c5a311-d7ec-49f9-9de2-535aed0af228-bound-sa-token\") pod \"ingress-operator-5b745b69d9-gdjbp\" (UID: \"30c5a311-d7ec-49f9-9de2-535aed0af228\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gdjbp" Dec 09 12:08:43 crc kubenswrapper[4970]: E1209 12:08:43.861989 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:44.361964762 +0000 UTC m=+136.922445813 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.862029 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f48b902-332e-4fa2-9724-a947a6c8ad3a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-9w89v\" (UID: \"0f48b902-332e-4fa2-9724-a947a6c8ad3a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9w89v" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.862064 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c115cc0e-4491-4b7b-83f3-349433ce3b08-stats-auth\") pod \"router-default-5444994796-zwlc4\" (UID: \"c115cc0e-4491-4b7b-83f3-349433ce3b08\") " pod="openshift-ingress/router-default-5444994796-zwlc4" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.862087 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f48b902-332e-4fa2-9724-a947a6c8ad3a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-9w89v\" (UID: \"0f48b902-332e-4fa2-9724-a947a6c8ad3a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9w89v" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.862120 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2537d42d-31de-48b8-ae7a-afaba0d36376-console-config\") pod \"console-f9d7485db-wqfw9\" (UID: \"2537d42d-31de-48b8-ae7a-afaba0d36376\") " pod="openshift-console/console-f9d7485db-wqfw9" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.862143 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg2zq\" (UniqueName: \"kubernetes.io/projected/85e77654-6af7-4c35-bf92-f7c67d49dd2d-kube-api-access-sg2zq\") 
pod \"dns-operator-744455d44c-qnl9t\" (UID: \"85e77654-6af7-4c35-bf92-f7c67d49dd2d\") " pod="openshift-dns-operator/dns-operator-744455d44c-qnl9t" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.862197 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96aedd83-c6c1-4b08-8d47-43cd63aaae68-config-volume\") pod \"collect-profiles-29421360-vvgqb\" (UID: \"96aedd83-c6c1-4b08-8d47-43cd63aaae68\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-vvgqb" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.862226 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd84ab53-f698-428b-b831-cf6605d01965-config\") pod \"service-ca-operator-777779d784-87shv\" (UID: \"cd84ab53-f698-428b-b831-cf6605d01965\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-87shv" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.862287 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efd00913-5fd9-4268-b753-a449b7d25a16-config\") pod \"etcd-operator-b45778765-n9dr5\" (UID: \"efd00913-5fd9-4268-b753-a449b7d25a16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n9dr5" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.862316 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt4nm\" (UniqueName: \"kubernetes.io/projected/b5af52df-627e-4b20-a29e-3f66fbafe5b6-kube-api-access-mt4nm\") pod \"packageserver-d55dfcdfc-d5b99\" (UID: \"b5af52df-627e-4b20-a29e-3f66fbafe5b6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5b99" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.862470 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9a65f95f-2f95-4750-a78a-08354df01f6d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-84hbp\" (UID: \"9a65f95f-2f95-4750-a78a-08354df01f6d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-84hbp" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.862498 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94df2ee0-4918-4bb9-b11f-4da797c1376f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6g6xm\" (UID: \"94df2ee0-4918-4bb9-b11f-4da797c1376f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6g6xm" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.862526 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5swxf\" (UniqueName: \"kubernetes.io/projected/96aedd83-c6c1-4b08-8d47-43cd63aaae68-kube-api-access-5swxf\") pod \"collect-profiles-29421360-vvgqb\" (UID: \"96aedd83-c6c1-4b08-8d47-43cd63aaae68\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-vvgqb" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.862551 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4b7v\" (UniqueName: \"kubernetes.io/projected/30c5a311-d7ec-49f9-9de2-535aed0af228-kube-api-access-x4b7v\") pod \"ingress-operator-5b745b69d9-gdjbp\" (UID: \"30c5a311-d7ec-49f9-9de2-535aed0af228\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gdjbp" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.862572 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9a65f95f-2f95-4750-a78a-08354df01f6d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-84hbp\" (UID: \"9a65f95f-2f95-4750-a78a-08354df01f6d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-84hbp" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.862593 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/efd00913-5fd9-4268-b753-a449b7d25a16-etcd-ca\") pod \"etcd-operator-b45778765-n9dr5\" (UID: \"efd00913-5fd9-4268-b753-a449b7d25a16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n9dr5" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.862618 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cskbh\" (UniqueName: \"kubernetes.io/projected/c8828226-d424-4df9-b753-d3af05d4f07e-kube-api-access-cskbh\") pod \"migrator-59844c95c7-wzw99\" (UID: \"c8828226-d424-4df9-b753-d3af05d4f07e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wzw99" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.862642 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csc7n\" (UniqueName: \"kubernetes.io/projected/2becf945-4a01-49fd-a2c5-788632898a32-kube-api-access-csc7n\") pod \"marketplace-operator-79b997595-xgfp5\" (UID: \"2becf945-4a01-49fd-a2c5-788632898a32\") " pod="openshift-marketplace/marketplace-operator-79b997595-xgfp5" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.862667 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94df2ee0-4918-4bb9-b11f-4da797c1376f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6g6xm\" (UID: \"94df2ee0-4918-4bb9-b11f-4da797c1376f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6g6xm" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.862711 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwxrq\" (UniqueName: \"kubernetes.io/projected/f7b83f2f-f485-4b2f-9fc8-87581c835972-kube-api-access-jwxrq\") pod \"cluster-samples-operator-665b6dd947-r6fwb\" (UID: \"f7b83f2f-f485-4b2f-9fc8-87581c835972\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r6fwb" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.862739 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.862773 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5156ea95-86b1-44f3-979a-87fd807945c7-config\") pod \"kube-controller-manager-operator-78b949d7b-jl2n2\" (UID: \"5156ea95-86b1-44f3-979a-87fd807945c7\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jl2n2" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.862797 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2becf945-4a01-49fd-a2c5-788632898a32-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xgfp5\" (UID: \"2becf945-4a01-49fd-a2c5-788632898a32\") " pod="openshift-marketplace/marketplace-operator-79b997595-xgfp5" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.862823 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg864\" (UniqueName: \"kubernetes.io/projected/efd00913-5fd9-4268-b753-a449b7d25a16-kube-api-access-xg864\") pod \"etcd-operator-b45778765-n9dr5\" (UID: \"efd00913-5fd9-4268-b753-a449b7d25a16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n9dr5" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.862846 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tpzp\" (UniqueName: \"kubernetes.io/projected/5503d403-7960-40ee-86be-586e5c03a682-kube-api-access-7tpzp\") pod \"machine-config-operator-74547568cd-l27ln\" (UID: \"5503d403-7960-40ee-86be-586e5c03a682\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l27ln" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.862996 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/30c5a311-d7ec-49f9-9de2-535aed0af228-metrics-tls\") pod \"ingress-operator-5b745b69d9-gdjbp\" (UID: \"30c5a311-d7ec-49f9-9de2-535aed0af228\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gdjbp" Dec 09 12:08:43 crc kubenswrapper[4970]: E1209 12:08:43.863029 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:44.363013575 +0000 UTC m=+136.923494626 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.863057 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/30c5a311-d7ec-49f9-9de2-535aed0af228-trusted-ca\") pod \"ingress-operator-5b745b69d9-gdjbp\" (UID: \"30c5a311-d7ec-49f9-9de2-535aed0af228\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gdjbp" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.863085 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b5af52df-627e-4b20-a29e-3f66fbafe5b6-tmpfs\") pod \"packageserver-d55dfcdfc-d5b99\" (UID: \"b5af52df-627e-4b20-a29e-3f66fbafe5b6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5b99" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.863102 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2537d42d-31de-48b8-ae7a-afaba0d36376-console-oauth-config\") pod \"console-f9d7485db-wqfw9\" (UID: \"2537d42d-31de-48b8-ae7a-afaba0d36376\") " pod="openshift-console/console-f9d7485db-wqfw9" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.863120 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c115cc0e-4491-4b7b-83f3-349433ce3b08-default-certificate\") pod \"router-default-5444994796-zwlc4\" (UID: \"c115cc0e-4491-4b7b-83f3-349433ce3b08\") " pod="openshift-ingress/router-default-5444994796-zwlc4" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.863139 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85dwm\" (UniqueName: \"kubernetes.io/projected/cd84ab53-f698-428b-b831-cf6605d01965-kube-api-access-85dwm\") pod \"service-ca-operator-777779d784-87shv\" (UID: \"cd84ab53-f698-428b-b831-cf6605d01965\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-87shv" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.863163 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb8wl\" (UniqueName: \"kubernetes.io/projected/8f239ab2-506b-4621-a2a4-0f0bd4b4b6d7-kube-api-access-zb8wl\") pod \"catalog-operator-68c6474976-jjr45\" (UID: \"8f239ab2-506b-4621-a2a4-0f0bd4b4b6d7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jjr45" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.863213 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5503d403-7960-40ee-86be-586e5c03a682-auth-proxy-config\") pod \"machine-config-operator-74547568cd-l27ln\" (UID: \"5503d403-7960-40ee-86be-586e5c03a682\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l27ln" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.863236 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-fm25f\" (UniqueName: \"kubernetes.io/projected/8709261f-420d-4fee-908c-a7e1074959cb-kube-api-access-fm25f\") pod \"control-plane-machine-set-operator-78cbb6b69f-8lbhv\" (UID: \"8709261f-420d-4fee-908c-a7e1074959cb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8lbhv" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.864689 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcbeabb7-793d-476c-98f3-e559f7342f1f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-25m9v\" (UID: \"fcbeabb7-793d-476c-98f3-e559f7342f1f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-25m9v" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.863872 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b5af52df-627e-4b20-a29e-3f66fbafe5b6-tmpfs\") pod \"packageserver-d55dfcdfc-d5b99\" (UID: \"b5af52df-627e-4b20-a29e-3f66fbafe5b6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5b99" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.864421 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5503d403-7960-40ee-86be-586e5c03a682-auth-proxy-config\") pod \"machine-config-operator-74547568cd-l27ln\" (UID: \"5503d403-7960-40ee-86be-586e5c03a682\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l27ln" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.864773 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fcbeabb7-793d-476c-98f3-e559f7342f1f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-25m9v\" (UID: \"fcbeabb7-793d-476c-98f3-e559f7342f1f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-25m9v" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.864794 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c115cc0e-4491-4b7b-83f3-349433ce3b08-metrics-certs\") pod \"router-default-5444994796-zwlc4\" (UID: \"c115cc0e-4491-4b7b-83f3-349433ce3b08\") " pod="openshift-ingress/router-default-5444994796-zwlc4" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.864815 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5156ea95-86b1-44f3-979a-87fd807945c7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-jl2n2\" (UID: \"5156ea95-86b1-44f3-979a-87fd807945c7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jl2n2" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.864834 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8709261f-420d-4fee-908c-a7e1074959cb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-8lbhv\" (UID: \"8709261f-420d-4fee-908c-a7e1074959cb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8lbhv" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.864532 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9a65f95f-2f95-4750-a78a-08354df01f6d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-84hbp\" (UID: \"9a65f95f-2f95-4750-a78a-08354df01f6d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-84hbp" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.864034 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5156ea95-86b1-44f3-979a-87fd807945c7-config\") pod \"kube-controller-manager-operator-78b949d7b-jl2n2\" (UID: \"5156ea95-86b1-44f3-979a-87fd807945c7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jl2n2" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.864854 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/96aedd83-c6c1-4b08-8d47-43cd63aaae68-secret-volume\") pod \"collect-profiles-29421360-vvgqb\" (UID: \"96aedd83-c6c1-4b08-8d47-43cd63aaae68\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-vvgqb" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.864941 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2becf945-4a01-49fd-a2c5-788632898a32-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xgfp5\" (UID: \"2becf945-4a01-49fd-a2c5-788632898a32\") " pod="openshift-marketplace/marketplace-operator-79b997595-xgfp5" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.864959 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5503d403-7960-40ee-86be-586e5c03a682-proxy-tls\") pod \"machine-config-operator-74547568cd-l27ln\" (UID: \"5503d403-7960-40ee-86be-586e5c03a682\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l27ln" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.864974 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/85e77654-6af7-4c35-bf92-f7c67d49dd2d-metrics-tls\") pod \"dns-operator-744455d44c-qnl9t\" (UID: \"85e77654-6af7-4c35-bf92-f7c67d49dd2d\") " pod="openshift-dns-operator/dns-operator-744455d44c-qnl9t" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.864997 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8225c927-d82e-41f7-a335-f65c50f6f4ce-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fctsp\" (UID: \"8225c927-d82e-41f7-a335-f65c50f6f4ce\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fctsp" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.865020 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b5af52df-627e-4b20-a29e-3f66fbafe5b6-webhook-cert\") pod \"packageserver-d55dfcdfc-d5b99\" (UID: \"b5af52df-627e-4b20-a29e-3f66fbafe5b6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5b99" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.865037 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgpx6\" (UniqueName: \"kubernetes.io/projected/94df2ee0-4918-4bb9-b11f-4da797c1376f-kube-api-access-kgpx6\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-6g6xm\" (UID: \"94df2ee0-4918-4bb9-b11f-4da797c1376f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6g6xm" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.865075 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2537d42d-31de-48b8-ae7a-afaba0d36376-console-serving-cert\") pod \"console-f9d7485db-wqfw9\" (UID: \"2537d42d-31de-48b8-ae7a-afaba0d36376\") " pod="openshift-console/console-f9d7485db-wqfw9" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.865092 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5156ea95-86b1-44f3-979a-87fd807945c7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-jl2n2\" (UID: \"5156ea95-86b1-44f3-979a-87fd807945c7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jl2n2" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.865106 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b5af52df-627e-4b20-a29e-3f66fbafe5b6-apiservice-cert\") pod \"packageserver-d55dfcdfc-d5b99\" (UID: \"b5af52df-627e-4b20-a29e-3f66fbafe5b6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5b99" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.865129 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/de05fec3-6b27-4c6f-9d72-ed1a35954ba1-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qmvms\" (UID: \"de05fec3-6b27-4c6f-9d72-ed1a35954ba1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qmvms" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.866138 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c115cc0e-4491-4b7b-83f3-349433ce3b08-stats-auth\") pod \"router-default-5444994796-zwlc4\" (UID: \"c115cc0e-4491-4b7b-83f3-349433ce3b08\") " pod="openshift-ingress/router-default-5444994796-zwlc4" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.866340 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2537d42d-31de-48b8-ae7a-afaba0d36376-service-ca\") pod \"console-f9d7485db-wqfw9\" (UID: \"2537d42d-31de-48b8-ae7a-afaba0d36376\") " pod="openshift-console/console-f9d7485db-wqfw9" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.866398 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efd00913-5fd9-4268-b753-a449b7d25a16-serving-cert\") pod \"etcd-operator-b45778765-n9dr5\" (UID: \"efd00913-5fd9-4268-b753-a449b7d25a16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n9dr5" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.866434 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8f239ab2-506b-4621-a2a4-0f0bd4b4b6d7-profile-collector-cert\") pod \"catalog-operator-68c6474976-jjr45\" (UID: \"8f239ab2-506b-4621-a2a4-0f0bd4b4b6d7\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jjr45" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.866454 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f7b83f2f-f485-4b2f-9fc8-87581c835972-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-r6fwb\" (UID: \"f7b83f2f-f485-4b2f-9fc8-87581c835972\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r6fwb" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.866500 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/efd00913-5fd9-4268-b753-a449b7d25a16-etcd-service-ca\") pod \"etcd-operator-b45778765-n9dr5\" (UID: \"efd00913-5fd9-4268-b753-a449b7d25a16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n9dr5" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.866540 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfrjg\" (UniqueName: \"kubernetes.io/projected/8225c927-d82e-41f7-a335-f65c50f6f4ce-kube-api-access-sfrjg\") pod \"multus-admission-controller-857f4d67dd-fctsp\" (UID: \"8225c927-d82e-41f7-a335-f65c50f6f4ce\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fctsp" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.866577 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2537d42d-31de-48b8-ae7a-afaba0d36376-oauth-serving-cert\") pod \"console-f9d7485db-wqfw9\" (UID: \"2537d42d-31de-48b8-ae7a-afaba0d36376\") " pod="openshift-console/console-f9d7485db-wqfw9" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.866602 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9a65f95f-2f95-4750-a78a-08354df01f6d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-84hbp\" (UID: \"9a65f95f-2f95-4750-a78a-08354df01f6d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-84hbp" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.866629 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt5kb\" (UniqueName: \"kubernetes.io/projected/2537d42d-31de-48b8-ae7a-afaba0d36376-kube-api-access-dt5kb\") pod \"console-f9d7485db-wqfw9\" (UID: \"2537d42d-31de-48b8-ae7a-afaba0d36376\") " pod="openshift-console/console-f9d7485db-wqfw9" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.866661 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2537d42d-31de-48b8-ae7a-afaba0d36376-trusted-ca-bundle\") pod \"console-f9d7485db-wqfw9\" (UID: \"2537d42d-31de-48b8-ae7a-afaba0d36376\") " pod="openshift-console/console-f9d7485db-wqfw9" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.866689 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgjlq\" (UniqueName: \"kubernetes.io/projected/0f48b902-332e-4fa2-9724-a947a6c8ad3a-kube-api-access-pgjlq\") pod \"openshift-controller-manager-operator-756b6f6bc6-9w89v\" (UID: \"0f48b902-332e-4fa2-9724-a947a6c8ad3a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9w89v" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.866722 
4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42q7s\" (UniqueName: \"kubernetes.io/projected/de05fec3-6b27-4c6f-9d72-ed1a35954ba1-kube-api-access-42q7s\") pod \"package-server-manager-789f6589d5-qmvms\" (UID: \"de05fec3-6b27-4c6f-9d72-ed1a35954ba1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qmvms" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.866760 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c115cc0e-4491-4b7b-83f3-349433ce3b08-service-ca-bundle\") pod \"router-default-5444994796-zwlc4\" (UID: \"c115cc0e-4491-4b7b-83f3-349433ce3b08\") " pod="openshift-ingress/router-default-5444994796-zwlc4" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.866795 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gdkc\" (UniqueName: \"kubernetes.io/projected/9a65f95f-2f95-4750-a78a-08354df01f6d-kube-api-access-6gdkc\") pod \"cluster-image-registry-operator-dc59b4c8b-84hbp\" (UID: \"9a65f95f-2f95-4750-a78a-08354df01f6d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-84hbp" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.866820 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/efd00913-5fd9-4268-b753-a449b7d25a16-etcd-client\") pod \"etcd-operator-b45778765-n9dr5\" (UID: \"efd00913-5fd9-4268-b753-a449b7d25a16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n9dr5" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.866996 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8f239ab2-506b-4621-a2a4-0f0bd4b4b6d7-srv-cert\") pod \"catalog-operator-68c6474976-jjr45\" (UID: \"8f239ab2-506b-4621-a2a4-0f0bd4b4b6d7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jjr45" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.867034 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcbeabb7-793d-476c-98f3-e559f7342f1f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-25m9v\" (UID: \"fcbeabb7-793d-476c-98f3-e559f7342f1f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-25m9v" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.867053 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5503d403-7960-40ee-86be-586e5c03a682-images\") pod \"machine-config-operator-74547568cd-l27ln\" (UID: \"5503d403-7960-40ee-86be-586e5c03a682\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l27ln" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.867072 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd84ab53-f698-428b-b831-cf6605d01965-serving-cert\") pod \"service-ca-operator-777779d784-87shv\" (UID: \"cd84ab53-f698-428b-b831-cf6605d01965\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-87shv" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.867089 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rzx2\" (UniqueName: 
\"kubernetes.io/projected/c115cc0e-4491-4b7b-83f3-349433ce3b08-kube-api-access-4rzx2\") pod \"router-default-5444994796-zwlc4\" (UID: \"c115cc0e-4491-4b7b-83f3-349433ce3b08\") " pod="openshift-ingress/router-default-5444994796-zwlc4" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.867752 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c115cc0e-4491-4b7b-83f3-349433ce3b08-service-ca-bundle\") pod \"router-default-5444994796-zwlc4\" (UID: \"c115cc0e-4491-4b7b-83f3-349433ce3b08\") " pod="openshift-ingress/router-default-5444994796-zwlc4" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.868537 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c115cc0e-4491-4b7b-83f3-349433ce3b08-default-certificate\") pod \"router-default-5444994796-zwlc4\" (UID: \"c115cc0e-4491-4b7b-83f3-349433ce3b08\") " pod="openshift-ingress/router-default-5444994796-zwlc4" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.868562 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/9a65f95f-2f95-4750-a78a-08354df01f6d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-84hbp\" (UID: \"9a65f95f-2f95-4750-a78a-08354df01f6d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-84hbp" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.868988 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/85e77654-6af7-4c35-bf92-f7c67d49dd2d-metrics-tls\") pod \"dns-operator-744455d44c-qnl9t\" (UID: \"85e77654-6af7-4c35-bf92-f7c67d49dd2d\") " pod="openshift-dns-operator/dns-operator-744455d44c-qnl9t" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.869228 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5156ea95-86b1-44f3-979a-87fd807945c7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-jl2n2\" (UID: \"5156ea95-86b1-44f3-979a-87fd807945c7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jl2n2" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.869585 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c115cc0e-4491-4b7b-83f3-349433ce3b08-metrics-certs\") pod \"router-default-5444994796-zwlc4\" (UID: \"c115cc0e-4491-4b7b-83f3-349433ce3b08\") " pod="openshift-ingress/router-default-5444994796-zwlc4" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.870235 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b5af52df-627e-4b20-a29e-3f66fbafe5b6-webhook-cert\") pod \"packageserver-d55dfcdfc-d5b99\" (UID: \"b5af52df-627e-4b20-a29e-3f66fbafe5b6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5b99" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.870304 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b5af52df-627e-4b20-a29e-3f66fbafe5b6-apiservice-cert\") pod \"packageserver-d55dfcdfc-d5b99\" (UID: \"b5af52df-627e-4b20-a29e-3f66fbafe5b6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5b99" Dec 09 12:08:43 crc 
kubenswrapper[4970]: I1209 12:08:43.870558 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/f7b83f2f-f485-4b2f-9fc8-87581c835972-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-r6fwb\" (UID: \"f7b83f2f-f485-4b2f-9fc8-87581c835972\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r6fwb" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.878609 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.899482 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.909013 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/96aedd83-c6c1-4b08-8d47-43cd63aaae68-secret-volume\") pod \"collect-profiles-29421360-vvgqb\" (UID: \"96aedd83-c6c1-4b08-8d47-43cd63aaae68\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-vvgqb" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.910482 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8f239ab2-506b-4621-a2a4-0f0bd4b4b6d7-profile-collector-cert\") pod \"catalog-operator-68c6474976-jjr45\" (UID: \"8f239ab2-506b-4621-a2a4-0f0bd4b4b6d7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jjr45" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.918643 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.938021 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.958973 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.967630 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:43 crc kubenswrapper[4970]: E1209 12:08:43.967739 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:44.467715693 +0000 UTC m=+137.028196744 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:43 crc kubenswrapper[4970]: E1209 12:08:43.968981 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:44.468971273 +0000 UTC m=+137.029452324 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.968992 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8225c927-d82e-41f7-a335-f65c50f6f4ce-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fctsp\" (UID: \"8225c927-d82e-41f7-a335-f65c50f6f4ce\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fctsp" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.969374 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:43 crc kubenswrapper[4970]: I1209 12:08:43.999288 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.017955 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.038785 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.044095 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f48b902-332e-4fa2-9724-a947a6c8ad3a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-9w89v\" (UID: \"0f48b902-332e-4fa2-9724-a947a6c8ad3a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9w89v" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.058336 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.071157 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:44 crc kubenswrapper[4970]: E1209 12:08:44.071313 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:44.571290525 +0000 UTC m=+137.131771586 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.071532 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:44 crc kubenswrapper[4970]: E1209 12:08:44.072030 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:44.572010528 +0000 UTC m=+137.132491639 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.079654 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.085582 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f48b902-332e-4fa2-9724-a947a6c8ad3a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-9w89v\" (UID: \"0f48b902-332e-4fa2-9724-a947a6c8ad3a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9w89v" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.098434 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.119711 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.128891 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2537d42d-31de-48b8-ae7a-afaba0d36376-oauth-serving-cert\") pod \"console-f9d7485db-wqfw9\" (UID: \"2537d42d-31de-48b8-ae7a-afaba0d36376\") " pod="openshift-console/console-f9d7485db-wqfw9" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.138377 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.158958 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.169740 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2537d42d-31de-48b8-ae7a-afaba0d36376-console-serving-cert\") pod \"console-f9d7485db-wqfw9\" (UID: \"2537d42d-31de-48b8-ae7a-afaba0d36376\") " pod="openshift-console/console-f9d7485db-wqfw9" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.172638 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:44 crc kubenswrapper[4970]: E1209 12:08:44.172880 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:44.672851673 +0000 UTC m=+137.233332724 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.173031 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:44 crc kubenswrapper[4970]: E1209 12:08:44.173390 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:44.673371049 +0000 UTC m=+137.233852120 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.178617 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.184089 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2537d42d-31de-48b8-ae7a-afaba0d36376-console-config\") pod \"console-f9d7485db-wqfw9\" (UID: \"2537d42d-31de-48b8-ae7a-afaba0d36376\") " pod="openshift-console/console-f9d7485db-wqfw9" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.197771 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.208037 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2537d42d-31de-48b8-ae7a-afaba0d36376-service-ca\") pod \"console-f9d7485db-wqfw9\" (UID: \"2537d42d-31de-48b8-ae7a-afaba0d36376\") " pod="openshift-console/console-f9d7485db-wqfw9" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.218066 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.227843 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2537d42d-31de-48b8-ae7a-afaba0d36376-console-oauth-config\") pod \"console-f9d7485db-wqfw9\" (UID: \"2537d42d-31de-48b8-ae7a-afaba0d36376\") " pod="openshift-console/console-f9d7485db-wqfw9" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.238953 4970 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.243716 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd84ab53-f698-428b-b831-cf6605d01965-config\") pod \"service-ca-operator-777779d784-87shv\" (UID: \"cd84ab53-f698-428b-b831-cf6605d01965\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-87shv" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.258387 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.275489 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:44 crc kubenswrapper[4970]: E1209 12:08:44.275780 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:44.775742473 +0000 UTC m=+137.336223574 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.276076 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:44 crc kubenswrapper[4970]: E1209 12:08:44.276761 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:44.776737115 +0000 UTC m=+137.337218216 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.286420 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.288667 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2537d42d-31de-48b8-ae7a-afaba0d36376-trusted-ca-bundle\") pod \"console-f9d7485db-wqfw9\" (UID: \"2537d42d-31de-48b8-ae7a-afaba0d36376\") " pod="openshift-console/console-f9d7485db-wqfw9" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.299350 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.312129 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd84ab53-f698-428b-b831-cf6605d01965-serving-cert\") pod \"service-ca-operator-777779d784-87shv\" (UID: \"cd84ab53-f698-428b-b831-cf6605d01965\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-87shv" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.319457 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.339214 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.348595 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5503d403-7960-40ee-86be-586e5c03a682-images\") pod \"machine-config-operator-74547568cd-l27ln\" (UID: \"5503d403-7960-40ee-86be-586e5c03a682\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l27ln" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.359082 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.377729 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:44 crc kubenswrapper[4970]: E1209 12:08:44.377897 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:44.877876979 +0000 UTC m=+137.438358030 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.378358 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:44 crc kubenswrapper[4970]: E1209 12:08:44.378736 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:44.878726356 +0000 UTC m=+137.439207407 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.379187 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.388876 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5503d403-7960-40ee-86be-586e5c03a682-proxy-tls\") pod \"machine-config-operator-74547568cd-l27ln\" (UID: \"5503d403-7960-40ee-86be-586e5c03a682\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l27ln" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.403580 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.407709 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/30c5a311-d7ec-49f9-9de2-535aed0af228-metrics-tls\") pod \"ingress-operator-5b745b69d9-gdjbp\" (UID: \"30c5a311-d7ec-49f9-9de2-535aed0af228\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gdjbp" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.419333 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.439985 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.459102 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.478842 4970 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:44 crc kubenswrapper[4970]: E1209 12:08:44.478997 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:44.978975971 +0000 UTC m=+137.539457032 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.479312 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:44 crc kubenswrapper[4970]: E1209 12:08:44.479602 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:44.979592151 +0000 UTC m=+137.540073202 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.484883 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.495550 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/30c5a311-d7ec-49f9-9de2-535aed0af228-trusted-ca\") pod \"ingress-operator-5b745b69d9-gdjbp\" (UID: \"30c5a311-d7ec-49f9-9de2-535aed0af228\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gdjbp" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.498391 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.509404 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/de05fec3-6b27-4c6f-9d72-ed1a35954ba1-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qmvms\" (UID: \"de05fec3-6b27-4c6f-9d72-ed1a35954ba1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qmvms" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.518193 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.539081 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.558885 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.579126 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.580905 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:44 crc kubenswrapper[4970]: E1209 12:08:44.581002 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:45.080984073 +0000 UTC m=+137.641465134 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.581180 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:44 crc kubenswrapper[4970]: E1209 12:08:44.581507 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:45.08149775 +0000 UTC m=+137.641978801 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.587950 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/efd00913-5fd9-4268-b753-a449b7d25a16-etcd-service-ca\") pod \"etcd-operator-b45778765-n9dr5\" (UID: \"efd00913-5fd9-4268-b753-a449b7d25a16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n9dr5" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.598756 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.609978 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efd00913-5fd9-4268-b753-a449b7d25a16-serving-cert\") pod \"etcd-operator-b45778765-n9dr5\" (UID: \"efd00913-5fd9-4268-b753-a449b7d25a16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n9dr5" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.618851 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.631145 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/efd00913-5fd9-4268-b753-a449b7d25a16-etcd-client\") pod \"etcd-operator-b45778765-n9dr5\" (UID: \"efd00913-5fd9-4268-b753-a449b7d25a16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n9dr5" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.639474 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.658908 4970 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.676650 4970 request.go:700] Waited for 1.009971847s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.679278 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.683614 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:44 crc kubenswrapper[4970]: E1209 12:08:44.683826 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:45.183803301 +0000 UTC m=+137.744284342 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.684238 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:44 crc kubenswrapper[4970]: E1209 12:08:44.684667 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:45.184658738 +0000 UTC m=+137.745139789 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.698935 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.703642 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efd00913-5fd9-4268-b753-a449b7d25a16-config\") pod \"etcd-operator-b45778765-n9dr5\" (UID: \"efd00913-5fd9-4268-b753-a449b7d25a16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n9dr5" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.720324 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.723592 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/efd00913-5fd9-4268-b753-a449b7d25a16-etcd-ca\") pod \"etcd-operator-b45778765-n9dr5\" (UID: \"efd00913-5fd9-4268-b753-a449b7d25a16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n9dr5" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.739984 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.759152 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.778936 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.785693 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:44 crc kubenswrapper[4970]: E1209 12:08:44.785873 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:45.285851654 +0000 UTC m=+137.846332705 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.799093 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.819463 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.838684 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.859192 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 09 12:08:44 crc kubenswrapper[4970]: E1209 12:08:44.863472 4970 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Dec 09 12:08:44 crc kubenswrapper[4970]: E1209 12:08:44.863606 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2becf945-4a01-49fd-a2c5-788632898a32-marketplace-trusted-ca podName:2becf945-4a01-49fd-a2c5-788632898a32 nodeName:}" failed. No retries permitted until 2025-12-09 12:08:45.363568782 +0000 UTC m=+137.924049883 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/2becf945-4a01-49fd-a2c5-788632898a32-marketplace-trusted-ca") pod "marketplace-operator-79b997595-xgfp5" (UID: "2becf945-4a01-49fd-a2c5-788632898a32") : failed to sync configmap cache: timed out waiting for the condition Dec 09 12:08:44 crc kubenswrapper[4970]: E1209 12:08:44.863986 4970 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: failed to sync configmap cache: timed out waiting for the condition Dec 09 12:08:44 crc kubenswrapper[4970]: E1209 12:08:44.864070 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/94df2ee0-4918-4bb9-b11f-4da797c1376f-config podName:94df2ee0-4918-4bb9-b11f-4da797c1376f nodeName:}" failed. No retries permitted until 2025-12-09 12:08:45.364048247 +0000 UTC m=+137.924529338 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/94df2ee0-4918-4bb9-b11f-4da797c1376f-config") pod "kube-storage-version-migrator-operator-b67b599dd-6g6xm" (UID: "94df2ee0-4918-4bb9-b11f-4da797c1376f") : failed to sync configmap cache: timed out waiting for the condition Dec 09 12:08:44 crc kubenswrapper[4970]: E1209 12:08:44.864159 4970 secret.go:188] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 09 12:08:44 crc kubenswrapper[4970]: E1209 12:08:44.864223 4970 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Dec 09 12:08:44 crc kubenswrapper[4970]: E1209 12:08:44.864233 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/94df2ee0-4918-4bb9-b11f-4da797c1376f-serving-cert podName:94df2ee0-4918-4bb9-b11f-4da797c1376f nodeName:}" failed. No retries permitted until 2025-12-09 12:08:45.364212052 +0000 UTC m=+137.924693153 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/94df2ee0-4918-4bb9-b11f-4da797c1376f-serving-cert") pod "kube-storage-version-migrator-operator-b67b599dd-6g6xm" (UID: "94df2ee0-4918-4bb9-b11f-4da797c1376f") : failed to sync secret cache: timed out waiting for the condition Dec 09 12:08:44 crc kubenswrapper[4970]: E1209 12:08:44.864320 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/96aedd83-c6c1-4b08-8d47-43cd63aaae68-config-volume podName:96aedd83-c6c1-4b08-8d47-43cd63aaae68 nodeName:}" failed. No retries permitted until 2025-12-09 12:08:45.364305215 +0000 UTC m=+137.924786266 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/96aedd83-c6c1-4b08-8d47-43cd63aaae68-config-volume") pod "collect-profiles-29421360-vvgqb" (UID: "96aedd83-c6c1-4b08-8d47-43cd63aaae68") : failed to sync configmap cache: timed out waiting for the condition Dec 09 12:08:44 crc kubenswrapper[4970]: E1209 12:08:44.865623 4970 secret.go:188] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition Dec 09 12:08:44 crc kubenswrapper[4970]: E1209 12:08:44.865710 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2becf945-4a01-49fd-a2c5-788632898a32-marketplace-operator-metrics podName:2becf945-4a01-49fd-a2c5-788632898a32 nodeName:}" failed. No retries permitted until 2025-12-09 12:08:45.365686589 +0000 UTC m=+137.926167690 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/2becf945-4a01-49fd-a2c5-788632898a32-marketplace-operator-metrics") pod "marketplace-operator-79b997595-xgfp5" (UID: "2becf945-4a01-49fd-a2c5-788632898a32") : failed to sync secret cache: timed out waiting for the condition Dec 09 12:08:44 crc kubenswrapper[4970]: E1209 12:08:44.865758 4970 secret.go:188] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 09 12:08:44 crc kubenswrapper[4970]: E1209 12:08:44.865824 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fcbeabb7-793d-476c-98f3-e559f7342f1f-serving-cert podName:fcbeabb7-793d-476c-98f3-e559f7342f1f nodeName:}" failed. No retries permitted until 2025-12-09 12:08:45.365804593 +0000 UTC m=+137.926285694 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/fcbeabb7-793d-476c-98f3-e559f7342f1f-serving-cert") pod "openshift-kube-scheduler-operator-5fdd9b5758-25m9v" (UID: "fcbeabb7-793d-476c-98f3-e559f7342f1f") : failed to sync secret cache: timed out waiting for the condition Dec 09 12:08:44 crc kubenswrapper[4970]: E1209 12:08:44.865858 4970 secret.go:188] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition Dec 09 12:08:44 crc kubenswrapper[4970]: E1209 12:08:44.865914 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8709261f-420d-4fee-908c-a7e1074959cb-control-plane-machine-set-operator-tls podName:8709261f-420d-4fee-908c-a7e1074959cb nodeName:}" failed. No retries permitted until 2025-12-09 12:08:45.365896276 +0000 UTC m=+137.926377387 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/8709261f-420d-4fee-908c-a7e1074959cb-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-78cbb6b69f-8lbhv" (UID: "8709261f-420d-4fee-908c-a7e1074959cb") : failed to sync secret cache: timed out waiting for the condition Dec 09 12:08:44 crc kubenswrapper[4970]: E1209 12:08:44.867762 4970 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 09 12:08:44 crc kubenswrapper[4970]: E1209 12:08:44.867789 4970 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: failed to sync configmap cache: timed out waiting for the condition Dec 09 12:08:44 crc kubenswrapper[4970]: E1209 12:08:44.867852 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8f239ab2-506b-4621-a2a4-0f0bd4b4b6d7-srv-cert podName:8f239ab2-506b-4621-a2a4-0f0bd4b4b6d7 nodeName:}" failed. No retries permitted until 2025-12-09 12:08:45.367833568 +0000 UTC m=+137.928314659 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8f239ab2-506b-4621-a2a4-0f0bd4b4b6d7-srv-cert") pod "catalog-operator-68c6474976-jjr45" (UID: "8f239ab2-506b-4621-a2a4-0f0bd4b4b6d7") : failed to sync secret cache: timed out waiting for the condition Dec 09 12:08:44 crc kubenswrapper[4970]: E1209 12:08:44.867881 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fcbeabb7-793d-476c-98f3-e559f7342f1f-config podName:fcbeabb7-793d-476c-98f3-e559f7342f1f nodeName:}" failed. No retries permitted until 2025-12-09 12:08:45.367864329 +0000 UTC m=+137.928345430 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/fcbeabb7-793d-476c-98f3-e559f7342f1f-config") pod "openshift-kube-scheduler-operator-5fdd9b5758-25m9v" (UID: "fcbeabb7-793d-476c-98f3-e559f7342f1f") : failed to sync configmap cache: timed out waiting for the condition Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.878658 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.888043 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:44 crc kubenswrapper[4970]: E1209 12:08:44.888522 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:45.388511677 +0000 UTC m=+137.948992728 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.898494 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.919361 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.938926 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.959037 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.978931 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.989401 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:44 crc kubenswrapper[4970]: E1209 12:08:44.989628 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:45.48959694 +0000 UTC m=+138.050078001 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.989796 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:44 crc kubenswrapper[4970]: E1209 12:08:44.990429 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:45.490417446 +0000 UTC m=+138.050898497 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:44 crc kubenswrapper[4970]: I1209 12:08:44.998476 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.019801 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.039285 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.059148 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.079699 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.090916 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:45 crc kubenswrapper[4970]: E1209 12:08:45.091079 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:45.591057394 +0000 UTC m=+138.151538455 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.092316 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:45 crc kubenswrapper[4970]: E1209 12:08:45.092771 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:45.592755818 +0000 UTC m=+138.153236889 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.099611 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.119572 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.138818 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.158697 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.178681 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.193312 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:45 crc kubenswrapper[4970]: E1209 12:08:45.193450 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:45.693432448 +0000 UTC m=+138.253913499 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.193540 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:45 crc kubenswrapper[4970]: E1209 12:08:45.193841 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:45.693831601 +0000 UTC m=+138.254312752 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.199617 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.219578 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.238796 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.259557 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.279605 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.298071 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:45 crc kubenswrapper[4970]: E1209 12:08:45.298403 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:45.798367213 +0000 UTC m=+138.358848314 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.298682 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:45 crc kubenswrapper[4970]: E1209 12:08:45.299134 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:45.799116827 +0000 UTC m=+138.359597908 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.308859 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.318516 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.358636 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.378372 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.399394 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.399917 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:45 crc kubenswrapper[4970]: E1209 12:08:45.400073 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:45.900047725 +0000 UTC m=+138.460528806 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.400344 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96aedd83-c6c1-4b08-8d47-43cd63aaae68-config-volume\") pod \"collect-profiles-29421360-vvgqb\" (UID: \"96aedd83-c6c1-4b08-8d47-43cd63aaae68\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-vvgqb" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.400462 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94df2ee0-4918-4bb9-b11f-4da797c1376f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6g6xm\" (UID: \"94df2ee0-4918-4bb9-b11f-4da797c1376f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6g6xm" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.400584 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94df2ee0-4918-4bb9-b11f-4da797c1376f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6g6xm\" (UID: \"94df2ee0-4918-4bb9-b11f-4da797c1376f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6g6xm" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.400680 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.400729 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2becf945-4a01-49fd-a2c5-788632898a32-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xgfp5\" (UID: \"2becf945-4a01-49fd-a2c5-788632898a32\") " pod="openshift-marketplace/marketplace-operator-79b997595-xgfp5" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.400888 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcbeabb7-793d-476c-98f3-e559f7342f1f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-25m9v\" (UID: \"fcbeabb7-793d-476c-98f3-e559f7342f1f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-25m9v" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.400946 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8709261f-420d-4fee-908c-a7e1074959cb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-8lbhv\" (UID: \"8709261f-420d-4fee-908c-a7e1074959cb\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8lbhv" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.400985 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2becf945-4a01-49fd-a2c5-788632898a32-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xgfp5\" (UID: \"2becf945-4a01-49fd-a2c5-788632898a32\") " pod="openshift-marketplace/marketplace-operator-79b997595-xgfp5" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.401297 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8f239ab2-506b-4621-a2a4-0f0bd4b4b6d7-srv-cert\") pod \"catalog-operator-68c6474976-jjr45\" (UID: \"8f239ab2-506b-4621-a2a4-0f0bd4b4b6d7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jjr45" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.401350 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcbeabb7-793d-476c-98f3-e559f7342f1f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-25m9v\" (UID: \"fcbeabb7-793d-476c-98f3-e559f7342f1f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-25m9v" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.401470 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96aedd83-c6c1-4b08-8d47-43cd63aaae68-config-volume\") pod \"collect-profiles-29421360-vvgqb\" (UID: \"96aedd83-c6c1-4b08-8d47-43cd63aaae68\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-vvgqb" Dec 09 12:08:45 crc kubenswrapper[4970]: E1209 12:08:45.401971 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:45.901949976 +0000 UTC m=+138.462431067 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.402464 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94df2ee0-4918-4bb9-b11f-4da797c1376f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6g6xm\" (UID: \"94df2ee0-4918-4bb9-b11f-4da797c1376f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6g6xm" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.402829 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcbeabb7-793d-476c-98f3-e559f7342f1f-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-25m9v\" (UID: \"fcbeabb7-793d-476c-98f3-e559f7342f1f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-25m9v" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.402993 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2becf945-4a01-49fd-a2c5-788632898a32-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-xgfp5\" (UID: \"2becf945-4a01-49fd-a2c5-788632898a32\") " pod="openshift-marketplace/marketplace-operator-79b997595-xgfp5" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.404052 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/94df2ee0-4918-4bb9-b11f-4da797c1376f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6g6xm\" (UID: \"94df2ee0-4918-4bb9-b11f-4da797c1376f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6g6xm" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.405533 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fcbeabb7-793d-476c-98f3-e559f7342f1f-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-25m9v\" (UID: \"fcbeabb7-793d-476c-98f3-e559f7342f1f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-25m9v" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.408340 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2becf945-4a01-49fd-a2c5-788632898a32-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-xgfp5\" (UID: \"2becf945-4a01-49fd-a2c5-788632898a32\") " pod="openshift-marketplace/marketplace-operator-79b997595-xgfp5" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.408817 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8709261f-420d-4fee-908c-a7e1074959cb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-8lbhv\" (UID: \"8709261f-420d-4fee-908c-a7e1074959cb\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8lbhv" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.408930 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8f239ab2-506b-4621-a2a4-0f0bd4b4b6d7-srv-cert\") pod \"catalog-operator-68c6474976-jjr45\" (UID: \"8f239ab2-506b-4621-a2a4-0f0bd4b4b6d7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jjr45" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.419062 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.459691 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.461467 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/051b0190-5f8c-42e4-af6c-8a5dd401ef52-bound-sa-token\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.479754 4970 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.499521 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.502294 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:45 crc kubenswrapper[4970]: E1209 12:08:45.502759 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:46.002725238 +0000 UTC m=+138.563206329 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.503114 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:45 crc kubenswrapper[4970]: E1209 12:08:45.503523 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:46.003511093 +0000 UTC m=+138.563992154 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.519370 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.539758 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.577861 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gznwz\" (UniqueName: \"kubernetes.io/projected/d2cc0b59-1828-4d02-b898-a260712479fa-kube-api-access-gznwz\") pod \"console-operator-58897d9998-dfqm8\" (UID: \"d2cc0b59-1828-4d02-b898-a260712479fa\") " pod="openshift-console-operator/console-operator-58897d9998-dfqm8" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.580402 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.604688 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:45 crc kubenswrapper[4970]: E1209 12:08:45.605380 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:46.105299068 +0000 UTC m=+138.665780169 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.605904 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:45 crc kubenswrapper[4970]: E1209 12:08:45.606541 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:46.106511807 +0000 UTC m=+138.666992898 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.617450 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99jm7\" (UniqueName: \"kubernetes.io/projected/1dced0a5-18f4-4cf6-b497-1d8dad926744-kube-api-access-99jm7\") pod \"controller-manager-879f6c89f-4bx5d\" (UID: \"1dced0a5-18f4-4cf6-b497-1d8dad926744\") " pod="openshift-controller-manager/controller-manager-879f6c89f-4bx5d" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.640289 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lgj6\" (UniqueName: \"kubernetes.io/projected/08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6-kube-api-access-9lgj6\") pod \"apiserver-7bbb656c7d-9vrft\" (UID: \"08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9vrft" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.649576 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9vrft" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.664936 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skr27\" (UniqueName: \"kubernetes.io/projected/aa8c9620-293f-4658-9c69-0b843396613b-kube-api-access-skr27\") pod \"openshift-config-operator-7777fb866f-8qz75\" (UID: \"aa8c9620-293f-4658-9c69-0b843396613b\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8qz75" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.677401 4970 request.go:700] Waited for 1.913120329s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/serviceaccounts/oauth-openshift/token Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.678742 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzbms\" (UniqueName: \"kubernetes.io/projected/a96798ca-3ac0-4957-887c-763502663335-kube-api-access-tzbms\") pod \"downloads-7954f5f757-frbcj\" (UID: \"a96798ca-3ac0-4957-887c-763502663335\") " pod="openshift-console/downloads-7954f5f757-frbcj" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.698149 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lmf2\" (UniqueName: \"kubernetes.io/projected/71161079-7313-4f68-b716-a4650e0af898-kube-api-access-5lmf2\") pod \"oauth-openshift-558db77b4-8txkd\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") " pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.707660 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:45 crc kubenswrapper[4970]: E1209 12:08:45.707877 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:46.207848448 +0000 UTC m=+138.768329499 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.708369 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:45 crc kubenswrapper[4970]: E1209 12:08:45.708794 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:46.208781737 +0000 UTC m=+138.769262858 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.711621 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-dfqm8" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.717271 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm9j2\" (UniqueName: \"kubernetes.io/projected/6049a17c-c44f-4d71-a42b-7be7a525fa90-kube-api-access-xm9j2\") pod \"authentication-operator-69f744f599-74sms\" (UID: \"6049a17c-c44f-4d71-a42b-7be7a525fa90\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-74sms" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.733555 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmmx6\" (UniqueName: \"kubernetes.io/projected/2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca-kube-api-access-fmmx6\") pod \"apiserver-76f77b778f-g2x6q\" (UID: \"2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca\") " pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.773571 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8qz75" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.794772 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcrv2\" (UniqueName: \"kubernetes.io/projected/a51c80aa-c8f8-4ddf-89e7-fae8e5a7ecc8-kube-api-access-jcrv2\") pod \"machine-api-operator-5694c8668f-hhcfv\" (UID: \"a51c80aa-c8f8-4ddf-89e7-fae8e5a7ecc8\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hhcfv" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.802955 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ds95d\" (UniqueName: \"kubernetes.io/projected/932056a2-9503-407e-9502-6c90b5ebe586-kube-api-access-ds95d\") pod \"machine-approver-56656f9798-tgr6h\" (UID: \"932056a2-9503-407e-9502-6c90b5ebe586\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tgr6h" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.803374 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-frbcj" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.809474 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8wbb\" (UniqueName: \"kubernetes.io/projected/95531044-b9d4-4231-ac7c-be2850f2cbfd-kube-api-access-m8wbb\") pod \"route-controller-manager-6576b87f9c-qgssk\" (UID: \"95531044-b9d4-4231-ac7c-be2850f2cbfd\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgssk" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.810702 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:45 crc kubenswrapper[4970]: E1209 12:08:45.811497 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:46.311471701 +0000 UTC m=+138.871952792 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.829110 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gx84\" (UniqueName: \"kubernetes.io/projected/c5397d37-40af-40bf-9bd4-b8a267c3c7b1-kube-api-access-2gx84\") pod \"openshift-apiserver-operator-796bbdcf4f-s58cx\" (UID: \"c5397d37-40af-40bf-9bd4-b8a267c3c7b1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s58cx" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.838428 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68hvb\" (UniqueName: \"kubernetes.io/projected/051b0190-5f8c-42e4-af6c-8a5dd401ef52-kube-api-access-68hvb\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.859286 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/30c5a311-d7ec-49f9-9de2-535aed0af228-bound-sa-token\") pod \"ingress-operator-5b745b69d9-gdjbp\" (UID: \"30c5a311-d7ec-49f9-9de2-535aed0af228\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gdjbp" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.871081 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg2zq\" (UniqueName: \"kubernetes.io/projected/85e77654-6af7-4c35-bf92-f7c67d49dd2d-kube-api-access-sg2zq\") pod \"dns-operator-744455d44c-qnl9t\" (UID: \"85e77654-6af7-4c35-bf92-f7c67d49dd2d\") " pod="openshift-dns-operator/dns-operator-744455d44c-qnl9t" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.885078 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.896199 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt4nm\" (UniqueName: \"kubernetes.io/projected/b5af52df-627e-4b20-a29e-3f66fbafe5b6-kube-api-access-mt4nm\") pod \"packageserver-d55dfcdfc-d5b99\" (UID: \"b5af52df-627e-4b20-a29e-3f66fbafe5b6\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5b99" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.899018 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4bx5d" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.912031 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:45 crc kubenswrapper[4970]: E1209 12:08:45.912561 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:46.412545643 +0000 UTC m=+138.973026694 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.914353 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5swxf\" (UniqueName: \"kubernetes.io/projected/96aedd83-c6c1-4b08-8d47-43cd63aaae68-kube-api-access-5swxf\") pod \"collect-profiles-29421360-vvgqb\" (UID: \"96aedd83-c6c1-4b08-8d47-43cd63aaae68\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-vvgqb" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.919628 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hhcfv" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.938032 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgssk" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.938321 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4b7v\" (UniqueName: \"kubernetes.io/projected/30c5a311-d7ec-49f9-9de2-535aed0af228-kube-api-access-x4b7v\") pod \"ingress-operator-5b745b69d9-gdjbp\" (UID: \"30c5a311-d7ec-49f9-9de2-535aed0af228\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gdjbp" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.948400 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gdjbp" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.955705 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csc7n\" (UniqueName: \"kubernetes.io/projected/2becf945-4a01-49fd-a2c5-788632898a32-kube-api-access-csc7n\") pod \"marketplace-operator-79b997595-xgfp5\" (UID: \"2becf945-4a01-49fd-a2c5-788632898a32\") " pod="openshift-marketplace/marketplace-operator-79b997595-xgfp5" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.966575 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.973558 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cskbh\" (UniqueName: \"kubernetes.io/projected/c8828226-d424-4df9-b753-d3af05d4f07e-kube-api-access-cskbh\") pod \"migrator-59844c95c7-wzw99\" (UID: \"c8828226-d424-4df9-b753-d3af05d4f07e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wzw99" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.975440 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s58cx" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.990620 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-74sms" Dec 09 12:08:45 crc kubenswrapper[4970]: I1209 12:08:45.997321 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwxrq\" (UniqueName: \"kubernetes.io/projected/f7b83f2f-f485-4b2f-9fc8-87581c835972-kube-api-access-jwxrq\") pod \"cluster-samples-operator-665b6dd947-r6fwb\" (UID: \"f7b83f2f-f485-4b2f-9fc8-87581c835972\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r6fwb" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:45.999995 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-dfqm8"] Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.004609 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tgr6h" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.006032 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-9vrft"] Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.010547 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-8qz75"] Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.011742 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg864\" (UniqueName: \"kubernetes.io/projected/efd00913-5fd9-4268-b753-a449b7d25a16-kube-api-access-xg864\") pod \"etcd-operator-b45778765-n9dr5\" (UID: \"efd00913-5fd9-4268-b753-a449b7d25a16\") " pod="openshift-etcd-operator/etcd-operator-b45778765-n9dr5" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.013211 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:46 crc kubenswrapper[4970]: E1209 12:08:46.013553 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:46.513529103 +0000 UTC m=+139.074010154 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.016144 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:46 crc kubenswrapper[4970]: E1209 12:08:46.016753 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:46.516738065 +0000 UTC m=+139.077219116 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.019916 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-vvgqb" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.035448 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tpzp\" (UniqueName: \"kubernetes.io/projected/5503d403-7960-40ee-86be-586e5c03a682-kube-api-access-7tpzp\") pod \"machine-config-operator-74547568cd-l27ln\" (UID: \"5503d403-7960-40ee-86be-586e5c03a682\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l27ln" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.041630 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xgfp5" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.062868 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb8wl\" (UniqueName: \"kubernetes.io/projected/8f239ab2-506b-4621-a2a4-0f0bd4b4b6d7-kube-api-access-zb8wl\") pod \"catalog-operator-68c6474976-jjr45\" (UID: \"8f239ab2-506b-4621-a2a4-0f0bd4b4b6d7\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jjr45" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.071463 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-frbcj"] Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.076546 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85dwm\" (UniqueName: \"kubernetes.io/projected/cd84ab53-f698-428b-b831-cf6605d01965-kube-api-access-85dwm\") pod \"service-ca-operator-777779d784-87shv\" (UID: \"cd84ab53-f698-428b-b831-cf6605d01965\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-87shv" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.092943 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm25f\" (UniqueName: \"kubernetes.io/projected/8709261f-420d-4fee-908c-a7e1074959cb-kube-api-access-fm25f\") pod \"control-plane-machine-set-operator-78cbb6b69f-8lbhv\" (UID: \"8709261f-420d-4fee-908c-a7e1074959cb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8lbhv" Dec 09 12:08:46 crc kubenswrapper[4970]: W1209 12:08:46.109873 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda96798ca_3ac0_4957_887c_763502663335.slice/crio-18ec60dce39ffa048610a96c27fa17b2a2d316d3e67c9d138870a4aa9f06d078 WatchSource:0}: Error finding container 18ec60dce39ffa048610a96c27fa17b2a2d316d3e67c9d138870a4aa9f06d078: Status 404 returned error can't find the container with id 18ec60dce39ffa048610a96c27fa17b2a2d316d3e67c9d138870a4aa9f06d078 Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.112696 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r6fwb" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.117302 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fcbeabb7-793d-476c-98f3-e559f7342f1f-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-25m9v\" (UID: \"fcbeabb7-793d-476c-98f3-e559f7342f1f\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-25m9v" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.126328 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:46 crc kubenswrapper[4970]: E1209 12:08:46.127001 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-09 12:08:46.62698018 +0000 UTC m=+139.187461231 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.128926 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wzw99" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.138006 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-qnl9t" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.139518 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5156ea95-86b1-44f3-979a-87fd807945c7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-jl2n2\" (UID: \"5156ea95-86b1-44f3-979a-87fd807945c7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jl2n2" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.145060 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jl2n2" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.155922 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgpx6\" (UniqueName: \"kubernetes.io/projected/94df2ee0-4918-4bb9-b11f-4da797c1376f-kube-api-access-kgpx6\") pod \"kube-storage-version-migrator-operator-b67b599dd-6g6xm\" (UID: \"94df2ee0-4918-4bb9-b11f-4da797c1376f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6g6xm" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.160370 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5b99" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.185875 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfrjg\" (UniqueName: \"kubernetes.io/projected/8225c927-d82e-41f7-a335-f65c50f6f4ce-kube-api-access-sfrjg\") pod \"multus-admission-controller-857f4d67dd-fctsp\" (UID: \"8225c927-d82e-41f7-a335-f65c50f6f4ce\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fctsp" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.196462 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt5kb\" (UniqueName: \"kubernetes.io/projected/2537d42d-31de-48b8-ae7a-afaba0d36376-kube-api-access-dt5kb\") pod \"console-f9d7485db-wqfw9\" (UID: \"2537d42d-31de-48b8-ae7a-afaba0d36376\") " pod="openshift-console/console-f9d7485db-wqfw9" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.206544 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-fctsp" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.216184 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9a65f95f-2f95-4750-a78a-08354df01f6d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-84hbp\" (UID: \"9a65f95f-2f95-4750-a78a-08354df01f6d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-84hbp" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.227406 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-87shv" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.228022 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:46 crc kubenswrapper[4970]: E1209 12:08:46.228408 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:46.728391763 +0000 UTC m=+139.288872814 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.232465 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gdkc\" (UniqueName: \"kubernetes.io/projected/9a65f95f-2f95-4750-a78a-08354df01f6d-kube-api-access-6gdkc\") pod \"cluster-image-registry-operator-dc59b4c8b-84hbp\" (UID: \"9a65f95f-2f95-4750-a78a-08354df01f6d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-84hbp" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.234905 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-wqfw9" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.243228 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l27ln" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.256609 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgjlq\" (UniqueName: \"kubernetes.io/projected/0f48b902-332e-4fa2-9724-a947a6c8ad3a-kube-api-access-pgjlq\") pod \"openshift-controller-manager-operator-756b6f6bc6-9w89v\" (UID: \"0f48b902-332e-4fa2-9724-a947a6c8ad3a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9w89v" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.275929 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42q7s\" (UniqueName: \"kubernetes.io/projected/de05fec3-6b27-4c6f-9d72-ed1a35954ba1-kube-api-access-42q7s\") pod \"package-server-manager-789f6589d5-qmvms\" (UID: \"de05fec3-6b27-4c6f-9d72-ed1a35954ba1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qmvms" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.291120 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-n9dr5" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.296512 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jjr45" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.302051 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rzx2\" (UniqueName: \"kubernetes.io/projected/c115cc0e-4491-4b7b-83f3-349433ce3b08-kube-api-access-4rzx2\") pod \"router-default-5444994796-zwlc4\" (UID: \"c115cc0e-4491-4b7b-83f3-349433ce3b08\") " pod="openshift-ingress/router-default-5444994796-zwlc4" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.306325 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-25m9v" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.329734 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.330012 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vch65\" (UniqueName: \"kubernetes.io/projected/9d08bcf7-f9b2-4ef1-b678-9e7d72ece89d-kube-api-access-vch65\") pod \"machine-config-server-62724\" (UID: \"9d08bcf7-f9b2-4ef1-b678-9e7d72ece89d\") " pod="openshift-machine-config-operator/machine-config-server-62724" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.330062 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89567\" (UniqueName: \"kubernetes.io/projected/a31fc8e6-139a-4961-afc3-a42e999cfc32-kube-api-access-89567\") pod \"service-ca-9c57cc56f-cjsvb\" (UID: \"a31fc8e6-139a-4961-afc3-a42e999cfc32\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjsvb" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.330082 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crf27\" (UniqueName: \"kubernetes.io/projected/10beda09-a5fb-4526-9609-97ecb84ba8b9-kube-api-access-crf27\") pod \"olm-operator-6b444d44fb-2d6zv\" (UID: \"10beda09-a5fb-4526-9609-97ecb84ba8b9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2d6zv" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.330154 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/86230d76-ca5d-4040-9947-8c87ba16c8b0-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-qmwx4\" (UID: \"86230d76-ca5d-4040-9947-8c87ba16c8b0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qmwx4" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.330182 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a31fc8e6-139a-4961-afc3-a42e999cfc32-signing-key\") pod \"service-ca-9c57cc56f-cjsvb\" (UID: \"a31fc8e6-139a-4961-afc3-a42e999cfc32\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjsvb" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.330218 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/10beda09-a5fb-4526-9609-97ecb84ba8b9-srv-cert\") pod \"olm-operator-6b444d44fb-2d6zv\" (UID: \"10beda09-a5fb-4526-9609-97ecb84ba8b9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2d6zv" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.330237 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5152d972-1997-46ba-9a0e-905c547895ff-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-kxbq2\" (UID: \"5152d972-1997-46ba-9a0e-905c547895ff\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kxbq2" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.330397 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9d08bcf7-f9b2-4ef1-b678-9e7d72ece89d-node-bootstrap-token\") pod \"machine-config-server-62724\" (UID: \"9d08bcf7-f9b2-4ef1-b678-9e7d72ece89d\") " pod="openshift-machine-config-operator/machine-config-server-62724" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.330464 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5152d972-1997-46ba-9a0e-905c547895ff-config\") pod \"kube-apiserver-operator-766d6c64bb-kxbq2\" (UID: \"5152d972-1997-46ba-9a0e-905c547895ff\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kxbq2" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.330512 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/10beda09-a5fb-4526-9609-97ecb84ba8b9-profile-collector-cert\") pod \"olm-operator-6b444d44fb-2d6zv\" (UID: \"10beda09-a5fb-4526-9609-97ecb84ba8b9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2d6zv" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.330568 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/86230d76-ca5d-4040-9947-8c87ba16c8b0-proxy-tls\") pod \"machine-config-controller-84d6567774-qmwx4\" (UID: \"86230d76-ca5d-4040-9947-8c87ba16c8b0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qmwx4" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.330586 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5152d972-1997-46ba-9a0e-905c547895ff-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-kxbq2\" (UID: \"5152d972-1997-46ba-9a0e-905c547895ff\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kxbq2" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.330606 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55677\" (UniqueName: \"kubernetes.io/projected/86230d76-ca5d-4040-9947-8c87ba16c8b0-kube-api-access-55677\") pod \"machine-config-controller-84d6567774-qmwx4\" (UID: \"86230d76-ca5d-4040-9947-8c87ba16c8b0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qmwx4" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.330740 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9d08bcf7-f9b2-4ef1-b678-9e7d72ece89d-certs\") pod \"machine-config-server-62724\" (UID: \"9d08bcf7-f9b2-4ef1-b678-9e7d72ece89d\") " pod="openshift-machine-config-operator/machine-config-server-62724" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.330816 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a31fc8e6-139a-4961-afc3-a42e999cfc32-signing-cabundle\") pod \"service-ca-9c57cc56f-cjsvb\" (UID: \"a31fc8e6-139a-4961-afc3-a42e999cfc32\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-cjsvb" Dec 09 12:08:46 crc kubenswrapper[4970]: E1209 12:08:46.331543 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:46.831526461 +0000 UTC m=+139.392007512 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.354698 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8lbhv" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.363189 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6g6xm" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.413042 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4bx5d"] Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.429169 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-g2x6q"] Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.431997 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a31fc8e6-139a-4961-afc3-a42e999cfc32-signing-key\") pod \"service-ca-9c57cc56f-cjsvb\" (UID: \"a31fc8e6-139a-4961-afc3-a42e999cfc32\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjsvb" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.432107 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/10beda09-a5fb-4526-9609-97ecb84ba8b9-srv-cert\") pod \"olm-operator-6b444d44fb-2d6zv\" (UID: \"10beda09-a5fb-4526-9609-97ecb84ba8b9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2d6zv" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.432150 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5152d972-1997-46ba-9a0e-905c547895ff-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-kxbq2\" (UID: \"5152d972-1997-46ba-9a0e-905c547895ff\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kxbq2" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.432191 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a71edd8a-18a6-4d93-a967-e8d0858a6220-plugins-dir\") pod \"csi-hostpathplugin-rm2zp\" (UID: \"a71edd8a-18a6-4d93-a967-e8d0858a6220\") " pod="hostpath-provisioner/csi-hostpathplugin-rm2zp" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.432226 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/9d08bcf7-f9b2-4ef1-b678-9e7d72ece89d-node-bootstrap-token\") pod \"machine-config-server-62724\" (UID: \"9d08bcf7-f9b2-4ef1-b678-9e7d72ece89d\") " pod="openshift-machine-config-operator/machine-config-server-62724" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.432647 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67faef84-d54f-4d10-8f6e-eb0702f53a25-config-volume\") pod \"dns-default-xnzx7\" (UID: \"67faef84-d54f-4d10-8f6e-eb0702f53a25\") " pod="openshift-dns/dns-default-xnzx7" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.433059 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5152d972-1997-46ba-9a0e-905c547895ff-config\") pod \"kube-apiserver-operator-766d6c64bb-kxbq2\" (UID: \"5152d972-1997-46ba-9a0e-905c547895ff\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kxbq2" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.433131 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/10beda09-a5fb-4526-9609-97ecb84ba8b9-profile-collector-cert\") pod \"olm-operator-6b444d44fb-2d6zv\" (UID: \"10beda09-a5fb-4526-9609-97ecb84ba8b9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2d6zv" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.433155 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/86230d76-ca5d-4040-9947-8c87ba16c8b0-proxy-tls\") pod \"machine-config-controller-84d6567774-qmwx4\" (UID: \"86230d76-ca5d-4040-9947-8c87ba16c8b0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qmwx4" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.433176 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a71edd8a-18a6-4d93-a967-e8d0858a6220-mountpoint-dir\") pod \"csi-hostpathplugin-rm2zp\" (UID: \"a71edd8a-18a6-4d93-a967-e8d0858a6220\") " pod="hostpath-provisioner/csi-hostpathplugin-rm2zp" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.433226 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5152d972-1997-46ba-9a0e-905c547895ff-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-kxbq2\" (UID: \"5152d972-1997-46ba-9a0e-905c547895ff\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kxbq2" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.433334 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55677\" (UniqueName: \"kubernetes.io/projected/86230d76-ca5d-4040-9947-8c87ba16c8b0-kube-api-access-55677\") pod \"machine-config-controller-84d6567774-qmwx4\" (UID: \"86230d76-ca5d-4040-9947-8c87ba16c8b0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qmwx4" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.433407 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9d08bcf7-f9b2-4ef1-b678-9e7d72ece89d-certs\") pod \"machine-config-server-62724\" (UID: \"9d08bcf7-f9b2-4ef1-b678-9e7d72ece89d\") " 
pod="openshift-machine-config-operator/machine-config-server-62724" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.433454 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/67faef84-d54f-4d10-8f6e-eb0702f53a25-metrics-tls\") pod \"dns-default-xnzx7\" (UID: \"67faef84-d54f-4d10-8f6e-eb0702f53a25\") " pod="openshift-dns/dns-default-xnzx7" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.433538 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a71edd8a-18a6-4d93-a967-e8d0858a6220-registration-dir\") pod \"csi-hostpathplugin-rm2zp\" (UID: \"a71edd8a-18a6-4d93-a967-e8d0858a6220\") " pod="hostpath-provisioner/csi-hostpathplugin-rm2zp" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.433620 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v22c\" (UniqueName: \"kubernetes.io/projected/25e268c5-a128-49ae-b42d-931529d85cc6-kube-api-access-4v22c\") pod \"ingress-canary-9gq7t\" (UID: \"25e268c5-a128-49ae-b42d-931529d85cc6\") " pod="openshift-ingress-canary/ingress-canary-9gq7t" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.433675 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a71edd8a-18a6-4d93-a967-e8d0858a6220-socket-dir\") pod \"csi-hostpathplugin-rm2zp\" (UID: \"a71edd8a-18a6-4d93-a967-e8d0858a6220\") " pod="hostpath-provisioner/csi-hostpathplugin-rm2zp" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.433693 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4t7m\" (UniqueName: \"kubernetes.io/projected/a71edd8a-18a6-4d93-a967-e8d0858a6220-kube-api-access-k4t7m\") pod \"csi-hostpathplugin-rm2zp\" (UID: \"a71edd8a-18a6-4d93-a967-e8d0858a6220\") " pod="hostpath-provisioner/csi-hostpathplugin-rm2zp" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.433747 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a31fc8e6-139a-4961-afc3-a42e999cfc32-signing-cabundle\") pod \"service-ca-9c57cc56f-cjsvb\" (UID: \"a31fc8e6-139a-4961-afc3-a42e999cfc32\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjsvb" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.433774 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4mn6\" (UniqueName: \"kubernetes.io/projected/67faef84-d54f-4d10-8f6e-eb0702f53a25-kube-api-access-w4mn6\") pod \"dns-default-xnzx7\" (UID: \"67faef84-d54f-4d10-8f6e-eb0702f53a25\") " pod="openshift-dns/dns-default-xnzx7" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.433801 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.433919 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" 
(UniqueName: \"kubernetes.io/host-path/a71edd8a-18a6-4d93-a967-e8d0858a6220-csi-data-dir\") pod \"csi-hostpathplugin-rm2zp\" (UID: \"a71edd8a-18a6-4d93-a967-e8d0858a6220\") " pod="hostpath-provisioner/csi-hostpathplugin-rm2zp" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.433954 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vch65\" (UniqueName: \"kubernetes.io/projected/9d08bcf7-f9b2-4ef1-b678-9e7d72ece89d-kube-api-access-vch65\") pod \"machine-config-server-62724\" (UID: \"9d08bcf7-f9b2-4ef1-b678-9e7d72ece89d\") " pod="openshift-machine-config-operator/machine-config-server-62724" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.433971 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89567\" (UniqueName: \"kubernetes.io/projected/a31fc8e6-139a-4961-afc3-a42e999cfc32-kube-api-access-89567\") pod \"service-ca-9c57cc56f-cjsvb\" (UID: \"a31fc8e6-139a-4961-afc3-a42e999cfc32\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjsvb" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.433987 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crf27\" (UniqueName: \"kubernetes.io/projected/10beda09-a5fb-4526-9609-97ecb84ba8b9-kube-api-access-crf27\") pod \"olm-operator-6b444d44fb-2d6zv\" (UID: \"10beda09-a5fb-4526-9609-97ecb84ba8b9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2d6zv" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.434071 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/25e268c5-a128-49ae-b42d-931529d85cc6-cert\") pod \"ingress-canary-9gq7t\" (UID: \"25e268c5-a128-49ae-b42d-931529d85cc6\") " pod="openshift-ingress-canary/ingress-canary-9gq7t" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.434147 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/86230d76-ca5d-4040-9947-8c87ba16c8b0-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-qmwx4\" (UID: \"86230d76-ca5d-4040-9947-8c87ba16c8b0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qmwx4" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.435094 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/a31fc8e6-139a-4961-afc3-a42e999cfc32-signing-key\") pod \"service-ca-9c57cc56f-cjsvb\" (UID: \"a31fc8e6-139a-4961-afc3-a42e999cfc32\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjsvb" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.436442 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9d08bcf7-f9b2-4ef1-b678-9e7d72ece89d-node-bootstrap-token\") pod \"machine-config-server-62724\" (UID: \"9d08bcf7-f9b2-4ef1-b678-9e7d72ece89d\") " pod="openshift-machine-config-operator/machine-config-server-62724" Dec 09 12:08:46 crc kubenswrapper[4970]: E1209 12:08:46.438012 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:46.937998565 +0000 UTC m=+139.498479616 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.438323 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/86230d76-ca5d-4040-9947-8c87ba16c8b0-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-qmwx4\" (UID: \"86230d76-ca5d-4040-9947-8c87ba16c8b0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qmwx4" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.439848 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/a31fc8e6-139a-4961-afc3-a42e999cfc32-signing-cabundle\") pod \"service-ca-9c57cc56f-cjsvb\" (UID: \"a31fc8e6-139a-4961-afc3-a42e999cfc32\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjsvb" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.440303 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5152d972-1997-46ba-9a0e-905c547895ff-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-kxbq2\" (UID: \"5152d972-1997-46ba-9a0e-905c547895ff\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kxbq2" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.440558 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/86230d76-ca5d-4040-9947-8c87ba16c8b0-proxy-tls\") pod \"machine-config-controller-84d6567774-qmwx4\" (UID: \"86230d76-ca5d-4040-9947-8c87ba16c8b0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qmwx4" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.440578 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/10beda09-a5fb-4526-9609-97ecb84ba8b9-profile-collector-cert\") pod \"olm-operator-6b444d44fb-2d6zv\" (UID: \"10beda09-a5fb-4526-9609-97ecb84ba8b9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2d6zv" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.440803 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9d08bcf7-f9b2-4ef1-b678-9e7d72ece89d-certs\") pod \"machine-config-server-62724\" (UID: \"9d08bcf7-f9b2-4ef1-b678-9e7d72ece89d\") " pod="openshift-machine-config-operator/machine-config-server-62724" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.441227 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5152d972-1997-46ba-9a0e-905c547895ff-config\") pod \"kube-apiserver-operator-766d6c64bb-kxbq2\" (UID: \"5152d972-1997-46ba-9a0e-905c547895ff\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kxbq2" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.442889 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/10beda09-a5fb-4526-9609-97ecb84ba8b9-srv-cert\") pod \"olm-operator-6b444d44fb-2d6zv\" (UID: \"10beda09-a5fb-4526-9609-97ecb84ba8b9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2d6zv" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.446644 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tgr6h" event={"ID":"932056a2-9503-407e-9502-6c90b5ebe586","Type":"ContainerStarted","Data":"4246170d9249092dcba1f162cdf1d4395b7e2b29a5c9d37b9c9c75804a3fe4ad"} Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.447376 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9vrft" event={"ID":"08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6","Type":"ContainerStarted","Data":"6e0351e0c0cd0416f164dbbd0423a1a112c3ad6197344160308621e63426e427"} Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.452101 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-zwlc4" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.454011 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8qz75" event={"ID":"aa8c9620-293f-4658-9c69-0b843396613b","Type":"ContainerStarted","Data":"657296c4026c3eb98f709f68d30925b69f2a9728d582ef438c51f37003b89e5c"} Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.457550 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-frbcj" event={"ID":"a96798ca-3ac0-4957-887c-763502663335","Type":"ContainerStarted","Data":"18ec60dce39ffa048610a96c27fa17b2a2d316d3e67c9d138870a4aa9f06d078"} Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.460305 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-dfqm8" event={"ID":"d2cc0b59-1828-4d02-b898-a260712479fa","Type":"ContainerStarted","Data":"1b9748b92f75caf7f1b14759ddf884c4bd8c0474ff46ad05f688eb7ee51b82f5"} Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.465041 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wzw99"] Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.466700 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-84hbp" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.474801 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5152d972-1997-46ba-9a0e-905c547895ff-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-kxbq2\" (UID: \"5152d972-1997-46ba-9a0e-905c547895ff\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kxbq2" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.497662 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55677\" (UniqueName: \"kubernetes.io/projected/86230d76-ca5d-4040-9947-8c87ba16c8b0-kube-api-access-55677\") pod \"machine-config-controller-84d6567774-qmwx4\" (UID: \"86230d76-ca5d-4040-9947-8c87ba16c8b0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qmwx4" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.498959 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r6fwb"] Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.511482 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xgfp5"] Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.515821 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89567\" (UniqueName: \"kubernetes.io/projected/a31fc8e6-139a-4961-afc3-a42e999cfc32-kube-api-access-89567\") pod \"service-ca-9c57cc56f-cjsvb\" (UID: \"a31fc8e6-139a-4961-afc3-a42e999cfc32\") " pod="openshift-service-ca/service-ca-9c57cc56f-cjsvb" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.522799 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9w89v" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.534706 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:46 crc kubenswrapper[4970]: E1209 12:08:46.534867 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:47.034840202 +0000 UTC m=+139.595321243 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.534926 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a71edd8a-18a6-4d93-a967-e8d0858a6220-csi-data-dir\") pod \"csi-hostpathplugin-rm2zp\" (UID: \"a71edd8a-18a6-4d93-a967-e8d0858a6220\") " pod="hostpath-provisioner/csi-hostpathplugin-rm2zp" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.534978 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/25e268c5-a128-49ae-b42d-931529d85cc6-cert\") pod \"ingress-canary-9gq7t\" (UID: \"25e268c5-a128-49ae-b42d-931529d85cc6\") " pod="openshift-ingress-canary/ingress-canary-9gq7t" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.535014 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a71edd8a-18a6-4d93-a967-e8d0858a6220-plugins-dir\") pod \"csi-hostpathplugin-rm2zp\" (UID: \"a71edd8a-18a6-4d93-a967-e8d0858a6220\") " pod="hostpath-provisioner/csi-hostpathplugin-rm2zp" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.535043 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67faef84-d54f-4d10-8f6e-eb0702f53a25-config-volume\") pod \"dns-default-xnzx7\" (UID: \"67faef84-d54f-4d10-8f6e-eb0702f53a25\") " pod="openshift-dns/dns-default-xnzx7" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.535069 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a71edd8a-18a6-4d93-a967-e8d0858a6220-mountpoint-dir\") pod \"csi-hostpathplugin-rm2zp\" (UID: \"a71edd8a-18a6-4d93-a967-e8d0858a6220\") " pod="hostpath-provisioner/csi-hostpathplugin-rm2zp" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.535092 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/67faef84-d54f-4d10-8f6e-eb0702f53a25-metrics-tls\") pod \"dns-default-xnzx7\" (UID: \"67faef84-d54f-4d10-8f6e-eb0702f53a25\") " pod="openshift-dns/dns-default-xnzx7" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.535114 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a71edd8a-18a6-4d93-a967-e8d0858a6220-registration-dir\") pod \"csi-hostpathplugin-rm2zp\" (UID: \"a71edd8a-18a6-4d93-a967-e8d0858a6220\") " pod="hostpath-provisioner/csi-hostpathplugin-rm2zp" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.535136 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v22c\" (UniqueName: \"kubernetes.io/projected/25e268c5-a128-49ae-b42d-931529d85cc6-kube-api-access-4v22c\") pod \"ingress-canary-9gq7t\" (UID: \"25e268c5-a128-49ae-b42d-931529d85cc6\") " pod="openshift-ingress-canary/ingress-canary-9gq7t" Dec 09 12:08:46 crc kubenswrapper[4970]: 
I1209 12:08:46.535152 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a71edd8a-18a6-4d93-a967-e8d0858a6220-socket-dir\") pod \"csi-hostpathplugin-rm2zp\" (UID: \"a71edd8a-18a6-4d93-a967-e8d0858a6220\") " pod="hostpath-provisioner/csi-hostpathplugin-rm2zp" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.535168 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4t7m\" (UniqueName: \"kubernetes.io/projected/a71edd8a-18a6-4d93-a967-e8d0858a6220-kube-api-access-k4t7m\") pod \"csi-hostpathplugin-rm2zp\" (UID: \"a71edd8a-18a6-4d93-a967-e8d0858a6220\") " pod="hostpath-provisioner/csi-hostpathplugin-rm2zp" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.535186 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4mn6\" (UniqueName: \"kubernetes.io/projected/67faef84-d54f-4d10-8f6e-eb0702f53a25-kube-api-access-w4mn6\") pod \"dns-default-xnzx7\" (UID: \"67faef84-d54f-4d10-8f6e-eb0702f53a25\") " pod="openshift-dns/dns-default-xnzx7" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.535208 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:46 crc kubenswrapper[4970]: E1209 12:08:46.535497 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:47.035486533 +0000 UTC m=+139.595967584 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.535649 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a71edd8a-18a6-4d93-a967-e8d0858a6220-registration-dir\") pod \"csi-hostpathplugin-rm2zp\" (UID: \"a71edd8a-18a6-4d93-a967-e8d0858a6220\") " pod="hostpath-provisioner/csi-hostpathplugin-rm2zp" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.535750 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a71edd8a-18a6-4d93-a967-e8d0858a6220-mountpoint-dir\") pod \"csi-hostpathplugin-rm2zp\" (UID: \"a71edd8a-18a6-4d93-a967-e8d0858a6220\") " pod="hostpath-provisioner/csi-hostpathplugin-rm2zp" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.535961 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a71edd8a-18a6-4d93-a967-e8d0858a6220-socket-dir\") pod \"csi-hostpathplugin-rm2zp\" (UID: \"a71edd8a-18a6-4d93-a967-e8d0858a6220\") " pod="hostpath-provisioner/csi-hostpathplugin-rm2zp" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.535978 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a71edd8a-18a6-4d93-a967-e8d0858a6220-plugins-dir\") pod \"csi-hostpathplugin-rm2zp\" (UID: \"a71edd8a-18a6-4d93-a967-e8d0858a6220\") " pod="hostpath-provisioner/csi-hostpathplugin-rm2zp" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.536069 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67faef84-d54f-4d10-8f6e-eb0702f53a25-config-volume\") pod \"dns-default-xnzx7\" (UID: \"67faef84-d54f-4d10-8f6e-eb0702f53a25\") " pod="openshift-dns/dns-default-xnzx7" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.537668 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crf27\" (UniqueName: \"kubernetes.io/projected/10beda09-a5fb-4526-9609-97ecb84ba8b9-kube-api-access-crf27\") pod \"olm-operator-6b444d44fb-2d6zv\" (UID: \"10beda09-a5fb-4526-9609-97ecb84ba8b9\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2d6zv" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.538456 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgssk"] Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.542392 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/67faef84-d54f-4d10-8f6e-eb0702f53a25-metrics-tls\") pod \"dns-default-xnzx7\" (UID: \"67faef84-d54f-4d10-8f6e-eb0702f53a25\") " pod="openshift-dns/dns-default-xnzx7" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.543005 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/25e268c5-a128-49ae-b42d-931529d85cc6-cert\") pod 
\"ingress-canary-9gq7t\" (UID: \"25e268c5-a128-49ae-b42d-931529d85cc6\") " pod="openshift-ingress-canary/ingress-canary-9gq7t" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.543077 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hhcfv"] Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.543146 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a71edd8a-18a6-4d93-a967-e8d0858a6220-csi-data-dir\") pod \"csi-hostpathplugin-rm2zp\" (UID: \"a71edd8a-18a6-4d93-a967-e8d0858a6220\") " pod="hostpath-provisioner/csi-hostpathplugin-rm2zp" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.549419 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8txkd"] Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.554430 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qmvms" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.555966 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-gdjbp"] Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.557859 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vch65\" (UniqueName: \"kubernetes.io/projected/9d08bcf7-f9b2-4ef1-b678-9e7d72ece89d-kube-api-access-vch65\") pod \"machine-config-server-62724\" (UID: \"9d08bcf7-f9b2-4ef1-b678-9e7d72ece89d\") " pod="openshift-machine-config-operator/machine-config-server-62724" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.571098 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-62724" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.623393 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421360-vvgqb"] Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.630212 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s58cx"] Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.630584 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qmwx4" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.632184 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-74sms"] Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.633695 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-cjsvb" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.635813 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:46 crc kubenswrapper[4970]: E1209 12:08:46.636070 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:47.136035758 +0000 UTC m=+139.696516809 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.636203 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:46 crc kubenswrapper[4970]: E1209 12:08:46.636586 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:47.136574416 +0000 UTC m=+139.697055537 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.647838 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kxbq2" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.737844 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:46 crc kubenswrapper[4970]: E1209 12:08:46.738131 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-09 12:08:47.238100762 +0000 UTC m=+139.798581823 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.738303 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:46 crc kubenswrapper[4970]: E1209 12:08:46.738646 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:47.238635239 +0000 UTC m=+139.799116350 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.812952 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2d6zv" Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.839138 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:46 crc kubenswrapper[4970]: E1209 12:08:46.839330 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:47.339305228 +0000 UTC m=+139.899786279 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.839500 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:46 crc kubenswrapper[4970]: E1209 12:08:46.839889 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:47.339873737 +0000 UTC m=+139.900354858 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.940574 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:46 crc kubenswrapper[4970]: E1209 12:08:46.940757 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:47.440732562 +0000 UTC m=+140.001213613 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:46 crc kubenswrapper[4970]: I1209 12:08:46.940850 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:46 crc kubenswrapper[4970]: E1209 12:08:46.941219 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:47.441209207 +0000 UTC m=+140.001690258 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:47 crc kubenswrapper[4970]: I1209 12:08:47.042663 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:47 crc kubenswrapper[4970]: E1209 12:08:47.042832 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:47.542811716 +0000 UTC m=+140.103292767 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:47 crc kubenswrapper[4970]: I1209 12:08:47.043317 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:47 crc kubenswrapper[4970]: E1209 12:08:47.043628 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:47.543615952 +0000 UTC m=+140.104097003 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:47 crc kubenswrapper[4970]: I1209 12:08:47.144860 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:47 crc kubenswrapper[4970]: E1209 12:08:47.145083 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:47.645057156 +0000 UTC m=+140.205538247 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:47 crc kubenswrapper[4970]: I1209 12:08:47.246358 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:47 crc kubenswrapper[4970]: E1209 12:08:47.246875 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:47.74682679 +0000 UTC m=+140.307307871 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:47 crc kubenswrapper[4970]: I1209 12:08:47.347723 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:47 crc kubenswrapper[4970]: E1209 12:08:47.348023 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:47.847993205 +0000 UTC m=+140.408474276 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:47 crc kubenswrapper[4970]: I1209 12:08:47.348162 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:47 crc kubenswrapper[4970]: E1209 12:08:47.348488 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:47.848474501 +0000 UTC m=+140.408955552 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:47 crc kubenswrapper[4970]: I1209 12:08:47.449056 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:47 crc kubenswrapper[4970]: E1209 12:08:47.449216 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:47.949183411 +0000 UTC m=+140.509664492 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:47 crc kubenswrapper[4970]: I1209 12:08:47.449566 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:47 crc kubenswrapper[4970]: E1209 12:08:47.449932 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:47.949916245 +0000 UTC m=+140.510397336 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:47 crc kubenswrapper[4970]: I1209 12:08:47.550692 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:47 crc kubenswrapper[4970]: E1209 12:08:47.550886 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:48.050861483 +0000 UTC m=+140.611342534 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:47 crc kubenswrapper[4970]: I1209 12:08:47.551510 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:47 crc kubenswrapper[4970]: E1209 12:08:47.551909 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:48.051897896 +0000 UTC m=+140.612378957 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:47 crc kubenswrapper[4970]: I1209 12:08:47.652564 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:47 crc kubenswrapper[4970]: E1209 12:08:47.652787 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:48.152756981 +0000 UTC m=+140.713238042 (durationBeforeRetry 500ms). 
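[Editor's note] Note the cadence in the entries above: reconciler passes arrive roughly every 100ms, but each failed operation is stamped "No retries permitted until" a point 500ms ahead, so most passes are rejected by the pending-operations table rather than re-dialing the driver. A sketch of that gate, with key and field names that are illustrative, not kubelet's:

// Hedged sketch of the retry gate implied by "(durationBeforeRetry 500ms)".
// The log keys operations as {volumeName:... podName:... nodeName:}; TearDown
// carries the pod UID while MountDevice has an empty podName, so the two
// failures back off independently.
package main

import (
	"fmt"
	"time"
)

type opKey struct{ volumeName, podName string }

type pendingOp struct {
	lastErrorAt time.Time
	backoff     time.Duration // 500ms in the entries above
}

func (p pendingOp) mayRetry(now time.Time) error {
	if next := p.lastErrorAt.Add(p.backoff); now.Before(next) {
		return fmt.Errorf("no retries permitted until %s (durationBeforeRetry %s)",
			next.Format(time.RFC3339Nano), p.backoff)
	}
	return nil
}

func main() {
	pending := map[opKey]pendingOp{
		{volumeName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8", podName: "8f668bae-612b-4b75-9490-919e737c6a3b"}: {time.Now(), 500 * time.Millisecond},
		{volumeName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8", podName: ""}:                                     {time.Now(), 500 * time.Millisecond},
	}
	// A reconciler pass ~100ms later is gated for both keys:
	for key, op := range pending {
		if err := op.mayRetry(time.Now().Add(100 * time.Millisecond)); err != nil {
			fmt.Printf("key=%+v: %v\n", key, err)
		}
	}
}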
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:47 crc kubenswrapper[4970]: I1209 12:08:47.652887 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:47 crc kubenswrapper[4970]: E1209 12:08:47.653348 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:48.153328989 +0000 UTC m=+140.713810080 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:47 crc kubenswrapper[4970]: I1209 12:08:47.669106 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v22c\" (UniqueName: \"kubernetes.io/projected/25e268c5-a128-49ae-b42d-931529d85cc6-kube-api-access-4v22c\") pod \"ingress-canary-9gq7t\" (UID: \"25e268c5-a128-49ae-b42d-931529d85cc6\") " pod="openshift-ingress-canary/ingress-canary-9gq7t" Dec 09 12:08:47 crc kubenswrapper[4970]: I1209 12:08:47.673037 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4t7m\" (UniqueName: \"kubernetes.io/projected/a71edd8a-18a6-4d93-a967-e8d0858a6220-kube-api-access-k4t7m\") pod \"csi-hostpathplugin-rm2zp\" (UID: \"a71edd8a-18a6-4d93-a967-e8d0858a6220\") " pod="hostpath-provisioner/csi-hostpathplugin-rm2zp" Dec 09 12:08:47 crc kubenswrapper[4970]: I1209 12:08:47.674638 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4mn6\" (UniqueName: \"kubernetes.io/projected/67faef84-d54f-4d10-8f6e-eb0702f53a25-kube-api-access-w4mn6\") pod \"dns-default-xnzx7\" (UID: \"67faef84-d54f-4d10-8f6e-eb0702f53a25\") " pod="openshift-dns/dns-default-xnzx7" Dec 09 12:08:47 crc kubenswrapper[4970]: W1209 12:08:47.689547 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a0f2c3d_f2dc_4414_8b20_da75dd6e50ca.slice/crio-9d874f96caa64884dcb69fb94a6f7d68568ca17e97ecda380a5f0dc691ba6f47 WatchSource:0}: Error finding container 9d874f96caa64884dcb69fb94a6f7d68568ca17e97ecda380a5f0dc691ba6f47: Status 404 returned error can't find the container with id 9d874f96caa64884dcb69fb94a6f7d68568ca17e97ecda380a5f0dc691ba6f47 Dec 09 12:08:47 crc kubenswrapper[4970]: W1209 12:08:47.697458 4970 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8828226_d424_4df9_b753_d3af05d4f07e.slice/crio-baa7d45ff5a4f67bf6ead4a0f132a9d632deaf37ec56966bb1336a852d6832a9 WatchSource:0}: Error finding container baa7d45ff5a4f67bf6ead4a0f132a9d632deaf37ec56966bb1336a852d6832a9: Status 404 returned error can't find the container with id baa7d45ff5a4f67bf6ead4a0f132a9d632deaf37ec56966bb1336a852d6832a9 Dec 09 12:08:47 crc kubenswrapper[4970]: I1209 12:08:47.754063 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:47 crc kubenswrapper[4970]: E1209 12:08:47.754958 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:48.254938519 +0000 UTC m=+140.815419570 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:47 crc kubenswrapper[4970]: I1209 12:08:47.855870 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qnl9t"] Dec 09 12:08:47 crc kubenswrapper[4970]: I1209 12:08:47.856114 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:47 crc kubenswrapper[4970]: E1209 12:08:47.856548 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:48.356534558 +0000 UTC m=+140.917015609 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:47 crc kubenswrapper[4970]: I1209 12:08:47.870181 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-xnzx7" Dec 09 12:08:47 crc kubenswrapper[4970]: I1209 12:08:47.880892 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-9gq7t" Dec 09 12:08:47 crc kubenswrapper[4970]: I1209 12:08:47.901618 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-rm2zp" Dec 09 12:08:47 crc kubenswrapper[4970]: W1209 12:08:47.936818 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85e77654_6af7_4c35_bf92_f7c67d49dd2d.slice/crio-928ffc4187ac0cc9fa5b9cd49ca6f0748e41bfa4ca5c8f1d16a5960d166016f2 WatchSource:0}: Error finding container 928ffc4187ac0cc9fa5b9cd49ca6f0748e41bfa4ca5c8f1d16a5960d166016f2: Status 404 returned error can't find the container with id 928ffc4187ac0cc9fa5b9cd49ca6f0748e41bfa4ca5c8f1d16a5960d166016f2 Dec 09 12:08:47 crc kubenswrapper[4970]: I1209 12:08:47.943662 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5b99"] Dec 09 12:08:47 crc kubenswrapper[4970]: I1209 12:08:47.956842 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:47 crc kubenswrapper[4970]: E1209 12:08:47.957147 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:48.457131975 +0000 UTC m=+141.017613026 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:47 crc kubenswrapper[4970]: I1209 12:08:47.991074 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jl2n2"] Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.057842 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:48 crc kubenswrapper[4970]: E1209 12:08:48.058146 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:48.558134164 +0000 UTC m=+141.118615215 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.090636 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8lbhv"] Dec 09 12:08:48 crc kubenswrapper[4970]: W1209 12:08:48.090637 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5af52df_627e_4b20_a29e_3f66fbafe5b6.slice/crio-eaf722c89a707a110fe71a726767f56cb6abfe282cf905eae66afc0f9c966ed9 WatchSource:0}: Error finding container eaf722c89a707a110fe71a726767f56cb6abfe282cf905eae66afc0f9c966ed9: Status 404 returned error can't find the container with id eaf722c89a707a110fe71a726767f56cb6abfe282cf905eae66afc0f9c966ed9 Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.158572 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:48 crc kubenswrapper[4970]: E1209 12:08:48.158762 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:48.658739741 +0000 UTC m=+141.219220782 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.158814 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:48 crc kubenswrapper[4970]: E1209 12:08:48.159154 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:48.659147244 +0000 UTC m=+141.219628295 (durationBeforeRetry 500ms). 
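[Editor's note] The manager.go warnings interleaved above ("Failed to process watch event ... Status 404") are a different and typically benign race: a cgroup watch fires for a freshly created crio-... container before the runtime can resolve that ID, so the handler drops the event and relies on a later one. A hedged sketch of the pattern (names are illustrative, not cadvisor's API):

// Hedged sketch of the watch-event race: lookup by container ID fails with a
// 404-style error until the runtime has registered the new container.
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("Status 404 returned error can't find the container with id")

func lookupContainer(known map[string]bool, id string) error {
	if !known[id] {
		return fmt.Errorf("%w %s", errNotFound, id)
	}
	return nil
}

func main() {
	known := map[string]bool{} // runtime hasn't reported the container yet
	id := "9d874f96caa64884dcb69fb94a6f7d68568ca17e97ecda380a5f0dc691ba6f47"
	if err := lookupContainer(known, id); err != nil {
		fmt.Println("W: Failed to process watch event:", err) // resolved by a later event
	}
}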
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.261079 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:48 crc kubenswrapper[4970]: E1209 12:08:48.263692 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:48.763662036 +0000 UTC m=+141.324143087 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.367379 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:48 crc kubenswrapper[4970]: E1209 12:08:48.368060 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:48.868021833 +0000 UTC m=+141.428502884 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.372840 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-l27ln"] Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.391819 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fctsp"] Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.468487 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:48 crc kubenswrapper[4970]: E1209 12:08:48.468641 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:48.968596219 +0000 UTC m=+141.529077270 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.468817 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:48 crc kubenswrapper[4970]: E1209 12:08:48.469373 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:48.969354883 +0000 UTC m=+141.529835934 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.475919 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tgr6h" event={"ID":"932056a2-9503-407e-9502-6c90b5ebe586","Type":"ContainerStarted","Data":"861b1f7f671b6bbf3ca0bd2455984bcc90776d632692ce3b13d19aa8e76c2585"} Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.571732 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:48 crc kubenswrapper[4970]: E1209 12:08:48.571938 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:49.071902643 +0000 UTC m=+141.632383694 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.572456 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:48 crc kubenswrapper[4970]: E1209 12:08:48.572831 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:49.072816222 +0000 UTC m=+141.633297273 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.572936 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4bx5d" event={"ID":"1dced0a5-18f4-4cf6-b497-1d8dad926744","Type":"ContainerStarted","Data":"0ff82519db8ea4f9009f1d602f2388d5f5bfbb4357608803f5c661fde8dde93b"} Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.572988 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4bx5d" event={"ID":"1dced0a5-18f4-4cf6-b497-1d8dad926744","Type":"ContainerStarted","Data":"51df96a2af44185c52e8197f3af37d52bf62ee4df4dfcaec035f6235c26bee17"} Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.573138 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-4bx5d" Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.581194 4970 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-4bx5d container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.581272 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-4bx5d" podUID="1dced0a5-18f4-4cf6-b497-1d8dad926744" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.590609 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-74sms" event={"ID":"6049a17c-c44f-4d71-a42b-7be7a525fa90","Type":"ContainerStarted","Data":"16f04c108e025e261c6bea879e503baa1b12f1b3a10d99fc5396eb5e93a709bd"} Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.594062 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" event={"ID":"71161079-7313-4f68-b716-a4650e0af898","Type":"ContainerStarted","Data":"ee962df48d475c43f3cec9366a8a618f2b52ba156c229ebe6465297ac37864d1"} Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.605193 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r6fwb" event={"ID":"f7b83f2f-f485-4b2f-9fc8-87581c835972","Type":"ContainerStarted","Data":"f3dde33bc8f2b963dbcc25a4b406c6c42492a8cf4b93373da8b0bc2d4dbce75b"} Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.607139 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s58cx" event={"ID":"c5397d37-40af-40bf-9bd4-b8a267c3c7b1","Type":"ContainerStarted","Data":"c2a710f2077261ca8ffcba2322d8fe1def55e8d02ecdaebb4f24a2bff5c342a7"} Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.608776 4970 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-zwlc4" event={"ID":"c115cc0e-4491-4b7b-83f3-349433ce3b08","Type":"ContainerStarted","Data":"4888f23c33438a42822e0b4e48bea8a2680f745e03bc9bbd3264f7963ac0fcb4"} Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.609763 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xgfp5" event={"ID":"2becf945-4a01-49fd-a2c5-788632898a32","Type":"ContainerStarted","Data":"9650a394d3a425e2011c9b90eb40b29ee4d33f59ffc16b2d861b629343f43b7a"} Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.610782 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-vvgqb" event={"ID":"96aedd83-c6c1-4b08-8d47-43cd63aaae68","Type":"ContainerStarted","Data":"c040d809744b7f9c3234a3efcf961b4d89695083c88cb9d4b7b01c5085350dc4"} Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.611952 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-dfqm8" event={"ID":"d2cc0b59-1828-4d02-b898-a260712479fa","Type":"ContainerStarted","Data":"aed6429281ec7787ac9f0bdf229a22fcb8bd335817d28f9f5f9e42d57ecaf1b1"} Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.612687 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-dfqm8" Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.615582 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jl2n2" event={"ID":"5156ea95-86b1-44f3-979a-87fd807945c7","Type":"ContainerStarted","Data":"50884e549e28b878107f97dd52eda78cae62014ea0395f305004a0f357972df6"} Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.617436 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-62724" event={"ID":"9d08bcf7-f9b2-4ef1-b678-9e7d72ece89d","Type":"ContainerStarted","Data":"1f7586dafd6bb2f398c5b4b15b7adef4b497d8ced5911ea11df4893791c1c8b1"} Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.627727 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wzw99" event={"ID":"c8828226-d424-4df9-b753-d3af05d4f07e","Type":"ContainerStarted","Data":"baa7d45ff5a4f67bf6ead4a0f132a9d632deaf37ec56966bb1336a852d6832a9"} Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.630169 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-frbcj" event={"ID":"a96798ca-3ac0-4957-887c-763502663335","Type":"ContainerStarted","Data":"dec3e8c3e0a66dc699d904e8e458459ac69b9c3834d96bf6c471c2282121470a"} Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.630767 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-frbcj" Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.631743 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" event={"ID":"2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca","Type":"ContainerStarted","Data":"9d874f96caa64884dcb69fb94a6f7d68568ca17e97ecda380a5f0dc691ba6f47"} Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.636239 4970 patch_prober.go:28] interesting pod/downloads-7954f5f757-frbcj container/download-server namespace/openshift-console: Readiness probe 
status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.636329 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-frbcj" podUID="a96798ca-3ac0-4957-887c-763502663335" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.639160 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-4bx5d" podStartSLOduration=121.639134926 podStartE2EDuration="2m1.639134926s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:48.598959425 +0000 UTC m=+141.159440476" watchObservedRunningTime="2025-12-09 12:08:48.639134926 +0000 UTC m=+141.199615977" Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.640579 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hhcfv" event={"ID":"a51c80aa-c8f8-4ddf-89e7-fae8e5a7ecc8","Type":"ContainerStarted","Data":"26628ca64ab58f2e647d900b4f2c8deab9fe11d329e7b0ecaa64af792ba8a65d"} Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.640792 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hhcfv" event={"ID":"a51c80aa-c8f8-4ddf-89e7-fae8e5a7ecc8","Type":"ContainerStarted","Data":"b85069a08cefbd5972382ee7aeec6873b76ecf8c4952c68e6cda290655e6348d"} Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.653742 4970 generic.go:334] "Generic (PLEG): container finished" podID="08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6" containerID="d565cb32b09bb290fcef26aa997b51db6cd920597fd62e86e3c57ce9ff4576d1" exitCode=0 Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.654041 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9vrft" event={"ID":"08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6","Type":"ContainerDied","Data":"d565cb32b09bb290fcef26aa997b51db6cd920597fd62e86e3c57ce9ff4576d1"} Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.662028 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-dfqm8" podStartSLOduration=121.662002935 podStartE2EDuration="2m1.662002935s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:48.640595793 +0000 UTC m=+141.201076854" watchObservedRunningTime="2025-12-09 12:08:48.662002935 +0000 UTC m=+141.222483976" Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.669023 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8lbhv" event={"ID":"8709261f-420d-4fee-908c-a7e1074959cb","Type":"ContainerStarted","Data":"dfcaf533b2ed5b9d3b18b5c18bf1e9b6a675d66c45e34395004cc4174b44bbdf"} Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.671141 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5b99" 
event={"ID":"b5af52df-627e-4b20-a29e-3f66fbafe5b6","Type":"ContainerStarted","Data":"eaf722c89a707a110fe71a726767f56cb6abfe282cf905eae66afc0f9c966ed9"} Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.674360 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.676754 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qnl9t" event={"ID":"85e77654-6af7-4c35-bf92-f7c67d49dd2d","Type":"ContainerStarted","Data":"928ffc4187ac0cc9fa5b9cd49ca6f0748e41bfa4ca5c8f1d16a5960d166016f2"} Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.678690 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-frbcj" podStartSLOduration=121.678670806 podStartE2EDuration="2m1.678670806s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:48.659725632 +0000 UTC m=+141.220206683" watchObservedRunningTime="2025-12-09 12:08:48.678670806 +0000 UTC m=+141.239151857" Dec 09 12:08:48 crc kubenswrapper[4970]: E1209 12:08:48.679754 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:49.1797356 +0000 UTC m=+141.740216651 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.690605 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgssk" event={"ID":"95531044-b9d4-4231-ac7c-be2850f2cbfd","Type":"ContainerStarted","Data":"0e1506fdc57bec566ed47f3124d42ad997fc0adc331d43b82719ce89474cfb5b"} Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.690810 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgssk" Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.693719 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gdjbp" event={"ID":"30c5a311-d7ec-49f9-9de2-535aed0af228","Type":"ContainerStarted","Data":"a98683252703cd388d3b14badb10f1031ad6886c0dd12d3f68585dda4fc21e03"} Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.697578 4970 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-qgssk container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.697617 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgssk" podUID="95531044-b9d4-4231-ac7c-be2850f2cbfd" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.702400 4970 generic.go:334] "Generic (PLEG): container finished" podID="aa8c9620-293f-4658-9c69-0b843396613b" containerID="c60f1d0d317466d77576bb325e30baf68add908c686d73fdac3ad408aaf75c52" exitCode=0 Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.702444 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8qz75" event={"ID":"aa8c9620-293f-4658-9c69-0b843396613b","Type":"ContainerDied","Data":"c60f1d0d317466d77576bb325e30baf68add908c686d73fdac3ad408aaf75c52"} Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.720417 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgssk" podStartSLOduration=120.720379966 podStartE2EDuration="2m0.720379966s" podCreationTimestamp="2025-12-09 12:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:48.712189225 +0000 UTC m=+141.272670286" watchObservedRunningTime="2025-12-09 12:08:48.720379966 +0000 UTC m=+141.280861007" Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.798120 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:48 crc kubenswrapper[4970]: E1209 12:08:48.799978 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:49.299964023 +0000 UTC m=+141.860445064 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.827344 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-dfqm8" Dec 09 12:08:48 crc kubenswrapper[4970]: I1209 12:08:48.902923 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:48 crc kubenswrapper[4970]: E1209 12:08:48.903828 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:49.403812664 +0000 UTC m=+141.964293715 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:49 crc kubenswrapper[4970]: I1209 12:08:49.006839 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:49 crc kubenswrapper[4970]: E1209 12:08:49.007172 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:49.507160469 +0000 UTC m=+142.067641520 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:49 crc kubenswrapper[4970]: I1209 12:08:49.108979 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:49 crc kubenswrapper[4970]: E1209 12:08:49.109502 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:49.609481571 +0000 UTC m=+142.169962632 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:49 crc kubenswrapper[4970]: I1209 12:08:49.109859 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:49 crc kubenswrapper[4970]: E1209 12:08:49.110290 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:49.610239065 +0000 UTC m=+142.170720116 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:49 crc kubenswrapper[4970]: I1209 12:08:49.210920 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:49 crc kubenswrapper[4970]: E1209 12:08:49.211205 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:49.711151702 +0000 UTC m=+142.271632783 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:49 crc kubenswrapper[4970]: I1209 12:08:49.211420 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:49 crc kubenswrapper[4970]: E1209 12:08:49.211741 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:49.71172514 +0000 UTC m=+142.272206191 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:49 crc kubenswrapper[4970]: I1209 12:08:49.313307 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:49 crc kubenswrapper[4970]: E1209 12:08:49.314026 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:49.81398645 +0000 UTC m=+142.374467501 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:49 crc kubenswrapper[4970]: I1209 12:08:49.395466 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6g6xm"] Dec 09 12:08:49 crc kubenswrapper[4970]: I1209 12:08:49.395526 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-87shv"] Dec 09 12:08:49 crc kubenswrapper[4970]: I1209 12:08:49.413191 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jjr45"] Dec 09 12:08:49 crc kubenswrapper[4970]: I1209 12:08:49.417338 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-n9dr5"] Dec 09 12:08:49 crc kubenswrapper[4970]: I1209 12:08:49.418697 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-25m9v"] Dec 09 12:08:49 crc kubenswrapper[4970]: I1209 12:08:49.421120 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:49 crc kubenswrapper[4970]: E1209 12:08:49.430973 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:49.930955789 +0000 UTC m=+142.491436840 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:49 crc kubenswrapper[4970]: I1209 12:08:49.487011 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-84hbp"] Dec 09 12:08:49 crc kubenswrapper[4970]: I1209 12:08:49.493007 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qmvms"] Dec 09 12:08:49 crc kubenswrapper[4970]: I1209 12:08:49.540751 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:49 crc kubenswrapper[4970]: E1209 12:08:49.541157 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:50.041138562 +0000 UTC m=+142.601619613 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:49 crc kubenswrapper[4970]: I1209 12:08:49.614116 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kxbq2"] Dec 09 12:08:49 crc kubenswrapper[4970]: I1209 12:08:49.641813 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:49 crc kubenswrapper[4970]: E1209 12:08:49.642172 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:50.142162403 +0000 UTC m=+142.702643454 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:49 crc kubenswrapper[4970]: I1209 12:08:49.642374 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-cjsvb"] Dec 09 12:08:49 crc kubenswrapper[4970]: I1209 12:08:49.660915 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-qmwx4"] Dec 09 12:08:49 crc kubenswrapper[4970]: I1209 12:08:49.671916 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9w89v"] Dec 09 12:08:49 crc kubenswrapper[4970]: I1209 12:08:49.682751 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-9gq7t"] Dec 09 12:08:49 crc kubenswrapper[4970]: I1209 12:08:49.684062 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2d6zv"] Dec 09 12:08:49 crc kubenswrapper[4970]: I1209 12:08:49.686489 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-wqfw9"] Dec 09 12:08:49 crc kubenswrapper[4970]: I1209 12:08:49.691929 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-xnzx7"] Dec 09 12:08:49 crc kubenswrapper[4970]: I1209 12:08:49.743871 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:49 crc kubenswrapper[4970]: E1209 12:08:49.744231 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:50.244216576 +0000 UTC m=+142.804697627 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:49 crc kubenswrapper[4970]: I1209 12:08:49.758560 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rm2zp"] Dec 09 12:08:49 crc kubenswrapper[4970]: I1209 12:08:49.759059 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gdjbp" event={"ID":"30c5a311-d7ec-49f9-9de2-535aed0af228","Type":"ContainerStarted","Data":"c7754ffcc17f88f49560e4a3f8fe5a0a23c7eb91cd62054a849f10735d39b4d8"} Dec 09 12:08:49 crc kubenswrapper[4970]: I1209 12:08:49.769972 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-87shv" event={"ID":"cd84ab53-f698-428b-b831-cf6605d01965","Type":"ContainerStarted","Data":"779e403218a347b24704993280eedc40ceb996f0c9e7ecca7c09f783740e8a0b"} Dec 09 12:08:49 crc kubenswrapper[4970]: I1209 12:08:49.786995 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r6fwb" event={"ID":"f7b83f2f-f485-4b2f-9fc8-87581c835972","Type":"ContainerStarted","Data":"7e3943b1028d983734b3c06b4079426f50f8c833fe0eba18799dd39a7519abb6"} Dec 09 12:08:49 crc kubenswrapper[4970]: I1209 12:08:49.798663 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-84hbp" event={"ID":"9a65f95f-2f95-4750-a78a-08354df01f6d","Type":"ContainerStarted","Data":"e35234db9dead8f085f5daf68fb10b0068e9d4ac6871b73e2314c52138a31323"} Dec 09 12:08:49 crc kubenswrapper[4970]: I1209 12:08:49.847483 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:49 crc kubenswrapper[4970]: E1209 12:08:49.847904 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:50.347888351 +0000 UTC m=+142.908369402 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:49 crc kubenswrapper[4970]: I1209 12:08:49.889025 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-25m9v" event={"ID":"fcbeabb7-793d-476c-98f3-e559f7342f1f","Type":"ContainerStarted","Data":"a7bb123e0ac4321051629ac18bcf6badc9f57fcb0ea1b52318cdd0d50116f07d"} Dec 09 12:08:49 crc kubenswrapper[4970]: I1209 12:08:49.889060 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-74sms" event={"ID":"6049a17c-c44f-4d71-a42b-7be7a525fa90","Type":"ContainerStarted","Data":"26d0fdf60fd241e00cc38fe7d8d80a70c8cf4cb8c741fc118fd7637c3964b1e4"} Dec 09 12:08:49 crc kubenswrapper[4970]: I1209 12:08:49.922861 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-vvgqb" event={"ID":"96aedd83-c6c1-4b08-8d47-43cd63aaae68","Type":"ContainerStarted","Data":"05f541831293b31c7d001d88165312eb259ea66b064296a1f5986692ed85fcba"} Dec 09 12:08:49 crc kubenswrapper[4970]: I1209 12:08:49.948279 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:49 crc kubenswrapper[4970]: E1209 12:08:49.949426 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:50.449411688 +0000 UTC m=+143.009892739 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:49 crc kubenswrapper[4970]: I1209 12:08:49.950933 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-74sms" podStartSLOduration=122.950918536 podStartE2EDuration="2m2.950918536s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:49.931377483 +0000 UTC m=+142.491858544" watchObservedRunningTime="2025-12-09 12:08:49.950918536 +0000 UTC m=+142.511399587" Dec 09 12:08:49 crc kubenswrapper[4970]: I1209 12:08:49.951513 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-vvgqb" podStartSLOduration=122.951506715 podStartE2EDuration="2m2.951506715s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:49.948095526 +0000 UTC m=+142.508576577" watchObservedRunningTime="2025-12-09 12:08:49.951506715 +0000 UTC m=+142.511987776" Dec 09 12:08:49 crc kubenswrapper[4970]: I1209 12:08:49.991098 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wzw99" event={"ID":"c8828226-d424-4df9-b753-d3af05d4f07e","Type":"ContainerStarted","Data":"82518b79e5e8898f2ec2531db73f6c31e36874cad2162ff119f11a04d107ed43"} Dec 09 12:08:50 crc kubenswrapper[4970]: I1209 12:08:50.014459 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6g6xm" event={"ID":"94df2ee0-4918-4bb9-b11f-4da797c1376f","Type":"ContainerStarted","Data":"af74d0982a1993fbcc7fc82c413e02698bb8ee2586760617e0c2cfa470397890"} Dec 09 12:08:50 crc kubenswrapper[4970]: I1209 12:08:50.050115 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:50 crc kubenswrapper[4970]: E1209 12:08:50.051086 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:50.551070509 +0000 UTC m=+143.111551560 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:50 crc kubenswrapper[4970]: I1209 12:08:50.052118 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-9gq7t" event={"ID":"25e268c5-a128-49ae-b42d-931529d85cc6","Type":"ContainerStarted","Data":"55e438e23d60cd16228839e8ec2ffef447769aba7116f3b78a3a20fd2a77bfd0"} Dec 09 12:08:50 crc kubenswrapper[4970]: I1209 12:08:50.058864 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qmvms" event={"ID":"de05fec3-6b27-4c6f-9d72-ed1a35954ba1","Type":"ContainerStarted","Data":"ab9d60c49e3f8638e95cd61189b0e2a02ce2cd679cd01e96655f3c6254444e9e"} Dec 09 12:08:50 crc kubenswrapper[4970]: I1209 12:08:50.063174 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kxbq2" event={"ID":"5152d972-1997-46ba-9a0e-905c547895ff","Type":"ContainerStarted","Data":"217c951efcf62686472bd89123182e42c5b892c20784ba0759376c23a9cd7d90"} Dec 09 12:08:50 crc kubenswrapper[4970]: I1209 12:08:50.077189 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-n9dr5" event={"ID":"efd00913-5fd9-4268-b753-a449b7d25a16","Type":"ContainerStarted","Data":"9a28fd054fd52e729bc36349a7da1f5e4afa2b2fe0c81a4008683b99c8f047cb"} Dec 09 12:08:50 crc kubenswrapper[4970]: I1209 12:08:50.093959 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-62724" event={"ID":"9d08bcf7-f9b2-4ef1-b678-9e7d72ece89d","Type":"ContainerStarted","Data":"70f00b95c4583b7d12f0725aaa85565d4873e830172f139dcad76cb7ac9379f4"} Dec 09 12:08:50 crc kubenswrapper[4970]: I1209 12:08:50.106876 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s58cx" event={"ID":"c5397d37-40af-40bf-9bd4-b8a267c3c7b1","Type":"ContainerStarted","Data":"e730f64deca95671980094da82121866b99f60aa6cf403ffc86f91799b577520"} Dec 09 12:08:50 crc kubenswrapper[4970]: I1209 12:08:50.109918 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xgfp5" event={"ID":"2becf945-4a01-49fd-a2c5-788632898a32","Type":"ContainerStarted","Data":"bddc03024091d02cf2042ffd6e310e23c74a2a60246007bdb7e577fbd456bfd8"} Dec 09 12:08:50 crc kubenswrapper[4970]: I1209 12:08:50.110846 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-xgfp5" Dec 09 12:08:50 crc kubenswrapper[4970]: I1209 12:08:50.122166 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hhcfv" event={"ID":"a51c80aa-c8f8-4ddf-89e7-fae8e5a7ecc8","Type":"ContainerStarted","Data":"ce8911781deefc15fa15662a02047535e7f67d62e4a0aef2997d4f57341d733e"} Dec 09 12:08:50 crc kubenswrapper[4970]: I1209 12:08:50.124439 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-config-operator/machine-config-server-62724" podStartSLOduration=7.124417507 podStartE2EDuration="7.124417507s" podCreationTimestamp="2025-12-09 12:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:50.121557436 +0000 UTC m=+142.682038487" watchObservedRunningTime="2025-12-09 12:08:50.124417507 +0000 UTC m=+142.684898558" Dec 09 12:08:50 crc kubenswrapper[4970]: I1209 12:08:50.125635 4970 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-xgfp5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Dec 09 12:08:50 crc kubenswrapper[4970]: I1209 12:08:50.125671 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-xgfp5" podUID="2becf945-4a01-49fd-a2c5-788632898a32" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Dec 09 12:08:50 crc kubenswrapper[4970]: I1209 12:08:50.131165 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l27ln" event={"ID":"5503d403-7960-40ee-86be-586e5c03a682","Type":"ContainerStarted","Data":"87036987c95b1e26631c102165eb0238665ff0429948b12a0dd0ecc927e43621"} Dec 09 12:08:50 crc kubenswrapper[4970]: I1209 12:08:50.143273 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgssk" event={"ID":"95531044-b9d4-4231-ac7c-be2850f2cbfd","Type":"ContainerStarted","Data":"b6532907579e2bad2fa0e66cec8c4a64aeca2c0a46c571583e23be3bbe4c0138"} Dec 09 12:08:50 crc kubenswrapper[4970]: I1209 12:08:50.155741 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:50 crc kubenswrapper[4970]: E1209 12:08:50.156746 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:50.656730577 +0000 UTC m=+143.217211628 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:50 crc kubenswrapper[4970]: I1209 12:08:50.156958 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgssk" Dec 09 12:08:50 crc kubenswrapper[4970]: I1209 12:08:50.167684 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fctsp" event={"ID":"8225c927-d82e-41f7-a335-f65c50f6f4ce","Type":"ContainerStarted","Data":"446883e8d2dda5242d2e2fd58cfcc1c5d3a637ccaeaa4d34a1c2099094ae0a2b"} Dec 09 12:08:50 crc kubenswrapper[4970]: I1209 12:08:50.170272 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9w89v" event={"ID":"0f48b902-332e-4fa2-9724-a947a6c8ad3a","Type":"ContainerStarted","Data":"389894c948ca417662cb033d2144b77de138ae7e143d6ae93b941b96d279b2fb"} Dec 09 12:08:50 crc kubenswrapper[4970]: I1209 12:08:50.176541 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jjr45" event={"ID":"8f239ab2-506b-4621-a2a4-0f0bd4b4b6d7","Type":"ContainerStarted","Data":"f334c47acbce77e75e6623ae7b48ae31026c067204df0d874ccdc1c6e8ae0839"} Dec 09 12:08:50 crc kubenswrapper[4970]: I1209 12:08:50.177360 4970 patch_prober.go:28] interesting pod/downloads-7954f5f757-frbcj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Dec 09 12:08:50 crc kubenswrapper[4970]: I1209 12:08:50.177403 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-frbcj" podUID="a96798ca-3ac0-4957-887c-763502663335" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Dec 09 12:08:50 crc kubenswrapper[4970]: I1209 12:08:50.185669 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-4bx5d" Dec 09 12:08:50 crc kubenswrapper[4970]: I1209 12:08:50.192592 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s58cx" podStartSLOduration=123.19257291 podStartE2EDuration="2m3.19257291s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:50.187437456 +0000 UTC m=+142.747918507" watchObservedRunningTime="2025-12-09 12:08:50.19257291 +0000 UTC m=+142.753053961" Dec 09 12:08:50 crc kubenswrapper[4970]: I1209 12:08:50.193867 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-xgfp5" podStartSLOduration=123.193860601 podStartE2EDuration="2m3.193860601s" podCreationTimestamp="2025-12-09 12:06:47 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:50.155522429 +0000 UTC m=+142.716003480" watchObservedRunningTime="2025-12-09 12:08:50.193860601 +0000 UTC m=+142.754341652" Dec 09 12:08:50 crc kubenswrapper[4970]: I1209 12:08:50.257942 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:50 crc kubenswrapper[4970]: E1209 12:08:50.271322 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:50.77130673 +0000 UTC m=+143.331787781 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:50 crc kubenswrapper[4970]: I1209 12:08:50.275633 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-hhcfv" podStartSLOduration=123.27560840699999 podStartE2EDuration="2m3.275608407s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:50.264217414 +0000 UTC m=+142.824698465" watchObservedRunningTime="2025-12-09 12:08:50.275608407 +0000 UTC m=+142.836089458" Dec 09 12:08:50 crc kubenswrapper[4970]: I1209 12:08:50.359764 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:50 crc kubenswrapper[4970]: E1209 12:08:50.360501 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:50.860469822 +0000 UTC m=+143.420950873 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:50 crc kubenswrapper[4970]: I1209 12:08:50.461881 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:50 crc kubenswrapper[4970]: E1209 12:08:50.472220 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:50.972187424 +0000 UTC m=+143.532668475 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:50 crc kubenswrapper[4970]: I1209 12:08:50.566059 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:50 crc kubenswrapper[4970]: E1209 12:08:50.566370 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:51.066347346 +0000 UTC m=+143.626828397 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:50 crc kubenswrapper[4970]: I1209 12:08:50.566594 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:50 crc kubenswrapper[4970]: E1209 12:08:50.566953 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:51.066937335 +0000 UTC m=+143.627418386 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:50 crc kubenswrapper[4970]: I1209 12:08:50.673996 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:50 crc kubenswrapper[4970]: E1209 12:08:50.674106 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:51.17408347 +0000 UTC m=+143.734564521 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:50 crc kubenswrapper[4970]: I1209 12:08:50.674176 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:50 crc kubenswrapper[4970]: E1209 12:08:50.674693 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:51.17468432 +0000 UTC m=+143.735165401 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:50 crc kubenswrapper[4970]: I1209 12:08:50.782787 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:50 crc kubenswrapper[4970]: E1209 12:08:50.783089 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:51.283072385 +0000 UTC m=+143.843553436 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:50 crc kubenswrapper[4970]: I1209 12:08:50.893091 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:50 crc kubenswrapper[4970]: E1209 12:08:50.893661 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:51.3936497 +0000 UTC m=+143.954130751 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:50 crc kubenswrapper[4970]: I1209 12:08:50.994919 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:50 crc kubenswrapper[4970]: E1209 12:08:50.995783 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:51.495768226 +0000 UTC m=+144.056249277 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.098121 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:51 crc kubenswrapper[4970]: E1209 12:08:51.098670 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:51.598646045 +0000 UTC m=+144.159127096 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.111525 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.203078 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:51 crc kubenswrapper[4970]: E1209 12:08:51.204018 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:51.703996584 +0000 UTC m=+144.264477695 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.204757 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wzw99" event={"ID":"c8828226-d424-4df9-b753-d3af05d4f07e","Type":"ContainerStarted","Data":"6c980436f97df98073b0f053ac01532585d0fc5cec7a427618bb1d8e405cdef7"} Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.219453 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6g6xm" event={"ID":"94df2ee0-4918-4bb9-b11f-4da797c1376f","Type":"ContainerStarted","Data":"88c216d2dc91fa022180c537cb9e0ca75c95ab135d1558c5007b02519b93fe9a"} Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.238681 4970 generic.go:334] "Generic (PLEG): container finished" podID="2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca" containerID="6969506ebe07bce8d2b9f1a8d00f3e67073576ecd4efbcceaddbb32d3a9c95da" exitCode=0 Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.238773 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" event={"ID":"2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca","Type":"ContainerDied","Data":"6969506ebe07bce8d2b9f1a8d00f3e67073576ecd4efbcceaddbb32d3a9c95da"} Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.252413 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wzw99" podStartSLOduration=124.252395597 podStartE2EDuration="2m4.252395597s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:51.247371027 +0000 UTC m=+143.807852088" watchObservedRunningTime="2025-12-09 12:08:51.252395597 +0000 UTC m=+143.812876648" Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.255677 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-25m9v" event={"ID":"fcbeabb7-793d-476c-98f3-e559f7342f1f","Type":"ContainerStarted","Data":"25e71f6e94cc6b231f8b11d02ae6999836d6eed7d16acc27568b29199daa957f"} Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.259323 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9w89v" event={"ID":"0f48b902-332e-4fa2-9724-a947a6c8ad3a","Type":"ContainerStarted","Data":"0e547407197cc1f29b1c9fd8eef1c11cda1688dd9a347d80204cd91316e8e2f6"} Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.283550 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r6fwb" event={"ID":"f7b83f2f-f485-4b2f-9fc8-87581c835972","Type":"ContainerStarted","Data":"44e4db820b8f8abd06d8465fe6777f83bfec20a9cab6a7e964a818eeb0d0d2fc"} Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.286098 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9vrft" event={"ID":"08ef4ab6-bd0e-4f1d-a66e-69ed7c4f05c6","Type":"ContainerStarted","Data":"1237496df96156d0f85c06d9ebe7527c9668cb01802e5f610436b960cacf4f15"} Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.304999 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:51 crc kubenswrapper[4970]: E1209 12:08:51.306183 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:51.806168981 +0000 UTC m=+144.366650032 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.321475 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-9gq7t" event={"ID":"25e268c5-a128-49ae-b42d-931529d85cc6","Type":"ContainerStarted","Data":"8b96466cda118878c6f7fed16ae71569260b86f055a20c9f64b14d152cc6d7d3"} Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.322430 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6g6xm" podStartSLOduration=124.322414319 podStartE2EDuration="2m4.322414319s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:51.321523631 +0000 UTC m=+143.882004682" watchObservedRunningTime="2025-12-09 12:08:51.322414319 +0000 UTC m=+143.882895370" Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.328224 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jjr45" event={"ID":"8f239ab2-506b-4621-a2a4-0f0bd4b4b6d7","Type":"ContainerStarted","Data":"4fb95009935606d75a54ddadcf3db2c20aa51a084da7ce9af562eb47d659bc47"} Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.329406 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jjr45" Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.331203 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l27ln" event={"ID":"5503d403-7960-40ee-86be-586e5c03a682","Type":"ContainerStarted","Data":"725e784cccf87449cedde815dce04704d2f078fda7f7ec4ad8ffdb21f8cd72ff"} Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.340178 4970 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-jjr45 container/catalog-operator 
namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body=
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.340239 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jjr45" podUID="8f239ab2-506b-4621-a2a4-0f0bd4b4b6d7" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused"
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.353540 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wqfw9" event={"ID":"2537d42d-31de-48b8-ae7a-afaba0d36376","Type":"ContainerStarted","Data":"ff9de0745f4465774e25638339a118c3b4be7fce5491f46e415550104ae05898"}
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.353583 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wqfw9" event={"ID":"2537d42d-31de-48b8-ae7a-afaba0d36376","Type":"ContainerStarted","Data":"521c288039981de5150a4092d6996acd0c698172623d8166ede6d81b01923248"}
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.407602 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-zwlc4" event={"ID":"c115cc0e-4491-4b7b-83f3-349433ce3b08","Type":"ContainerStarted","Data":"74ca082d4700ffee5e31f5759d9f2644599c609c2d4407bda9b1b2691b0920b5"}
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.418751 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:08:51 crc kubenswrapper[4970]: E1209 12:08:51.419761 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:51.919736612 +0000 UTC m=+144.480217713 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.439799 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5b99" event={"ID":"b5af52df-627e-4b20-a29e-3f66fbafe5b6","Type":"ContainerStarted","Data":"c86851e1b46a1f62241433b3c80131fbd29a531abaa51bc6580ec9fe5621e9d3"}
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.440651 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5b99"
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.443699 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tgr6h" event={"ID":"932056a2-9503-407e-9502-6c90b5ebe586","Type":"ContainerStarted","Data":"ee7e4bcecec4ddb9b6b523f99dde1930facaefcfc5c711a0eec60f668fc1e8c9"}
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.459360 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-zwlc4"
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.462103 4970 patch_prober.go:28] interesting pod/router-default-5444994796-zwlc4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 09 12:08:51 crc kubenswrapper[4970]: [-]has-synced failed: reason withheld
Dec 09 12:08:51 crc kubenswrapper[4970]: [+]process-running ok
Dec 09 12:08:51 crc kubenswrapper[4970]: healthz check failed
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.462153 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zwlc4" podUID="c115cc0e-4491-4b7b-83f3-349433ce3b08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.466145 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8lbhv" event={"ID":"8709261f-420d-4fee-908c-a7e1074959cb","Type":"ContainerStarted","Data":"8f24589b115b277533a06fa35ce345ed09e4fb2108afe044a4d76db5fde6406f"}
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.495700 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" event={"ID":"71161079-7313-4f68-b716-a4650e0af898","Type":"ContainerStarted","Data":"5fa79c4a01534940544b556b9a0d80a6a7080db0df288c880bd28566b7df36af"}
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.495738 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-8txkd"
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.502259 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r6fwb" podStartSLOduration=124.502229482 podStartE2EDuration="2m4.502229482s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:51.440015398 +0000 UTC m=+144.000496449" watchObservedRunningTime="2025-12-09 12:08:51.502229482 +0000 UTC m=+144.062710533"
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.518835 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qnl9t" event={"ID":"85e77654-6af7-4c35-bf92-f7c67d49dd2d","Type":"ContainerStarted","Data":"c72bfd9732c07162c107ee4ee5366a23f6b09eb959711da1f02b18aae2636aab"}
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.521308 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl"
Dec 09 12:08:51 crc kubenswrapper[4970]: E1209 12:08:51.524217 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:52.024199952 +0000 UTC m=+144.584681073 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.551708 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rm2zp" event={"ID":"a71edd8a-18a6-4d93-a967-e8d0858a6220","Type":"ContainerStarted","Data":"59c82c9341cfccaff667953b0c98496a2d33426f1106e83973c9bd7cfa218d27"}
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.551856 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jjr45" podStartSLOduration=124.551815553 podStartE2EDuration="2m4.551815553s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:51.503887065 +0000 UTC m=+144.064368116" watchObservedRunningTime="2025-12-09 12:08:51.551815553 +0000 UTC m=+144.112296604"
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.552579 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9vrft" podStartSLOduration=124.552574767 podStartE2EDuration="2m4.552574767s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:51.552071051 +0000 UTC m=+144.112552112" watchObservedRunningTime="2025-12-09 12:08:51.552574767 +0000 UTC m=+144.113055808"
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.613347 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-87shv" event={"ID":"cd84ab53-f698-428b-b831-cf6605d01965","Type":"ContainerStarted","Data":"0c13ff97dff0e4dd1fa10b4f97330b4544ff072d4384f4b5edabaf076b006cb6"}
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.618653 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-9w89v" podStartSLOduration=124.618635102 podStartE2EDuration="2m4.618635102s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:51.616353619 +0000 UTC m=+144.176834690" watchObservedRunningTime="2025-12-09 12:08:51.618635102 +0000 UTC m=+144.179116153"
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.623271 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:08:51 crc kubenswrapper[4970]: E1209 12:08:51.623416 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:52.123386223 +0000 UTC m=+144.683867274 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.623736 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl"
Dec 09 12:08:51 crc kubenswrapper[4970]: E1209 12:08:51.625602 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:52.125586453 +0000 UTC m=+144.686067524 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.657074 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gdjbp" event={"ID":"30c5a311-d7ec-49f9-9de2-535aed0af228","Type":"ContainerStarted","Data":"6e2c44bd4e9c05fdcb3af86aac27a85fa97eec4ae38cafc7d1988d901eb3a021"}
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.677596 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l27ln" podStartSLOduration=124.677577631 podStartE2EDuration="2m4.677577631s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:51.675640079 +0000 UTC m=+144.236121130" watchObservedRunningTime="2025-12-09 12:08:51.677577631 +0000 UTC m=+144.238058682"
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.699890 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-84hbp" event={"ID":"9a65f95f-2f95-4750-a78a-08354df01f6d","Type":"ContainerStarted","Data":"5f0279c9cee236ef5d016c292babd0482dc0de39632ab8c3cc452b11a6f6c42b"}
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.726971 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
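The probe failures recorded above ("connect: connection refused" against https://10.217.0.35:8443/healthz, and the router's HTTP 500 with failing backend-http/has-synced sub-checks) are what kubelet's prober reports while a container's server is not yet listening or not yet synced. Below is a minimal Go sketch of what such an HTTPS probe attempt boils down to; the endpoint, timeout, and TLS handling are illustrative assumptions, not kubelet's actual prober configuration:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: time.Second, // probes enforce a short per-attempt timeout
		Transport: &http.Transport{
			// probe targets commonly serve self-signed certificates
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://10.217.0.35:8443/healthz")
	if err != nil {
		// until the container binds its port this yields exactly the
		// "dial tcp ...: connect: connection refused" seen in the log
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	// a non-2xx status (for example the router's 500) also counts as failure
	fmt.Println("probe status:", resp.StatusCode)
}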
Dec 09 12:08:51 crc kubenswrapper[4970]: E1209 12:08:51.731373 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:52.231344255 +0000 UTC m=+144.791825296 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.747862 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-wqfw9" podStartSLOduration=124.747842631 podStartE2EDuration="2m4.747842631s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:51.726216101 +0000 UTC m=+144.286697152" watchObservedRunningTime="2025-12-09 12:08:51.747842631 +0000 UTC m=+144.308323682"
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.752680 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-25m9v" podStartSLOduration=124.752658595 podStartE2EDuration="2m4.752658595s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:51.749170843 +0000 UTC m=+144.309651894" watchObservedRunningTime="2025-12-09 12:08:51.752658595 +0000 UTC m=+144.313139646"
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.759740 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8qz75" event={"ID":"aa8c9620-293f-4658-9c69-0b843396613b","Type":"ContainerStarted","Data":"94f8afa8cd4cb1e466ef905507b20aebc89fd78b84068ce2f19ab511242664da"}
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.762540 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8qz75"
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.773445 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-9gq7t" podStartSLOduration=8.773431007 podStartE2EDuration="8.773431007s" podCreationTimestamp="2025-12-09 12:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:51.772636581 +0000 UTC m=+144.333117632" watchObservedRunningTime="2025-12-09 12:08:51.773431007 +0000 UTC m=+144.333912058"
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.791328 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qmvms" event={"ID":"de05fec3-6b27-4c6f-9d72-ed1a35954ba1","Type":"ContainerStarted","Data":"85448fdf8b7b71462921b83382c60f28f406b51c7344fecde1d0a462a715f81b"}
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.792005 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qmvms"
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.813515 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qmwx4" event={"ID":"86230d76-ca5d-4040-9947-8c87ba16c8b0","Type":"ContainerStarted","Data":"a2f402c2e6f2ef25c3480f87f41f6b08961ed70de8f58f2654b19ea54a94ca62"}
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.813552 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qmwx4" event={"ID":"86230d76-ca5d-4040-9947-8c87ba16c8b0","Type":"ContainerStarted","Data":"8ac3e5103ca4ca59f7043d915263bb25cf61673cdc068af12a795d5ae5afb2cf"}
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.816217 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8qz75" podStartSLOduration=124.81620803 podStartE2EDuration="2m4.81620803s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:51.814330051 +0000 UTC m=+144.374811102" watchObservedRunningTime="2025-12-09 12:08:51.81620803 +0000 UTC m=+144.376689081"
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.833489 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xnzx7" event={"ID":"67faef84-d54f-4d10-8f6e-eb0702f53a25","Type":"ContainerStarted","Data":"8b548a0fa6ecfd7f32f21ae8145fcd1c490343abf6878aa6c3e501d90c65684c"}
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.838419 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl"
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.839963 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-8txkd"
Dec 09 12:08:51 crc kubenswrapper[4970]: E1209 12:08:51.839983 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:52.339972688 +0000 UTC m=+144.900453739 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.867203 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-8lbhv" podStartSLOduration=124.867177275 podStartE2EDuration="2m4.867177275s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:51.86357407 +0000 UTC m=+144.424055121" watchObservedRunningTime="2025-12-09 12:08:51.867177275 +0000 UTC m=+144.427658326"
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.882285 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-cjsvb" event={"ID":"a31fc8e6-139a-4961-afc3-a42e999cfc32","Type":"ContainerStarted","Data":"ceda8b23c6153e4eac8427599b9edc58d2f1c0b90ce3ee02086fc25885d4f6f5"}
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.882334 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-cjsvb" event={"ID":"a31fc8e6-139a-4961-afc3-a42e999cfc32","Type":"ContainerStarted","Data":"b6c1d0d40866a650cd58bfd2a668403442a2ee353a28f3562a584b7343a2943c"}
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.913180 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jl2n2" event={"ID":"5156ea95-86b1-44f3-979a-87fd807945c7","Type":"ContainerStarted","Data":"cc00867ed19387c53544c4b2a94e9d5f9c0d27635118436faf6d39e5d3372354"}
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.922488 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2d6zv" event={"ID":"10beda09-a5fb-4526-9609-97ecb84ba8b9","Type":"ContainerStarted","Data":"6eb92204ac217d9ca90f73d3de83d98dd2eb392365ccf9aae364039140673374"}
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.923404 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2d6zv"
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.931368 4970 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-2d6zv container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body=
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.931416 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2d6zv" podUID="10beda09-a5fb-4526-9609-97ecb84ba8b9" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused"
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.942809 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:08:51 crc kubenswrapper[4970]: E1209 12:08:51.944179 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:52.44416304 +0000 UTC m=+145.004644091 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.952411 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-zwlc4" podStartSLOduration=124.952393822 podStartE2EDuration="2m4.952393822s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:51.905476516 +0000 UTC m=+144.465957567" watchObservedRunningTime="2025-12-09 12:08:51.952393822 +0000 UTC m=+144.512874873"
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.969574 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fctsp" event={"ID":"8225c927-d82e-41f7-a335-f65c50f6f4ce","Type":"ContainerStarted","Data":"7b39f66f1eafb572bc0118a2383eb8f9df2303d8368510a41d3fdbfa2388e7bf"}
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.984601 4970 generic.go:334] "Generic (PLEG): container finished" podID="96aedd83-c6c1-4b08-8d47-43cd63aaae68" containerID="05f541831293b31c7d001d88165312eb259ea66b064296a1f5986692ed85fcba" exitCode=0
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.985499 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-vvgqb" event={"ID":"96aedd83-c6c1-4b08-8d47-43cd63aaae68","Type":"ContainerDied","Data":"05f541831293b31c7d001d88165312eb259ea66b064296a1f5986692ed85fcba"}
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.990549 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" podStartSLOduration=124.990535108 podStartE2EDuration="2m4.990535108s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:51.990423454 +0000 UTC m=+144.550904535" watchObservedRunningTime="2025-12-09 12:08:51.990535108 +0000 UTC m=+144.551016159"
Dec 09 12:08:51 crc kubenswrapper[4970]: I1209 12:08:51.991492 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-87shv" podStartSLOduration=123.991485968 podStartE2EDuration="2m3.991485968s" podCreationTimestamp="2025-12-09 12:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:51.954618513 +0000 UTC m=+144.515099564" watchObservedRunningTime="2025-12-09 12:08:51.991485968 +0000 UTC m=+144.551967019"
Dec 09 12:08:52 crc kubenswrapper[4970]: I1209 12:08:52.032439 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-xgfp5"
Dec 09 12:08:52 crc kubenswrapper[4970]: I1209 12:08:52.046092 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl"
Dec 09 12:08:52 crc kubenswrapper[4970]: I1209 12:08:52.049671 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-tgr6h" podStartSLOduration=125.049655133 podStartE2EDuration="2m5.049655133s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:52.049463757 +0000 UTC m=+144.609944808" watchObservedRunningTime="2025-12-09 12:08:52.049655133 +0000 UTC m=+144.610136184"
Dec 09 12:08:52 crc kubenswrapper[4970]: E1209 12:08:52.050735 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:52.550722657 +0000 UTC m=+145.111203708 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:08:52 crc kubenswrapper[4970]: I1209 12:08:52.077709 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5b99" podStartSLOduration=125.077694897 podStartE2EDuration="2m5.077694897s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:52.075584539 +0000 UTC m=+144.636065600" watchObservedRunningTime="2025-12-09 12:08:52.077694897 +0000 UTC m=+144.638175948"
Dec 09 12:08:52 crc kubenswrapper[4970]: I1209 12:08:52.114875 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-84hbp" podStartSLOduration=125.114860172 podStartE2EDuration="2m5.114860172s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:52.114625784 +0000 UTC m=+144.675106835" watchObservedRunningTime="2025-12-09 12:08:52.114860172 +0000 UTC m=+144.675341223"
Dec 09 12:08:52 crc kubenswrapper[4970]: I1209 12:08:52.141976 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gdjbp" podStartSLOduration=125.141960755 podStartE2EDuration="2m5.141960755s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:52.140129787 +0000 UTC m=+144.700610838" watchObservedRunningTime="2025-12-09 12:08:52.141960755 +0000 UTC m=+144.702441806"
Dec 09 12:08:52 crc kubenswrapper[4970]: I1209 12:08:52.146806 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:08:52 crc kubenswrapper[4970]: E1209 12:08:52.148331 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:52.648312338 +0000 UTC m=+145.208793399 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:08:52 crc kubenswrapper[4970]: I1209 12:08:52.250970 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl"
Dec 09 12:08:52 crc kubenswrapper[4970]: E1209 12:08:52.251469 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:52.751458446 +0000 UTC m=+145.311939497 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:08:52 crc kubenswrapper[4970]: I1209 12:08:52.305941 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2d6zv" podStartSLOduration=125.305922103 podStartE2EDuration="2m5.305922103s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:52.210297584 +0000 UTC m=+144.770778635" watchObservedRunningTime="2025-12-09 12:08:52.305922103 +0000 UTC m=+144.866403154"
Dec 09 12:08:52 crc kubenswrapper[4970]: I1209 12:08:52.357632 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:08:52 crc kubenswrapper[4970]: E1209 12:08:52.358183 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:52.858168948 +0000 UTC m=+145.418649999 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:08:52 crc kubenswrapper[4970]: I1209 12:08:52.382176 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-cjsvb" podStartSLOduration=124.382158453 podStartE2EDuration="2m4.382158453s" podCreationTimestamp="2025-12-09 12:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:52.38142903 +0000 UTC m=+144.941910081" watchObservedRunningTime="2025-12-09 12:08:52.382158453 +0000 UTC m=+144.942639504"
Dec 09 12:08:52 crc kubenswrapper[4970]: I1209 12:08:52.447790 4970 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-d5b99 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 09 12:08:52 crc kubenswrapper[4970]: I1209 12:08:52.447870 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5b99" podUID="b5af52df-627e-4b20-a29e-3f66fbafe5b6" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.21:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 09 12:08:52 crc kubenswrapper[4970]: I1209 12:08:52.458404 4970 patch_prober.go:28] interesting pod/router-default-5444994796-zwlc4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 09 12:08:52 crc kubenswrapper[4970]: [-]has-synced failed: reason withheld
Dec 09 12:08:52 crc kubenswrapper[4970]: [+]process-running ok
Dec 09 12:08:52 crc kubenswrapper[4970]: healthz check failed
Dec 09 12:08:52 crc kubenswrapper[4970]: I1209 12:08:52.458462 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zwlc4" podUID="c115cc0e-4491-4b7b-83f3-349433ce3b08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 09 12:08:52 crc kubenswrapper[4970]: I1209 12:08:52.459003 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl"
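Each failed CSI operation above is parked by nestedpendingoperations with a "No retries permitted until ..." deadline 500ms in the future (durationBeforeRetry), so the reconciler's next pass is refused until the window expires. Below is a simplified Go sketch of that gating under the assumption of a fixed 500ms delay, which is what this log shows; it is an illustration, not kubelet's actual code, and the real implementation also grows the delay exponentially for operations that keep failing:

package main

import (
	"errors"
	"fmt"
	"time"
)

type pendingOp struct {
	failures  int
	notBefore time.Time // the "No retries permitted until ..." deadline
}

func (op *pendingOp) markFailed(now time.Time) {
	op.failures++
	// hypothetical policy matching the log's "durationBeforeRetry 500ms"
	op.notBefore = now.Add(500 * time.Millisecond)
}

// tryRun refuses to run the operation while the backoff window is open.
func (op *pendingOp) tryRun(now time.Time, run func() error) error {
	if now.Before(op.notBefore) {
		return fmt.Errorf("no retries permitted until %s", op.notBefore.Format(time.RFC3339Nano))
	}
	if err := run(); err != nil {
		op.markFailed(now)
		return err
	}
	op.failures = 0
	return nil
}

func main() {
	op := &pendingOp{}
	mount := func() error { return errors.New("driver not registered") }
	for i := 0; i < 3; i++ {
		if err := op.tryRun(time.Now(), mount); err != nil {
			fmt.Println("attempt", i, "->", err)
		}
		time.Sleep(600 * time.Millisecond) // wait out the backoff window
	}
}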
Dec 09 12:08:52 crc kubenswrapper[4970]: E1209 12:08:52.459363 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:52.959353064 +0000 UTC m=+145.519834105 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:08:52 crc kubenswrapper[4970]: I1209 12:08:52.560073 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:08:52 crc kubenswrapper[4970]: E1209 12:08:52.560409 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:53.060389675 +0000 UTC m=+145.620870726 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:08:52 crc kubenswrapper[4970]: I1209 12:08:52.570103 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-fctsp" podStartSLOduration=125.570084404 podStartE2EDuration="2m5.570084404s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:52.479598059 +0000 UTC m=+145.040079110" watchObservedRunningTime="2025-12-09 12:08:52.570084404 +0000 UTC m=+145.130565455"
Dec 09 12:08:52 crc kubenswrapper[4970]: I1209 12:08:52.637098 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qmvms" podStartSLOduration=125.63708168 podStartE2EDuration="2m5.63708168s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:52.569553457 +0000 UTC m=+145.130034508" watchObservedRunningTime="2025-12-09 12:08:52.63708168 +0000 UTC m=+145.197562731"
Dec 09 12:08:52 crc kubenswrapper[4970]: I1209 12:08:52.638491 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jl2n2" podStartSLOduration=125.638483275 podStartE2EDuration="2m5.638483275s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:52.635657475 +0000 UTC m=+145.196138606" watchObservedRunningTime="2025-12-09 12:08:52.638483275 +0000 UTC m=+145.198964326"
Dec 09 12:08:52 crc kubenswrapper[4970]: I1209 12:08:52.662349 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl"
Dec 09 12:08:52 crc kubenswrapper[4970]: E1209 12:08:52.662683 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:53.162669616 +0000 UTC m=+145.723150667 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:08:52 crc kubenswrapper[4970]: I1209 12:08:52.673611 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qmwx4" podStartSLOduration=125.673591764 podStartE2EDuration="2m5.673591764s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:52.673048137 +0000 UTC m=+145.233529188" watchObservedRunningTime="2025-12-09 12:08:52.673591764 +0000 UTC m=+145.234072815"
Dec 09 12:08:52 crc kubenswrapper[4970]: I1209 12:08:52.763343 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:08:52 crc kubenswrapper[4970]: E1209 12:08:52.763736 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:53.263717937 +0000 UTC m=+145.824198988 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:08:52 crc kubenswrapper[4970]: I1209 12:08:52.865203 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl"
Dec 09 12:08:52 crc kubenswrapper[4970]: E1209 12:08:52.865574 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:53.365560294 +0000 UTC m=+145.926041345 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:08:52 crc kubenswrapper[4970]: I1209 12:08:52.966782 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:08:52 crc kubenswrapper[4970]: E1209 12:08:52.967059 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:53.467044219 +0000 UTC m=+146.027525270 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:08:52 crc kubenswrapper[4970]: I1209 12:08:52.992112 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qnl9t" event={"ID":"85e77654-6af7-4c35-bf92-f7c67d49dd2d","Type":"ContainerStarted","Data":"3dbd2457d65f64e196d51bc721a588bb48a1ff9023b9f62a3b96059c0a21bbf6"}
Dec 09 12:08:52 crc kubenswrapper[4970]: I1209 12:08:52.994311 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qmwx4" event={"ID":"86230d76-ca5d-4040-9947-8c87ba16c8b0","Type":"ContainerStarted","Data":"33a32760217fdfb4bcb7530150ce86759bec7d423dad0338b7ac17af3e6cf2e7"}
Dec 09 12:08:52 crc kubenswrapper[4970]: I1209 12:08:52.996632 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xnzx7" event={"ID":"67faef84-d54f-4d10-8f6e-eb0702f53a25","Type":"ContainerStarted","Data":"30c48e1033ad500c65d922aabb151cd435534e32114fb1c0d8282747bdca149e"}
Dec 09 12:08:52 crc kubenswrapper[4970]: I1209 12:08:52.996662 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-xnzx7" event={"ID":"67faef84-d54f-4d10-8f6e-eb0702f53a25","Type":"ContainerStarted","Data":"4bda5d15f93b65c55b30adc4ef1b68fe461f4bd44d1a2d0b91d38c7811b37b05"}
Dec 09 12:08:52 crc kubenswrapper[4970]: I1209 12:08:52.997059 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-xnzx7"
Dec 09 12:08:52 crc kubenswrapper[4970]: I1209 12:08:52.998738 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fctsp" event={"ID":"8225c927-d82e-41f7-a335-f65c50f6f4ce","Type":"ContainerStarted","Data":"8c851d31fc0fcb861cdfcd18da04e1e9003bd5ca61a8b96eeac760c8a064114a"}
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.001026 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rm2zp" event={"ID":"a71edd8a-18a6-4d93-a967-e8d0858a6220","Type":"ContainerStarted","Data":"d5a1364ad382d8d21edbbe61359bd854cad9d54e6ff28db20eca4251750c411b"}
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.002994 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" event={"ID":"2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca","Type":"ContainerStarted","Data":"cdcc91a81436b871c61fb367ab71d8072750f49494d742ef3e089cbdc6aa8b52"}
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.003022 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" event={"ID":"2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca","Type":"ContainerStarted","Data":"3ae87271ea7422edb7efe96fc9563044ceff2247a96e07300ecff3a074b39906"}
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.004728 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qmvms" event={"ID":"de05fec3-6b27-4c6f-9d72-ed1a35954ba1","Type":"ContainerStarted","Data":"dc81ba1bfc441aa4d2229fb05f8465d2941cf1ff1a39e6cf78cc263f15b1396c"}
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.012601 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kxbq2" event={"ID":"5152d972-1997-46ba-9a0e-905c547895ff","Type":"ContainerStarted","Data":"952d1d8d8c55849a97618a63813a82bec86e2522f695fee9660f595a3385c191"}
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.015653 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-n9dr5" event={"ID":"efd00913-5fd9-4268-b753-a449b7d25a16","Type":"ContainerStarted","Data":"375e9cd0dd4fe2214a89dac20ffc6823c062961a7510c76ef034d4735a1b7665"}
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.017855 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-l27ln" event={"ID":"5503d403-7960-40ee-86be-586e5c03a682","Type":"ContainerStarted","Data":"7f10ddb94800b6531e48f584d36bad2237559ba46e8266aec138d7213b4f9698"}
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.028451 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2d6zv" event={"ID":"10beda09-a5fb-4526-9609-97ecb84ba8b9","Type":"ContainerStarted","Data":"5aa3097720fd1d2724de16faa705b9a32e273ba9c00af3aaa3ee66641b06c5e9"}
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.046554 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jjr45"
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.046663 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8qz75"
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.058530 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d5b99"
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.069182 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl"
Dec 09 12:08:53 crc kubenswrapper[4970]: E1209 12:08:53.069762 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:53.569747933 +0000 UTC m=+146.130228984 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.073688 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-qnl9t" podStartSLOduration=126.073674149 podStartE2EDuration="2m6.073674149s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:53.022397904 +0000 UTC m=+145.582878955" watchObservedRunningTime="2025-12-09 12:08:53.073674149 +0000 UTC m=+145.634155200"
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.076361 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" podStartSLOduration=126.076352174 podStartE2EDuration="2m6.076352174s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:53.073643348 +0000 UTC m=+145.634124399" watchObservedRunningTime="2025-12-09 12:08:53.076352174 +0000 UTC m=+145.636833235"
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.091206 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2d6zv"
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.130705 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kxbq2" podStartSLOduration=126.130688976 podStartE2EDuration="2m6.130688976s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:53.103594862 +0000 UTC m=+145.664075913" watchObservedRunningTime="2025-12-09 12:08:53.130688976 +0000 UTC m=+145.691170027"
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.171640 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
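The pod_startup_latency_tracker entries are plain timestamp arithmetic: podStartSLOduration lines up with observedRunningTime minus podCreationTimestamp (firstStartedPulling and lastFinishedPulling stay at the zero time here because no image pull was needed). A quick Go check against the dns-operator entry above; the field relationship is inferred from the logged values, not taken from kubelet source:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	// values copied from the dns-operator tracker entry; parse errors
	// elided for brevity in this sketch
	created, _ := time.Parse(layout, "2025-12-09 12:06:47 +0000 UTC")
	running, _ := time.Parse(layout, "2025-12-09 12:08:53.073674149 +0000 UTC")
	fmt.Println(running.Sub(created)) // 2m6.073674149s, matching podStartE2EDuration
}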
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.187405 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-xnzx7" podStartSLOduration=10.187388334 podStartE2EDuration="10.187388334s" podCreationTimestamp="2025-12-09 12:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:53.131098279 +0000 UTC m=+145.691579330" watchObservedRunningTime="2025-12-09 12:08:53.187388334 +0000 UTC m=+145.747869385" Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.245613 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-n9dr5" podStartSLOduration=126.24559856 podStartE2EDuration="2m6.24559856s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:53.189220702 +0000 UTC m=+145.749701763" watchObservedRunningTime="2025-12-09 12:08:53.24559856 +0000 UTC m=+145.806079611" Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.283770 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:53 crc kubenswrapper[4970]: E1209 12:08:53.284266 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:53.784235761 +0000 UTC m=+146.344716812 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.384718 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:53 crc kubenswrapper[4970]: E1209 12:08:53.385303 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-09 12:08:53.885287693 +0000 UTC m=+146.445768744 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.460788 4970 patch_prober.go:28] interesting pod/router-default-5444994796-zwlc4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 12:08:53 crc kubenswrapper[4970]: [-]has-synced failed: reason withheld Dec 09 12:08:53 crc kubenswrapper[4970]: [+]process-running ok Dec 09 12:08:53 crc kubenswrapper[4970]: healthz check failed Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.460846 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zwlc4" podUID="c115cc0e-4491-4b7b-83f3-349433ce3b08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.487002 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:53 crc kubenswrapper[4970]: E1209 12:08:53.487471 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:53.9874486 +0000 UTC m=+146.547929661 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.559806 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vwchd"] Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.560746 4970 util.go:30] "No sandbox for pod can be found. 
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.566615 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.585578 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vwchd"]
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.587661 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:08:53 crc kubenswrapper[4970]: E1209 12:08:53.589039 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:54.089023928 +0000 UTC m=+146.649504979 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.683860 4970 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.690005 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl"
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.690120 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0119c7a7-a4a5-4364-9682-6de2fcd6f02f-catalog-content\") pod \"community-operators-vwchd\" (UID: \"0119c7a7-a4a5-4364-9682-6de2fcd6f02f\") " pod="openshift-marketplace/community-operators-vwchd"
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.690175 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0119c7a7-a4a5-4364-9682-6de2fcd6f02f-utilities\") pod \"community-operators-vwchd\" (UID: \"0119c7a7-a4a5-4364-9682-6de2fcd6f02f\") " pod="openshift-marketplace/community-operators-vwchd"
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.690212 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8cnm\" (UniqueName: \"kubernetes.io/projected/0119c7a7-a4a5-4364-9682-6de2fcd6f02f-kube-api-access-m8cnm\") pod \"community-operators-vwchd\" (UID: \"0119c7a7-a4a5-4364-9682-6de2fcd6f02f\") " pod="openshift-marketplace/community-operators-vwchd"
Dec 09 12:08:53 crc kubenswrapper[4970]: E1209 12:08:53.690575 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:54.190559355 +0000 UTC m=+146.751040416 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.727768 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-vvgqb"
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.769364 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qcxnq"]
Dec 09 12:08:53 crc kubenswrapper[4970]: E1209 12:08:53.769539 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96aedd83-c6c1-4b08-8d47-43cd63aaae68" containerName="collect-profiles"
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.769550 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="96aedd83-c6c1-4b08-8d47-43cd63aaae68" containerName="collect-profiles"
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.769628 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="96aedd83-c6c1-4b08-8d47-43cd63aaae68" containerName="collect-profiles"
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.770209 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qcxnq"
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.773551 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.790694 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.790865 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0119c7a7-a4a5-4364-9682-6de2fcd6f02f-catalog-content\") pod \"community-operators-vwchd\" (UID: \"0119c7a7-a4a5-4364-9682-6de2fcd6f02f\") " pod="openshift-marketplace/community-operators-vwchd"
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.790901 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0119c7a7-a4a5-4364-9682-6de2fcd6f02f-utilities\") pod \"community-operators-vwchd\" (UID: \"0119c7a7-a4a5-4364-9682-6de2fcd6f02f\") " pod="openshift-marketplace/community-operators-vwchd"
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.790923 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8cnm\" (UniqueName: \"kubernetes.io/projected/0119c7a7-a4a5-4364-9682-6de2fcd6f02f-kube-api-access-m8cnm\") pod \"community-operators-vwchd\" (UID: \"0119c7a7-a4a5-4364-9682-6de2fcd6f02f\") " pod="openshift-marketplace/community-operators-vwchd"
Dec 09 12:08:53 crc kubenswrapper[4970]: E1209 12:08:53.791217 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:54.291204003 +0000 UTC m=+146.851685054 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.791863 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0119c7a7-a4a5-4364-9682-6de2fcd6f02f-catalog-content\") pod \"community-operators-vwchd\" (UID: \"0119c7a7-a4a5-4364-9682-6de2fcd6f02f\") " pod="openshift-marketplace/community-operators-vwchd"
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.792063 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0119c7a7-a4a5-4364-9682-6de2fcd6f02f-utilities\") pod \"community-operators-vwchd\" (UID: \"0119c7a7-a4a5-4364-9682-6de2fcd6f02f\") " pod="openshift-marketplace/community-operators-vwchd"
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.806685 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qcxnq"]
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.840461 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8cnm\" (UniqueName: \"kubernetes.io/projected/0119c7a7-a4a5-4364-9682-6de2fcd6f02f-kube-api-access-m8cnm\") pod \"community-operators-vwchd\" (UID: \"0119c7a7-a4a5-4364-9682-6de2fcd6f02f\") " pod="openshift-marketplace/community-operators-vwchd"
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.894731 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5swxf\" (UniqueName: \"kubernetes.io/projected/96aedd83-c6c1-4b08-8d47-43cd63aaae68-kube-api-access-5swxf\") pod \"96aedd83-c6c1-4b08-8d47-43cd63aaae68\" (UID: \"96aedd83-c6c1-4b08-8d47-43cd63aaae68\") "
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.894793 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96aedd83-c6c1-4b08-8d47-43cd63aaae68-config-volume\") pod \"96aedd83-c6c1-4b08-8d47-43cd63aaae68\" (UID: \"96aedd83-c6c1-4b08-8d47-43cd63aaae68\") "
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.894841 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/96aedd83-c6c1-4b08-8d47-43cd63aaae68-secret-volume\") pod \"96aedd83-c6c1-4b08-8d47-43cd63aaae68\" (UID: \"96aedd83-c6c1-4b08-8d47-43cd63aaae68\") "
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.895006 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl"
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.895073 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/343921a4-b33b-485d-b5ac-84d3d87b6ed7-catalog-content\") pod \"certified-operators-qcxnq\" (UID: \"343921a4-b33b-485d-b5ac-84d3d87b6ed7\") " pod="openshift-marketplace/certified-operators-qcxnq"
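
The reconciler_common.go lines repeat because kubelet's volume manager continuously reconciles a desired state (volumes the scheduled pods need) against an actual state (volumes currently attached and mounted). A toy reconcile pass under that assumption, not the real reconciler:

    package main

    import "fmt"

    // reconcile mounts anything desired-but-absent and unmounts anything
    // present-but-no-longer-desired, echoing the log's two message shapes.
    func reconcile(desired, actual map[string]bool) {
        for v := range desired {
            if !actual[v] {
                fmt.Println("operationExecutor.MountVolume started for volume", v)
            }
        }
        for v := range actual {
            if !desired[v] {
                fmt.Println("operationExecutor.UnmountVolume started for volume", v)
            }
        }
    }

    func main() {
        desired := map[string]bool{"catalog-content": true, "utilities": true}
        actual := map[string]bool{"utilities": true, "kube-api-access-5swxf": true}
        reconcile(desired, actual)
    }
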
\"kubernetes.io/empty-dir/343921a4-b33b-485d-b5ac-84d3d87b6ed7-catalog-content\") pod \"certified-operators-qcxnq\" (UID: \"343921a4-b33b-485d-b5ac-84d3d87b6ed7\") " pod="openshift-marketplace/certified-operators-qcxnq" Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.895093 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vfdh\" (UniqueName: \"kubernetes.io/projected/343921a4-b33b-485d-b5ac-84d3d87b6ed7-kube-api-access-5vfdh\") pod \"certified-operators-qcxnq\" (UID: \"343921a4-b33b-485d-b5ac-84d3d87b6ed7\") " pod="openshift-marketplace/certified-operators-qcxnq" Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.895116 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/343921a4-b33b-485d-b5ac-84d3d87b6ed7-utilities\") pod \"certified-operators-qcxnq\" (UID: \"343921a4-b33b-485d-b5ac-84d3d87b6ed7\") " pod="openshift-marketplace/certified-operators-qcxnq" Dec 09 12:08:53 crc kubenswrapper[4970]: E1209 12:08:53.895741 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:54.395726916 +0000 UTC m=+146.956207957 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.896053 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96aedd83-c6c1-4b08-8d47-43cd63aaae68-config-volume" (OuterVolumeSpecName: "config-volume") pod "96aedd83-c6c1-4b08-8d47-43cd63aaae68" (UID: "96aedd83-c6c1-4b08-8d47-43cd63aaae68"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.903936 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96aedd83-c6c1-4b08-8d47-43cd63aaae68-kube-api-access-5swxf" (OuterVolumeSpecName: "kube-api-access-5swxf") pod "96aedd83-c6c1-4b08-8d47-43cd63aaae68" (UID: "96aedd83-c6c1-4b08-8d47-43cd63aaae68"). InnerVolumeSpecName "kube-api-access-5swxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.917556 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96aedd83-c6c1-4b08-8d47-43cd63aaae68-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "96aedd83-c6c1-4b08-8d47-43cd63aaae68" (UID: "96aedd83-c6c1-4b08-8d47-43cd63aaae68"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.929479 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vwchd" Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.946981 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dh2c2"] Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.947849 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dh2c2" Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.958390 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dh2c2"] Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.996724 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.996924 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/343921a4-b33b-485d-b5ac-84d3d87b6ed7-catalog-content\") pod \"certified-operators-qcxnq\" (UID: \"343921a4-b33b-485d-b5ac-84d3d87b6ed7\") " pod="openshift-marketplace/certified-operators-qcxnq" Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.996945 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vfdh\" (UniqueName: \"kubernetes.io/projected/343921a4-b33b-485d-b5ac-84d3d87b6ed7-kube-api-access-5vfdh\") pod \"certified-operators-qcxnq\" (UID: \"343921a4-b33b-485d-b5ac-84d3d87b6ed7\") " pod="openshift-marketplace/certified-operators-qcxnq" Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.996965 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/343921a4-b33b-485d-b5ac-84d3d87b6ed7-utilities\") pod \"certified-operators-qcxnq\" (UID: \"343921a4-b33b-485d-b5ac-84d3d87b6ed7\") " pod="openshift-marketplace/certified-operators-qcxnq" Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.997033 4970 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96aedd83-c6c1-4b08-8d47-43cd63aaae68-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.997045 4970 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/96aedd83-c6c1-4b08-8d47-43cd63aaae68-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.997054 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5swxf\" (UniqueName: \"kubernetes.io/projected/96aedd83-c6c1-4b08-8d47-43cd63aaae68-kube-api-access-5swxf\") on node \"crc\" DevicePath \"\"" Dec 09 12:08:53 crc kubenswrapper[4970]: E1209 12:08:53.997265 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:54.497225112 +0000 UTC m=+147.057706163 (durationBeforeRetry 500ms). 
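
The operation_generator.go:803 entries above show TearDown dispatched by PluginName: the configmap, projected, and secret volumes of the finished collect-profiles pod tear down immediately, while the CSI volume keeps failing until its driver registers. A hypothetical dispatch table making that contrast concrete (the teardown bodies are invented, not kubelet code):

    package main

    import "fmt"

    type tearDownFunc func(volume string) error

    func main() {
        plugins := map[string]tearDownFunc{
            "kubernetes.io/configmap": func(v string) error { return nil },
            "kubernetes.io/secret":    func(v string) error { return nil },
            "kubernetes.io/csi": func(v string) error {
                // Fails until the driver is registered, as in the log above.
                return fmt.Errorf("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")
            },
        }
        for name, td := range plugins {
            if err := td("pvc-657094db"); err != nil {
                fmt.Printf("UnmountVolume.TearDown failed (%s): %v\n", name, err)
            } else {
                fmt.Printf("UnmountVolume.TearDown succeeded (%s)\n", name)
            }
        }
    }
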
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.997726 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/343921a4-b33b-485d-b5ac-84d3d87b6ed7-utilities\") pod \"certified-operators-qcxnq\" (UID: \"343921a4-b33b-485d-b5ac-84d3d87b6ed7\") " pod="openshift-marketplace/certified-operators-qcxnq"
Dec 09 12:08:53 crc kubenswrapper[4970]: I1209 12:08:53.997817 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/343921a4-b33b-485d-b5ac-84d3d87b6ed7-catalog-content\") pod \"certified-operators-qcxnq\" (UID: \"343921a4-b33b-485d-b5ac-84d3d87b6ed7\") " pod="openshift-marketplace/certified-operators-qcxnq"
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.020990 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vfdh\" (UniqueName: \"kubernetes.io/projected/343921a4-b33b-485d-b5ac-84d3d87b6ed7-kube-api-access-5vfdh\") pod \"certified-operators-qcxnq\" (UID: \"343921a4-b33b-485d-b5ac-84d3d87b6ed7\") " pod="openshift-marketplace/certified-operators-qcxnq"
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.034289 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rm2zp" event={"ID":"a71edd8a-18a6-4d93-a967-e8d0858a6220","Type":"ContainerStarted","Data":"7ff4a95444cc4b0714ba4bb127fa0af648b8747bd06ac33e32c52f1470d39e20"}
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.035947 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-vvgqb"
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.041859 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421360-vvgqb" event={"ID":"96aedd83-c6c1-4b08-8d47-43cd63aaae68","Type":"ContainerDied","Data":"c040d809744b7f9c3234a3efcf961b4d89695083c88cb9d4b7b01c5085350dc4"}
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.042109 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c040d809744b7f9c3234a3efcf961b4d89695083c88cb9d4b7b01c5085350dc4"
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.090310 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qcxnq"
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.098594 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fq64\" (UniqueName: \"kubernetes.io/projected/3ef5d3a5-6c33-435e-b59b-a3f5f815cb80-kube-api-access-5fq64\") pod \"community-operators-dh2c2\" (UID: \"3ef5d3a5-6c33-435e-b59b-a3f5f815cb80\") " pod="openshift-marketplace/community-operators-dh2c2"
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.098654 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl"
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.098683 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ef5d3a5-6c33-435e-b59b-a3f5f815cb80-catalog-content\") pod \"community-operators-dh2c2\" (UID: \"3ef5d3a5-6c33-435e-b59b-a3f5f815cb80\") " pod="openshift-marketplace/community-operators-dh2c2"
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.098964 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ef5d3a5-6c33-435e-b59b-a3f5f815cb80-utilities\") pod \"community-operators-dh2c2\" (UID: \"3ef5d3a5-6c33-435e-b59b-a3f5f815cb80\") " pod="openshift-marketplace/community-operators-dh2c2"
Dec 09 12:08:54 crc kubenswrapper[4970]: E1209 12:08:54.098980 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:54.598966325 +0000 UTC m=+147.159447376 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.149313 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7q4dt"]
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.150768 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7q4dt"
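
The kubelet.go:2453 lines are PLEG (pod lifecycle event generator) events: the sync loop consumes {ID, Type, Data} tuples such as ContainerStarted and ContainerDied and re-syncs the affected pod. A minimal sketch of that event shape, assuming only what the log lines themselves show:

    package main

    import "fmt"

    type plegEvent struct {
        ID   string // pod UID
        Type string // "ContainerStarted" or "ContainerDied"
        Data string // container or sandbox ID
    }

    func handle(e plegEvent) {
        switch e.Type {
        case "ContainerStarted":
            fmt.Printf("SyncLoop (PLEG): pod %s started container %s\n", e.ID, e.Data)
        case "ContainerDied":
            fmt.Printf("SyncLoop (PLEG): pod %s finished container %s\n", e.ID, e.Data)
        }
    }

    func main() {
        handle(plegEvent{
            ID:   "96aedd83-c6c1-4b08-8d47-43cd63aaae68",
            Type: "ContainerDied",
            Data: "c040d809744b7f9c3234a3efcf961b4d89695083c88cb9d4b7b01c5085350dc4",
        })
    }
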
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.163880 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7q4dt"]
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.200021 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.200597 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ef5d3a5-6c33-435e-b59b-a3f5f815cb80-utilities\") pod \"community-operators-dh2c2\" (UID: \"3ef5d3a5-6c33-435e-b59b-a3f5f815cb80\") " pod="openshift-marketplace/community-operators-dh2c2"
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.201105 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fq64\" (UniqueName: \"kubernetes.io/projected/3ef5d3a5-6c33-435e-b59b-a3f5f815cb80-kube-api-access-5fq64\") pod \"community-operators-dh2c2\" (UID: \"3ef5d3a5-6c33-435e-b59b-a3f5f815cb80\") " pod="openshift-marketplace/community-operators-dh2c2"
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.201233 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ef5d3a5-6c33-435e-b59b-a3f5f815cb80-catalog-content\") pod \"community-operators-dh2c2\" (UID: \"3ef5d3a5-6c33-435e-b59b-a3f5f815cb80\") " pod="openshift-marketplace/community-operators-dh2c2"
Dec 09 12:08:54 crc kubenswrapper[4970]: E1209 12:08:54.202605 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:54.702586338 +0000 UTC m=+147.263067389 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.216943 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ef5d3a5-6c33-435e-b59b-a3f5f815cb80-utilities\") pod \"community-operators-dh2c2\" (UID: \"3ef5d3a5-6c33-435e-b59b-a3f5f815cb80\") " pod="openshift-marketplace/community-operators-dh2c2"
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.228479 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ef5d3a5-6c33-435e-b59b-a3f5f815cb80-catalog-content\") pod \"community-operators-dh2c2\" (UID: \"3ef5d3a5-6c33-435e-b59b-a3f5f815cb80\") " pod="openshift-marketplace/community-operators-dh2c2"
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.252075 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fq64\" (UniqueName: \"kubernetes.io/projected/3ef5d3a5-6c33-435e-b59b-a3f5f815cb80-kube-api-access-5fq64\") pod \"community-operators-dh2c2\" (UID: \"3ef5d3a5-6c33-435e-b59b-a3f5f815cb80\") " pod="openshift-marketplace/community-operators-dh2c2"
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.269635 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dh2c2"
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.303070 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f609a40-e848-4c4b-bb11-7dd03189cc76-catalog-content\") pod \"certified-operators-7q4dt\" (UID: \"4f609a40-e848-4c4b-bb11-7dd03189cc76\") " pod="openshift-marketplace/certified-operators-7q4dt"
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.303493 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f609a40-e848-4c4b-bb11-7dd03189cc76-utilities\") pod \"certified-operators-7q4dt\" (UID: \"4f609a40-e848-4c4b-bb11-7dd03189cc76\") " pod="openshift-marketplace/certified-operators-7q4dt"
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.303582 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ssw4\" (UniqueName: \"kubernetes.io/projected/4f609a40-e848-4c4b-bb11-7dd03189cc76-kube-api-access-9ssw4\") pod \"certified-operators-7q4dt\" (UID: \"4f609a40-e848-4c4b-bb11-7dd03189cc76\") " pod="openshift-marketplace/certified-operators-7q4dt"
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.303625 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl"
Dec 09 12:08:54 crc kubenswrapper[4970]: E1209 12:08:54.306401 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 12:08:54.806383687 +0000 UTC m=+147.366864738 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-lv8jl" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
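
The "{volumeName:... podName:... nodeName:}" string in each nestedpendingoperations error is effectively a lock key: at most one operation may be in flight per key, which is why a failed mount parks the whole key until its retry deadline. A simplified guard illustrating the idea (names invented; no claim about the real implementation):

    package main

    import "fmt"

    type opKey struct{ volumeName, podName, nodeName string }

    type operations struct{ inFlight map[opKey]bool }

    // run refuses to start a second operation on the same key.
    func (o *operations) run(k opKey, op func() error) error {
        if o.inFlight[k] {
            return fmt.Errorf("operation for %+v already pending", k)
        }
        o.inFlight[k] = true
        defer delete(o.inFlight, k)
        return op()
    }

    func main() {
        ops := &operations{inFlight: map[opKey]bool{}}
        k := opKey{volumeName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8"}
        _ = ops.run(k, func() error { fmt.Println("MountVolume attempt"); return nil })
    }
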
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.405080 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.405292 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ssw4\" (UniqueName: \"kubernetes.io/projected/4f609a40-e848-4c4b-bb11-7dd03189cc76-kube-api-access-9ssw4\") pod \"certified-operators-7q4dt\" (UID: \"4f609a40-e848-4c4b-bb11-7dd03189cc76\") " pod="openshift-marketplace/certified-operators-7q4dt"
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.405343 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f609a40-e848-4c4b-bb11-7dd03189cc76-catalog-content\") pod \"certified-operators-7q4dt\" (UID: \"4f609a40-e848-4c4b-bb11-7dd03189cc76\") " pod="openshift-marketplace/certified-operators-7q4dt"
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.405390 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f609a40-e848-4c4b-bb11-7dd03189cc76-utilities\") pod \"certified-operators-7q4dt\" (UID: \"4f609a40-e848-4c4b-bb11-7dd03189cc76\") " pod="openshift-marketplace/certified-operators-7q4dt"
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.405775 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f609a40-e848-4c4b-bb11-7dd03189cc76-utilities\") pod \"certified-operators-7q4dt\" (UID: \"4f609a40-e848-4c4b-bb11-7dd03189cc76\") " pod="openshift-marketplace/certified-operators-7q4dt"
Dec 09 12:08:54 crc kubenswrapper[4970]: E1209 12:08:54.405849 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 12:08:54.905833588 +0000 UTC m=+147.466314639 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.406595 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f609a40-e848-4c4b-bb11-7dd03189cc76-catalog-content\") pod \"certified-operators-7q4dt\" (UID: \"4f609a40-e848-4c4b-bb11-7dd03189cc76\") " pod="openshift-marketplace/certified-operators-7q4dt"
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.436318 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ssw4\" (UniqueName: \"kubernetes.io/projected/4f609a40-e848-4c4b-bb11-7dd03189cc76-kube-api-access-9ssw4\") pod \"certified-operators-7q4dt\" (UID: \"4f609a40-e848-4c4b-bb11-7dd03189cc76\") " pod="openshift-marketplace/certified-operators-7q4dt"
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.445422 4970 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-09T12:08:53.684159141Z","Handler":null,"Name":""}
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.452468 4970 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.452509 4970 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.463923 4970 patch_prober.go:28] interesting pod/router-default-5444994796-zwlc4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 09 12:08:54 crc kubenswrapper[4970]: [-]has-synced failed: reason withheld
Dec 09 12:08:54 crc kubenswrapper[4970]: [+]process-running ok
Dec 09 12:08:54 crc kubenswrapper[4970]: healthz check failed
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.463973 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zwlc4" podUID="c115cc0e-4491-4b7b-83f3-349433ce3b08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.475792 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7q4dt"
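
Note the sequence above: RegisterPlugin fires for the socket discovered at 12:08:53.684, csi_plugin.go validates version 1.0.0, and kubevirt.io.hostpath-provisioner finally registers at 12:08:54.452, which is what unblocks the stuck PVC operations shortly after. Separately, the router probe bodies quoted here use the standard healthz format, one "[+]name ok" or "[-]name failed: ..." line per sub-check; a small parser for that format:

    package main

    import (
        "fmt"
        "strings"
    )

    // failedChecks returns the names of sub-checks reported with a [-] prefix.
    func failedChecks(body string) []string {
        var failed []string
        for _, line := range strings.Split(body, "\n") {
            if strings.HasPrefix(line, "[-]") {
                name := strings.TrimPrefix(line, "[-]")
                failed = append(failed, strings.SplitN(name, " ", 2)[0])
            }
        }
        return failed
    }

    func main() {
        body := "[-]backend-http failed: reason withheld\n[-]has-synced failed: reason withheld\n[+]process-running ok\nhealthz check failed"
        fmt.Println("failing checks:", failedChecks(body)) // [backend-http has-synced]
    }
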
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.475865 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qcxnq"]
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.504779 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vwchd"]
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.507194 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl"
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.509637 4970 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.509670 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl"
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.557764 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-lv8jl\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl"
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.592383 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dh2c2"]
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.608497 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.643063 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.746133 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7q4dt"]
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.804911 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl"
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.911932 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.911999 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.912031 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.912083 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.916372 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.917759 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.922009 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 12:08:54 crc kubenswrapper[4970]: I1209 12:08:54.925733 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.036618 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
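
The csi_attacher.go:380 line explains why MountDevice suddenly "succeeded": this driver does not advertise the CSI STAGE_UNSTAGE_VOLUME capability, so the staging step is a no-op and kubelet proceeds straight to SetUp against the globalmount path. A sketch of that gate, with an invented driver struct rather than the real CSI client types:

    package main

    import "fmt"

    type driver struct {
        name         string
        capabilities map[string]bool
    }

    // mountDevice skips NodeStageVolume when the capability is absent,
    // mirroring the "Skipping MountDevice..." line above.
    func mountDevice(d driver, volume string) {
        if !d.capabilities["STAGE_UNSTAGE_VOLUME"] {
            fmt.Println("attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...")
            return
        }
        fmt.Println("NodeStageVolume for", volume)
    }

    func main() {
        hpp := driver{name: "kubevirt.io.hostpath-provisioner", capabilities: map[string]bool{}}
        mountDevice(hpp, "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8")
    }
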
Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.045782 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rm2zp" event={"ID":"a71edd8a-18a6-4d93-a967-e8d0858a6220","Type":"ContainerStarted","Data":"f88f985d0a37c25e3373b226edd155bb55ced712e43e22b7deb730e133771194"}
Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.045826 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rm2zp" event={"ID":"a71edd8a-18a6-4d93-a967-e8d0858a6220","Type":"ContainerStarted","Data":"dea72d5a1d0ad49ded2cd6452797220ea584ddd90223522f659414794b770755"}
Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.049870 4970 generic.go:334] "Generic (PLEG): container finished" podID="0119c7a7-a4a5-4364-9682-6de2fcd6f02f" containerID="0682b6a20f75fa295e780a825768342d96c8e162de745c4c0212625a7b72e78f" exitCode=0
Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.049945 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vwchd" event={"ID":"0119c7a7-a4a5-4364-9682-6de2fcd6f02f","Type":"ContainerDied","Data":"0682b6a20f75fa295e780a825768342d96c8e162de745c4c0212625a7b72e78f"}
Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.049989 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vwchd" event={"ID":"0119c7a7-a4a5-4364-9682-6de2fcd6f02f","Type":"ContainerStarted","Data":"484e925ef2f22bb1f1f6617924dcc24d3294cb465c285f96a97915213b959392"}
Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.051944 4970 generic.go:334] "Generic (PLEG): container finished" podID="4f609a40-e848-4c4b-bb11-7dd03189cc76" containerID="bcaae4cf82eff447213201df58b97372a151fdae245d10b1bd8c8784572b7b81" exitCode=0
Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.051988 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7q4dt" event={"ID":"4f609a40-e848-4c4b-bb11-7dd03189cc76","Type":"ContainerDied","Data":"bcaae4cf82eff447213201df58b97372a151fdae245d10b1bd8c8784572b7b81"}
Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.052049 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7q4dt" event={"ID":"4f609a40-e848-4c4b-bb11-7dd03189cc76","Type":"ContainerStarted","Data":"bfd0bac91877a313f467e67395a3deb4e9fe2811753ad22fc9f88244a5aff762"}
Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.052915 4970 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.056581 4970 generic.go:334] "Generic (PLEG): container finished" podID="3ef5d3a5-6c33-435e-b59b-a3f5f815cb80" containerID="fc81b36d2c62bf8099932172c73949881ec3610a96abe78fe64cb7b6de05f293" exitCode=0
Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.056665 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dh2c2" event={"ID":"3ef5d3a5-6c33-435e-b59b-a3f5f815cb80","Type":"ContainerDied","Data":"fc81b36d2c62bf8099932172c73949881ec3610a96abe78fe64cb7b6de05f293"}
Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.056695 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dh2c2" event={"ID":"3ef5d3a5-6c33-435e-b59b-a3f5f815cb80","Type":"ContainerStarted","Data":"aef7ce9f88dba304002c35510d7921607a2d743b1d8b6e12761217468e96a8de"}
Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.063328 4970 generic.go:334] "Generic (PLEG): container finished" podID="343921a4-b33b-485d-b5ac-84d3d87b6ed7" containerID="7bcc05d1948af17c06f514216866a769889f6c65c887c632daeb8c3dc6cf2769" exitCode=0
Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.064393 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qcxnq" event={"ID":"343921a4-b33b-485d-b5ac-84d3d87b6ed7","Type":"ContainerDied","Data":"7bcc05d1948af17c06f514216866a769889f6c65c887c632daeb8c3dc6cf2769"}
Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.064426 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qcxnq" event={"ID":"343921a4-b33b-485d-b5ac-84d3d87b6ed7","Type":"ContainerStarted","Data":"154491ea8e785a8587354fc33cb1ae78c8b78c9cd15746a56bf8ecb7d60228d8"}
Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.070496 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-rm2zp" podStartSLOduration=12.070475187 podStartE2EDuration="12.070475187s" podCreationTimestamp="2025-12-09 12:08:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:55.070009962 +0000 UTC m=+147.630491023" watchObservedRunningTime="2025-12-09 12:08:55.070475187 +0000 UTC m=+147.630956238"
Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.125812 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.133832 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.263668 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lv8jl"]
Dec 09 12:08:55 crc kubenswrapper[4970]: W1209 12:08:55.332368 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod051b0190_5f8c_42e4_af6c_8a5dd401ef52.slice/crio-e182004abffec9e64507483cdcb7eab5f6d950ddf2979a705a4fd28a705b1980 WatchSource:0}: Error finding container e182004abffec9e64507483cdcb7eab5f6d950ddf2979a705a4fd28a705b1980: Status 404 returned error can't find the container with id e182004abffec9e64507483cdcb7eab5f6d950ddf2979a705a4fd28a705b1980
Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.459460 4970 patch_prober.go:28] interesting pod/router-default-5444994796-zwlc4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 09 12:08:55 crc kubenswrapper[4970]: [-]has-synced failed: reason withheld
Dec 09 12:08:55 crc kubenswrapper[4970]: [+]process-running ok
Dec 09 12:08:55 crc kubenswrapper[4970]: healthz check failed
Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.459754 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zwlc4" podUID="c115cc0e-4491-4b7b-83f3-349433ce3b08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 09 12:08:55 crc kubenswrapper[4970]: W1209 12:08:55.468902 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-06455a8937f0b03e444fd7fef50c2a509932ab8ef15360ae3b6fd958dc5ef8ff WatchSource:0}: Error finding container 06455a8937f0b03e444fd7fef50c2a509932ab8ef15360ae3b6fd958dc5ef8ff: Status 404 returned error can't find the container with id 06455a8937f0b03e444fd7fef50c2a509932ab8ef15360ae3b6fd958dc5ef8ff
Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.548342 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qz6sg"]
Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.549294 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qz6sg"
Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.554711 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.562769 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qz6sg"]
Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.622077 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75d4b427-7364-457e-b599-b36ff9459935-utilities\") pod \"redhat-marketplace-qz6sg\" (UID: \"75d4b427-7364-457e-b599-b36ff9459935\") " pod="openshift-marketplace/redhat-marketplace-qz6sg"
Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.622119 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p47k\" (UniqueName: \"kubernetes.io/projected/75d4b427-7364-457e-b599-b36ff9459935-kube-api-access-9p47k\") pod \"redhat-marketplace-qz6sg\" (UID: \"75d4b427-7364-457e-b599-b36ff9459935\") " pod="openshift-marketplace/redhat-marketplace-qz6sg"
Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.622185 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75d4b427-7364-457e-b599-b36ff9459935-catalog-content\") pod \"redhat-marketplace-qz6sg\" (UID: \"75d4b427-7364-457e-b599-b36ff9459935\") " pod="openshift-marketplace/redhat-marketplace-qz6sg"
Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.649939 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9vrft"
Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.650493 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9vrft"
Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.655456 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9vrft"
Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.724053 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75d4b427-7364-457e-b599-b36ff9459935-utilities\") pod \"redhat-marketplace-qz6sg\" (UID: \"75d4b427-7364-457e-b599-b36ff9459935\") " pod="openshift-marketplace/redhat-marketplace-qz6sg"
Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.724099 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p47k\" (UniqueName: \"kubernetes.io/projected/75d4b427-7364-457e-b599-b36ff9459935-kube-api-access-9p47k\") pod \"redhat-marketplace-qz6sg\" (UID: \"75d4b427-7364-457e-b599-b36ff9459935\") " pod="openshift-marketplace/redhat-marketplace-qz6sg"
Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.724169 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75d4b427-7364-457e-b599-b36ff9459935-catalog-content\") pod \"redhat-marketplace-qz6sg\" (UID: \"75d4b427-7364-457e-b599-b36ff9459935\") " pod="openshift-marketplace/redhat-marketplace-qz6sg"
Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.724873 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75d4b427-7364-457e-b599-b36ff9459935-catalog-content\") pod \"redhat-marketplace-qz6sg\" (UID: \"75d4b427-7364-457e-b599-b36ff9459935\") " pod="openshift-marketplace/redhat-marketplace-qz6sg"
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75d4b427-7364-457e-b599-b36ff9459935-catalog-content\") pod \"redhat-marketplace-qz6sg\" (UID: \"75d4b427-7364-457e-b599-b36ff9459935\") " pod="openshift-marketplace/redhat-marketplace-qz6sg" Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.725572 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75d4b427-7364-457e-b599-b36ff9459935-utilities\") pod \"redhat-marketplace-qz6sg\" (UID: \"75d4b427-7364-457e-b599-b36ff9459935\") " pod="openshift-marketplace/redhat-marketplace-qz6sg" Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.749845 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p47k\" (UniqueName: \"kubernetes.io/projected/75d4b427-7364-457e-b599-b36ff9459935-kube-api-access-9p47k\") pod \"redhat-marketplace-qz6sg\" (UID: \"75d4b427-7364-457e-b599-b36ff9459935\") " pod="openshift-marketplace/redhat-marketplace-qz6sg" Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.805192 4970 patch_prober.go:28] interesting pod/downloads-7954f5f757-frbcj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.805561 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-frbcj" podUID="a96798ca-3ac0-4957-887c-763502663335" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.805204 4970 patch_prober.go:28] interesting pod/downloads-7954f5f757-frbcj container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" start-of-body= Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.805713 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-frbcj" podUID="a96798ca-3ac0-4957-887c-763502663335" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.20:8080/\": dial tcp 10.217.0.20:8080: connect: connection refused" Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.824162 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.883955 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qz6sg" Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.898365 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.898434 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.916962 4970 patch_prober.go:28] interesting pod/apiserver-76f77b778f-g2x6q container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 09 12:08:55 crc kubenswrapper[4970]: [+]log ok Dec 09 12:08:55 crc kubenswrapper[4970]: [+]etcd ok Dec 09 12:08:55 crc kubenswrapper[4970]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 09 12:08:55 crc kubenswrapper[4970]: [+]poststarthook/generic-apiserver-start-informers ok Dec 09 12:08:55 crc kubenswrapper[4970]: [+]poststarthook/max-in-flight-filter ok Dec 09 12:08:55 crc kubenswrapper[4970]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 09 12:08:55 crc kubenswrapper[4970]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 09 12:08:55 crc kubenswrapper[4970]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 09 12:08:55 crc kubenswrapper[4970]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Dec 09 12:08:55 crc kubenswrapper[4970]: [+]poststarthook/project.openshift.io-projectcache ok Dec 09 12:08:55 crc kubenswrapper[4970]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 09 12:08:55 crc kubenswrapper[4970]: [+]poststarthook/openshift.io-startinformers ok Dec 09 12:08:55 crc kubenswrapper[4970]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 09 12:08:55 crc kubenswrapper[4970]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 09 12:08:55 crc kubenswrapper[4970]: livez check failed Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.917009 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" podUID="2a0f2c3d-f2dc-4414-8b20-da75dd6e50ca" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.954364 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gx7kd"] Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.955583 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gx7kd" Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.970611 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gx7kd"] Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.993482 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 09 12:08:55 crc kubenswrapper[4970]: I1209 12:08:55.994110 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:55.999137 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.000849 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.002200 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.028300 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ff193ec-f8bf-47b9-98ce-221e7a77561e-catalog-content\") pod \"redhat-marketplace-gx7kd\" (UID: \"7ff193ec-f8bf-47b9-98ce-221e7a77561e\") " pod="openshift-marketplace/redhat-marketplace-gx7kd" Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.028423 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfkv8\" (UniqueName: \"kubernetes.io/projected/7ff193ec-f8bf-47b9-98ce-221e7a77561e-kube-api-access-gfkv8\") pod \"redhat-marketplace-gx7kd\" (UID: \"7ff193ec-f8bf-47b9-98ce-221e7a77561e\") " pod="openshift-marketplace/redhat-marketplace-gx7kd" Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.028471 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ff193ec-f8bf-47b9-98ce-221e7a77561e-utilities\") pod \"redhat-marketplace-gx7kd\" (UID: \"7ff193ec-f8bf-47b9-98ce-221e7a77561e\") " pod="openshift-marketplace/redhat-marketplace-gx7kd" Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.073696 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" event={"ID":"051b0190-5f8c-42e4-af6c-8a5dd401ef52","Type":"ContainerStarted","Data":"f2c499dd3e6e848e67469773486dfac234fc9267d4aee51506c04e5f807e0272"} Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.073751 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" event={"ID":"051b0190-5f8c-42e4-af6c-8a5dd401ef52","Type":"ContainerStarted","Data":"e182004abffec9e64507483cdcb7eab5f6d950ddf2979a705a4fd28a705b1980"} Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.074267 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.079290 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"9d12c90de397cf09f3352018f9c5c4291caff576f2f108bdb20c9af012efe0a0"} Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.079318 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"06455a8937f0b03e444fd7fef50c2a509932ab8ef15360ae3b6fd958dc5ef8ff"} Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.091897 4970 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"c7e7d065b8b48d0269730ea8bb7b0a3dd519f2809c23ebd11cde921233af8a11"} Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.091934 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b1844de7b2cc48a13c4859924ae011cbcb9a8198c6e470098dbdd80acae5ad40"} Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.094365 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" podStartSLOduration=129.094353777 podStartE2EDuration="2m9.094353777s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:56.091759915 +0000 UTC m=+148.652240966" watchObservedRunningTime="2025-12-09 12:08:56.094353777 +0000 UTC m=+148.654834828" Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.096104 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.098688 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"0301536773f2f7e7d9b7dabe7b69017ea94cb66de00ac19575d2a4e1656cf178"} Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.098727 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c8eea1cde4c017bd7b481a0aab58091d0ebbd6d5fd3ebdc7b1e11e7258b15edc"} Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.104995 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-9vrft" Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.130743 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ff193ec-f8bf-47b9-98ce-221e7a77561e-catalog-content\") pod \"redhat-marketplace-gx7kd\" (UID: \"7ff193ec-f8bf-47b9-98ce-221e7a77561e\") " pod="openshift-marketplace/redhat-marketplace-gx7kd" Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.130801 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/68174681-e84a-4f59-9ebf-fba682ae1123-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"68174681-e84a-4f59-9ebf-fba682ae1123\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.130832 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/68174681-e84a-4f59-9ebf-fba682ae1123-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"68174681-e84a-4f59-9ebf-fba682ae1123\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.130895 4970 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gfkv8\" (UniqueName: \"kubernetes.io/projected/7ff193ec-f8bf-47b9-98ce-221e7a77561e-kube-api-access-gfkv8\") pod \"redhat-marketplace-gx7kd\" (UID: \"7ff193ec-f8bf-47b9-98ce-221e7a77561e\") " pod="openshift-marketplace/redhat-marketplace-gx7kd" Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.130931 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ff193ec-f8bf-47b9-98ce-221e7a77561e-utilities\") pod \"redhat-marketplace-gx7kd\" (UID: \"7ff193ec-f8bf-47b9-98ce-221e7a77561e\") " pod="openshift-marketplace/redhat-marketplace-gx7kd" Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.131793 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ff193ec-f8bf-47b9-98ce-221e7a77561e-catalog-content\") pod \"redhat-marketplace-gx7kd\" (UID: \"7ff193ec-f8bf-47b9-98ce-221e7a77561e\") " pod="openshift-marketplace/redhat-marketplace-gx7kd" Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.132856 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ff193ec-f8bf-47b9-98ce-221e7a77561e-utilities\") pod \"redhat-marketplace-gx7kd\" (UID: \"7ff193ec-f8bf-47b9-98ce-221e7a77561e\") " pod="openshift-marketplace/redhat-marketplace-gx7kd" Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.176599 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfkv8\" (UniqueName: \"kubernetes.io/projected/7ff193ec-f8bf-47b9-98ce-221e7a77561e-kube-api-access-gfkv8\") pod \"redhat-marketplace-gx7kd\" (UID: \"7ff193ec-f8bf-47b9-98ce-221e7a77561e\") " pod="openshift-marketplace/redhat-marketplace-gx7kd" Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.237911 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-wqfw9" Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.238347 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-wqfw9" Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.239454 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/68174681-e84a-4f59-9ebf-fba682ae1123-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"68174681-e84a-4f59-9ebf-fba682ae1123\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.239509 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/68174681-e84a-4f59-9ebf-fba682ae1123-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"68174681-e84a-4f59-9ebf-fba682ae1123\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.241632 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/68174681-e84a-4f59-9ebf-fba682ae1123-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"68174681-e84a-4f59-9ebf-fba682ae1123\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.243751 4970 patch_prober.go:28] interesting pod/console-f9d7485db-wqfw9 container/console 
namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.37:8443/health\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.243804 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-wqfw9" podUID="2537d42d-31de-48b8-ae7a-afaba0d36376" containerName="console" probeResult="failure" output="Get \"https://10.217.0.37:8443/health\": dial tcp 10.217.0.37:8443: connect: connection refused" Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.280566 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gx7kd" Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.281826 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/68174681-e84a-4f59-9ebf-fba682ae1123-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"68174681-e84a-4f59-9ebf-fba682ae1123\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.320596 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.454276 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-zwlc4" Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.456198 4970 patch_prober.go:28] interesting pod/router-default-5444994796-zwlc4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 12:08:56 crc kubenswrapper[4970]: [-]has-synced failed: reason withheld Dec 09 12:08:56 crc kubenswrapper[4970]: [+]process-running ok Dec 09 12:08:56 crc kubenswrapper[4970]: healthz check failed Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.456344 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zwlc4" podUID="c115cc0e-4491-4b7b-83f3-349433ce3b08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.523975 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qz6sg"] Dec 09 12:08:56 crc kubenswrapper[4970]: W1209 12:08:56.546577 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75d4b427_7364_457e_b599_b36ff9459935.slice/crio-c2f165c449c0bfaa77cd754d9d8693d837962e98f21ffb05f64b79f596ef67ed WatchSource:0}: Error finding container c2f165c449c0bfaa77cd754d9d8693d837962e98f21ffb05f64b79f596ef67ed: Status 404 returned error can't find the container with id c2f165c449c0bfaa77cd754d9d8693d837962e98f21ffb05f64b79f596ef67ed Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.757564 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dlxn4"] Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.759108 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dlxn4" Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.763086 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.802013 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dlxn4"] Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.802886 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.852919 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16-utilities\") pod \"redhat-operators-dlxn4\" (UID: \"9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16\") " pod="openshift-marketplace/redhat-operators-dlxn4" Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.852999 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16-catalog-content\") pod \"redhat-operators-dlxn4\" (UID: \"9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16\") " pod="openshift-marketplace/redhat-operators-dlxn4" Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.853045 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjztv\" (UniqueName: \"kubernetes.io/projected/9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16-kube-api-access-kjztv\") pod \"redhat-operators-dlxn4\" (UID: \"9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16\") " pod="openshift-marketplace/redhat-operators-dlxn4" Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.931948 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gx7kd"] Dec 09 12:08:56 crc kubenswrapper[4970]: W1209 12:08:56.950428 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ff193ec_f8bf_47b9_98ce_221e7a77561e.slice/crio-6bd8424c222e8de3a2ef9ea84e0e8adf33ad0ca9c03d498a20b117f30eab17b2 WatchSource:0}: Error finding container 6bd8424c222e8de3a2ef9ea84e0e8adf33ad0ca9c03d498a20b117f30eab17b2: Status 404 returned error can't find the container with id 6bd8424c222e8de3a2ef9ea84e0e8adf33ad0ca9c03d498a20b117f30eab17b2 Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.954275 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjztv\" (UniqueName: \"kubernetes.io/projected/9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16-kube-api-access-kjztv\") pod \"redhat-operators-dlxn4\" (UID: \"9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16\") " pod="openshift-marketplace/redhat-operators-dlxn4" Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.954370 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16-utilities\") pod \"redhat-operators-dlxn4\" (UID: \"9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16\") " pod="openshift-marketplace/redhat-operators-dlxn4" Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.954481 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16-catalog-content\") pod \"redhat-operators-dlxn4\" (UID: \"9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16\") " pod="openshift-marketplace/redhat-operators-dlxn4" Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.955598 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16-utilities\") pod \"redhat-operators-dlxn4\" (UID: \"9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16\") " pod="openshift-marketplace/redhat-operators-dlxn4" Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.956025 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16-catalog-content\") pod \"redhat-operators-dlxn4\" (UID: \"9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16\") " pod="openshift-marketplace/redhat-operators-dlxn4" Dec 09 12:08:56 crc kubenswrapper[4970]: I1209 12:08:56.982880 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjztv\" (UniqueName: \"kubernetes.io/projected/9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16-kube-api-access-kjztv\") pod \"redhat-operators-dlxn4\" (UID: \"9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16\") " pod="openshift-marketplace/redhat-operators-dlxn4" Dec 09 12:08:57 crc kubenswrapper[4970]: I1209 12:08:57.083944 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dlxn4" Dec 09 12:08:57 crc kubenswrapper[4970]: I1209 12:08:57.118190 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"68174681-e84a-4f59-9ebf-fba682ae1123","Type":"ContainerStarted","Data":"4783512d24a12943e69b8dd575eefc2a14520a77ef18e6c2db8e6cb610928ad4"} Dec 09 12:08:57 crc kubenswrapper[4970]: I1209 12:08:57.121015 4970 generic.go:334] "Generic (PLEG): container finished" podID="75d4b427-7364-457e-b599-b36ff9459935" containerID="03f2c648fc44069af27a8ca2317a2460b6a335b5c0b681828a5b838b9eaaeb59" exitCode=0 Dec 09 12:08:57 crc kubenswrapper[4970]: I1209 12:08:57.121069 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qz6sg" event={"ID":"75d4b427-7364-457e-b599-b36ff9459935","Type":"ContainerDied","Data":"03f2c648fc44069af27a8ca2317a2460b6a335b5c0b681828a5b838b9eaaeb59"} Dec 09 12:08:57 crc kubenswrapper[4970]: I1209 12:08:57.121087 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qz6sg" event={"ID":"75d4b427-7364-457e-b599-b36ff9459935","Type":"ContainerStarted","Data":"c2f165c449c0bfaa77cd754d9d8693d837962e98f21ffb05f64b79f596ef67ed"} Dec 09 12:08:57 crc kubenswrapper[4970]: I1209 12:08:57.123479 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gx7kd" event={"ID":"7ff193ec-f8bf-47b9-98ce-221e7a77561e","Type":"ContainerStarted","Data":"6bd8424c222e8de3a2ef9ea84e0e8adf33ad0ca9c03d498a20b117f30eab17b2"} Dec 09 12:08:57 crc kubenswrapper[4970]: I1209 12:08:57.154140 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2vppr"] Dec 09 12:08:57 crc kubenswrapper[4970]: I1209 12:08:57.155507 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2vppr" Dec 09 12:08:57 crc kubenswrapper[4970]: I1209 12:08:57.159466 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2vppr"] Dec 09 12:08:57 crc kubenswrapper[4970]: I1209 12:08:57.258639 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f56c523-468e-4195-b60a-9e307910e2cf-catalog-content\") pod \"redhat-operators-2vppr\" (UID: \"2f56c523-468e-4195-b60a-9e307910e2cf\") " pod="openshift-marketplace/redhat-operators-2vppr" Dec 09 12:08:57 crc kubenswrapper[4970]: I1209 12:08:57.260765 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm9dt\" (UniqueName: \"kubernetes.io/projected/2f56c523-468e-4195-b60a-9e307910e2cf-kube-api-access-cm9dt\") pod \"redhat-operators-2vppr\" (UID: \"2f56c523-468e-4195-b60a-9e307910e2cf\") " pod="openshift-marketplace/redhat-operators-2vppr" Dec 09 12:08:57 crc kubenswrapper[4970]: I1209 12:08:57.261497 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f56c523-468e-4195-b60a-9e307910e2cf-utilities\") pod \"redhat-operators-2vppr\" (UID: \"2f56c523-468e-4195-b60a-9e307910e2cf\") " pod="openshift-marketplace/redhat-operators-2vppr" Dec 09 12:08:57 crc kubenswrapper[4970]: I1209 12:08:57.304544 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dlxn4"] Dec 09 12:08:57 crc kubenswrapper[4970]: W1209 12:08:57.332830 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9eb07bf9_8f0a_48fe_b71b_dba75fd4fe16.slice/crio-990167994ddaf4d99ff0c38796c7bc2d3649b341f928ba16d90a66b794069499 WatchSource:0}: Error finding container 990167994ddaf4d99ff0c38796c7bc2d3649b341f928ba16d90a66b794069499: Status 404 returned error can't find the container with id 990167994ddaf4d99ff0c38796c7bc2d3649b341f928ba16d90a66b794069499 Dec 09 12:08:57 crc kubenswrapper[4970]: I1209 12:08:57.363397 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm9dt\" (UniqueName: \"kubernetes.io/projected/2f56c523-468e-4195-b60a-9e307910e2cf-kube-api-access-cm9dt\") pod \"redhat-operators-2vppr\" (UID: \"2f56c523-468e-4195-b60a-9e307910e2cf\") " pod="openshift-marketplace/redhat-operators-2vppr" Dec 09 12:08:57 crc kubenswrapper[4970]: I1209 12:08:57.363459 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f56c523-468e-4195-b60a-9e307910e2cf-utilities\") pod \"redhat-operators-2vppr\" (UID: \"2f56c523-468e-4195-b60a-9e307910e2cf\") " pod="openshift-marketplace/redhat-operators-2vppr" Dec 09 12:08:57 crc kubenswrapper[4970]: I1209 12:08:57.363564 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f56c523-468e-4195-b60a-9e307910e2cf-catalog-content\") pod \"redhat-operators-2vppr\" (UID: \"2f56c523-468e-4195-b60a-9e307910e2cf\") " pod="openshift-marketplace/redhat-operators-2vppr" Dec 09 12:08:57 crc kubenswrapper[4970]: I1209 12:08:57.364301 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/2f56c523-468e-4195-b60a-9e307910e2cf-catalog-content\") pod \"redhat-operators-2vppr\" (UID: \"2f56c523-468e-4195-b60a-9e307910e2cf\") " pod="openshift-marketplace/redhat-operators-2vppr" Dec 09 12:08:57 crc kubenswrapper[4970]: I1209 12:08:57.364915 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f56c523-468e-4195-b60a-9e307910e2cf-utilities\") pod \"redhat-operators-2vppr\" (UID: \"2f56c523-468e-4195-b60a-9e307910e2cf\") " pod="openshift-marketplace/redhat-operators-2vppr" Dec 09 12:08:57 crc kubenswrapper[4970]: I1209 12:08:57.384578 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm9dt\" (UniqueName: \"kubernetes.io/projected/2f56c523-468e-4195-b60a-9e307910e2cf-kube-api-access-cm9dt\") pod \"redhat-operators-2vppr\" (UID: \"2f56c523-468e-4195-b60a-9e307910e2cf\") " pod="openshift-marketplace/redhat-operators-2vppr" Dec 09 12:08:57 crc kubenswrapper[4970]: I1209 12:08:57.463466 4970 patch_prober.go:28] interesting pod/router-default-5444994796-zwlc4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 12:08:57 crc kubenswrapper[4970]: [-]has-synced failed: reason withheld Dec 09 12:08:57 crc kubenswrapper[4970]: [+]process-running ok Dec 09 12:08:57 crc kubenswrapper[4970]: healthz check failed Dec 09 12:08:57 crc kubenswrapper[4970]: I1209 12:08:57.463919 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zwlc4" podUID="c115cc0e-4491-4b7b-83f3-349433ce3b08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 12:08:57 crc kubenswrapper[4970]: I1209 12:08:57.476515 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2vppr" Dec 09 12:08:57 crc kubenswrapper[4970]: I1209 12:08:57.757909 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2vppr"] Dec 09 12:08:57 crc kubenswrapper[4970]: W1209 12:08:57.765065 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f56c523_468e_4195_b60a_9e307910e2cf.slice/crio-c5387dc608974106c5dbd37a50a3603be050ca74900065906f99e1f2c2632703 WatchSource:0}: Error finding container c5387dc608974106c5dbd37a50a3603be050ca74900065906f99e1f2c2632703: Status 404 returned error can't find the container with id c5387dc608974106c5dbd37a50a3603be050ca74900065906f99e1f2c2632703 Dec 09 12:08:58 crc kubenswrapper[4970]: I1209 12:08:58.144061 4970 generic.go:334] "Generic (PLEG): container finished" podID="2f56c523-468e-4195-b60a-9e307910e2cf" containerID="3b22c7a82ee9119da364d40b019156ddff507c519fea73d9c49bc47b566bf566" exitCode=0 Dec 09 12:08:58 crc kubenswrapper[4970]: I1209 12:08:58.144171 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vppr" event={"ID":"2f56c523-468e-4195-b60a-9e307910e2cf","Type":"ContainerDied","Data":"3b22c7a82ee9119da364d40b019156ddff507c519fea73d9c49bc47b566bf566"} Dec 09 12:08:58 crc kubenswrapper[4970]: I1209 12:08:58.144337 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vppr" event={"ID":"2f56c523-468e-4195-b60a-9e307910e2cf","Type":"ContainerStarted","Data":"c5387dc608974106c5dbd37a50a3603be050ca74900065906f99e1f2c2632703"} Dec 09 12:08:58 crc kubenswrapper[4970]: I1209 12:08:58.156953 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"68174681-e84a-4f59-9ebf-fba682ae1123","Type":"ContainerStarted","Data":"fcc859690008184f9473fadec97c0a5f42bbdcde2223ad4e6d2b0b5ae328e6be"} Dec 09 12:08:58 crc kubenswrapper[4970]: I1209 12:08:58.164311 4970 generic.go:334] "Generic (PLEG): container finished" podID="9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16" containerID="e444cd8ab79c9d4995fa76bf468e7ea476d054d19d632eb825430063d9b7bac5" exitCode=0 Dec 09 12:08:58 crc kubenswrapper[4970]: I1209 12:08:58.164384 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dlxn4" event={"ID":"9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16","Type":"ContainerDied","Data":"e444cd8ab79c9d4995fa76bf468e7ea476d054d19d632eb825430063d9b7bac5"} Dec 09 12:08:58 crc kubenswrapper[4970]: I1209 12:08:58.164415 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dlxn4" event={"ID":"9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16","Type":"ContainerStarted","Data":"990167994ddaf4d99ff0c38796c7bc2d3649b341f928ba16d90a66b794069499"} Dec 09 12:08:58 crc kubenswrapper[4970]: I1209 12:08:58.177220 4970 generic.go:334] "Generic (PLEG): container finished" podID="7ff193ec-f8bf-47b9-98ce-221e7a77561e" containerID="5057de7d55d2706679cf0bcc044822c4d868f250c42994b6b86c68f4d2fcc688" exitCode=0 Dec 09 12:08:58 crc kubenswrapper[4970]: I1209 12:08:58.177537 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gx7kd" event={"ID":"7ff193ec-f8bf-47b9-98ce-221e7a77561e","Type":"ContainerDied","Data":"5057de7d55d2706679cf0bcc044822c4d868f250c42994b6b86c68f4d2fcc688"} Dec 09 12:08:58 crc kubenswrapper[4970]: I1209 12:08:58.187190 4970 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.187098584 podStartE2EDuration="3.187098584s" podCreationTimestamp="2025-12-09 12:08:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:08:58.181152965 +0000 UTC m=+150.741634016" watchObservedRunningTime="2025-12-09 12:08:58.187098584 +0000 UTC m=+150.747579645" Dec 09 12:08:58 crc kubenswrapper[4970]: I1209 12:08:58.458141 4970 patch_prober.go:28] interesting pod/router-default-5444994796-zwlc4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 12:08:58 crc kubenswrapper[4970]: [-]has-synced failed: reason withheld Dec 09 12:08:58 crc kubenswrapper[4970]: [+]process-running ok Dec 09 12:08:58 crc kubenswrapper[4970]: healthz check failed Dec 09 12:08:58 crc kubenswrapper[4970]: I1209 12:08:58.458403 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zwlc4" podUID="c115cc0e-4491-4b7b-83f3-349433ce3b08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 12:08:59 crc kubenswrapper[4970]: I1209 12:08:59.193381 4970 generic.go:334] "Generic (PLEG): container finished" podID="68174681-e84a-4f59-9ebf-fba682ae1123" containerID="fcc859690008184f9473fadec97c0a5f42bbdcde2223ad4e6d2b0b5ae328e6be" exitCode=0 Dec 09 12:08:59 crc kubenswrapper[4970]: I1209 12:08:59.193456 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"68174681-e84a-4f59-9ebf-fba682ae1123","Type":"ContainerDied","Data":"fcc859690008184f9473fadec97c0a5f42bbdcde2223ad4e6d2b0b5ae328e6be"} Dec 09 12:08:59 crc kubenswrapper[4970]: I1209 12:08:59.455478 4970 patch_prober.go:28] interesting pod/router-default-5444994796-zwlc4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 12:08:59 crc kubenswrapper[4970]: [-]has-synced failed: reason withheld Dec 09 12:08:59 crc kubenswrapper[4970]: [+]process-running ok Dec 09 12:08:59 crc kubenswrapper[4970]: healthz check failed Dec 09 12:08:59 crc kubenswrapper[4970]: I1209 12:08:59.455536 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zwlc4" podUID="c115cc0e-4491-4b7b-83f3-349433ce3b08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 12:09:00 crc kubenswrapper[4970]: I1209 12:09:00.266819 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 09 12:09:00 crc kubenswrapper[4970]: I1209 12:09:00.267661 4970 util.go:30] "No sandbox for pod can be found. 
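[Editor's note: the pod_startup_latency_tracker entries above record per-pod startup durations (for example podStartSLOduration=129.09s for the image registry versus 3.19s for revision-pruner-9-crc). A hedged sketch for tabulating them from the same hypothetical journal.log, slowest first:]

```python
import re

# Matches the two fields visible in the "Observed pod startup duration" entries.
LAT = re.compile(r'"Observed pod startup duration" pod="(?P<pod>[^"]+)" podStartSLOduration=(?P<slo>[0-9.]+)')

durations = {}
with open("journal.log") as fh:  # hypothetical dump of this journal
    for line in fh:
        if m := LAT.search(line):
            durations[m["pod"]] = float(m["slo"])

for pod, seconds in sorted(durations.items(), key=lambda kv: -kv[1]):
    print(f"{seconds:10.3f}s  {pod}")
```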
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 12:09:00 crc kubenswrapper[4970]: I1209 12:09:00.270738 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 09 12:09:00 crc kubenswrapper[4970]: I1209 12:09:00.272047 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 09 12:09:00 crc kubenswrapper[4970]: I1209 12:09:00.275108 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 09 12:09:00 crc kubenswrapper[4970]: I1209 12:09:00.314334 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d5242d0-8bce-4843-8a06-4aadb2e66d8a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2d5242d0-8bce-4843-8a06-4aadb2e66d8a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 12:09:00 crc kubenswrapper[4970]: I1209 12:09:00.314498 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d5242d0-8bce-4843-8a06-4aadb2e66d8a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"2d5242d0-8bce-4843-8a06-4aadb2e66d8a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 12:09:00 crc kubenswrapper[4970]: I1209 12:09:00.415824 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d5242d0-8bce-4843-8a06-4aadb2e66d8a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"2d5242d0-8bce-4843-8a06-4aadb2e66d8a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 12:09:00 crc kubenswrapper[4970]: I1209 12:09:00.415977 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d5242d0-8bce-4843-8a06-4aadb2e66d8a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2d5242d0-8bce-4843-8a06-4aadb2e66d8a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 12:09:00 crc kubenswrapper[4970]: I1209 12:09:00.416470 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d5242d0-8bce-4843-8a06-4aadb2e66d8a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"2d5242d0-8bce-4843-8a06-4aadb2e66d8a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 12:09:00 crc kubenswrapper[4970]: I1209 12:09:00.448376 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d5242d0-8bce-4843-8a06-4aadb2e66d8a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2d5242d0-8bce-4843-8a06-4aadb2e66d8a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 12:09:00 crc kubenswrapper[4970]: I1209 12:09:00.456051 4970 patch_prober.go:28] interesting pod/router-default-5444994796-zwlc4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 12:09:00 crc kubenswrapper[4970]: [-]has-synced failed: reason withheld Dec 09 12:09:00 crc kubenswrapper[4970]: [+]process-running ok Dec 09 12:09:00 crc kubenswrapper[4970]: healthz check failed Dec 09 12:09:00 crc kubenswrapper[4970]: I1209 12:09:00.456118 4970 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zwlc4" podUID="c115cc0e-4491-4b7b-83f3-349433ce3b08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 12:09:00 crc kubenswrapper[4970]: I1209 12:09:00.465709 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 12:09:00 crc kubenswrapper[4970]: I1209 12:09:00.517432 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/68174681-e84a-4f59-9ebf-fba682ae1123-kube-api-access\") pod \"68174681-e84a-4f59-9ebf-fba682ae1123\" (UID: \"68174681-e84a-4f59-9ebf-fba682ae1123\") " Dec 09 12:09:00 crc kubenswrapper[4970]: I1209 12:09:00.517516 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/68174681-e84a-4f59-9ebf-fba682ae1123-kubelet-dir\") pod \"68174681-e84a-4f59-9ebf-fba682ae1123\" (UID: \"68174681-e84a-4f59-9ebf-fba682ae1123\") " Dec 09 12:09:00 crc kubenswrapper[4970]: I1209 12:09:00.517846 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/68174681-e84a-4f59-9ebf-fba682ae1123-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "68174681-e84a-4f59-9ebf-fba682ae1123" (UID: "68174681-e84a-4f59-9ebf-fba682ae1123"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:09:00 crc kubenswrapper[4970]: I1209 12:09:00.521525 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68174681-e84a-4f59-9ebf-fba682ae1123-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "68174681-e84a-4f59-9ebf-fba682ae1123" (UID: "68174681-e84a-4f59-9ebf-fba682ae1123"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:09:00 crc kubenswrapper[4970]: I1209 12:09:00.601742 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 12:09:00 crc kubenswrapper[4970]: I1209 12:09:00.619120 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/68174681-e84a-4f59-9ebf-fba682ae1123-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 12:09:00 crc kubenswrapper[4970]: I1209 12:09:00.619152 4970 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/68174681-e84a-4f59-9ebf-fba682ae1123-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 09 12:09:00 crc kubenswrapper[4970]: I1209 12:09:00.790563 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 09 12:09:00 crc kubenswrapper[4970]: W1209 12:09:00.801596 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod2d5242d0_8bce_4843_8a06_4aadb2e66d8a.slice/crio-7205af96aee92d5ea929d61a401fd7535d22ef36443e5ef4d498880291c110e4 WatchSource:0}: Error finding container 7205af96aee92d5ea929d61a401fd7535d22ef36443e5ef4d498880291c110e4: Status 404 returned error can't find the container with id 7205af96aee92d5ea929d61a401fd7535d22ef36443e5ef4d498880291c110e4 Dec 09 12:09:00 crc kubenswrapper[4970]: I1209 12:09:00.892619 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" Dec 09 12:09:00 crc kubenswrapper[4970]: I1209 12:09:00.897607 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-g2x6q" Dec 09 12:09:01 crc kubenswrapper[4970]: I1209 12:09:01.210491 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"68174681-e84a-4f59-9ebf-fba682ae1123","Type":"ContainerDied","Data":"4783512d24a12943e69b8dd575eefc2a14520a77ef18e6c2db8e6cb610928ad4"} Dec 09 12:09:01 crc kubenswrapper[4970]: I1209 12:09:01.210515 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 12:09:01 crc kubenswrapper[4970]: I1209 12:09:01.210534 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4783512d24a12943e69b8dd575eefc2a14520a77ef18e6c2db8e6cb610928ad4" Dec 09 12:09:01 crc kubenswrapper[4970]: I1209 12:09:01.211714 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2d5242d0-8bce-4843-8a06-4aadb2e66d8a","Type":"ContainerStarted","Data":"7205af96aee92d5ea929d61a401fd7535d22ef36443e5ef4d498880291c110e4"} Dec 09 12:09:01 crc kubenswrapper[4970]: I1209 12:09:01.455727 4970 patch_prober.go:28] interesting pod/router-default-5444994796-zwlc4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 12:09:01 crc kubenswrapper[4970]: [-]has-synced failed: reason withheld Dec 09 12:09:01 crc kubenswrapper[4970]: [+]process-running ok Dec 09 12:09:01 crc kubenswrapper[4970]: healthz check failed Dec 09 12:09:01 crc kubenswrapper[4970]: I1209 12:09:01.455789 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zwlc4" podUID="c115cc0e-4491-4b7b-83f3-349433ce3b08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 12:09:02 crc kubenswrapper[4970]: I1209 12:09:02.220210 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2d5242d0-8bce-4843-8a06-4aadb2e66d8a","Type":"ContainerStarted","Data":"cabf00aff65765ed50c3421d5b76f594d6813a8e0bc192c360fb0a82038b4eba"} Dec 09 12:09:02 crc kubenswrapper[4970]: I1209 12:09:02.457369 4970 patch_prober.go:28] interesting pod/router-default-5444994796-zwlc4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 12:09:02 crc kubenswrapper[4970]: [-]has-synced failed: reason withheld Dec 09 12:09:02 crc kubenswrapper[4970]: [+]process-running ok Dec 09 12:09:02 crc kubenswrapper[4970]: healthz check failed Dec 09 12:09:02 crc kubenswrapper[4970]: I1209 12:09:02.457423 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zwlc4" podUID="c115cc0e-4491-4b7b-83f3-349433ce3b08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 12:09:02 crc kubenswrapper[4970]: I1209 12:09:02.874446 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-xnzx7" Dec 09 12:09:02 crc kubenswrapper[4970]: I1209 12:09:02.890287 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.89027272 podStartE2EDuration="2.89027272s" podCreationTimestamp="2025-12-09 12:09:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:09:02.236507219 +0000 UTC m=+154.796988270" watchObservedRunningTime="2025-12-09 12:09:02.89027272 +0000 UTC m=+155.450753771" Dec 09 12:09:03 crc kubenswrapper[4970]: I1209 12:09:03.228048 4970 generic.go:334] "Generic (PLEG): container finished" podID="2d5242d0-8bce-4843-8a06-4aadb2e66d8a" 
containerID="cabf00aff65765ed50c3421d5b76f594d6813a8e0bc192c360fb0a82038b4eba" exitCode=0 Dec 09 12:09:03 crc kubenswrapper[4970]: I1209 12:09:03.228088 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2d5242d0-8bce-4843-8a06-4aadb2e66d8a","Type":"ContainerDied","Data":"cabf00aff65765ed50c3421d5b76f594d6813a8e0bc192c360fb0a82038b4eba"} Dec 09 12:09:03 crc kubenswrapper[4970]: I1209 12:09:03.454781 4970 patch_prober.go:28] interesting pod/router-default-5444994796-zwlc4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 12:09:03 crc kubenswrapper[4970]: [-]has-synced failed: reason withheld Dec 09 12:09:03 crc kubenswrapper[4970]: [+]process-running ok Dec 09 12:09:03 crc kubenswrapper[4970]: healthz check failed Dec 09 12:09:03 crc kubenswrapper[4970]: I1209 12:09:03.454857 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zwlc4" podUID="c115cc0e-4491-4b7b-83f3-349433ce3b08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 12:09:04 crc kubenswrapper[4970]: I1209 12:09:04.466029 4970 patch_prober.go:28] interesting pod/router-default-5444994796-zwlc4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 12:09:04 crc kubenswrapper[4970]: [-]has-synced failed: reason withheld Dec 09 12:09:04 crc kubenswrapper[4970]: [+]process-running ok Dec 09 12:09:04 crc kubenswrapper[4970]: healthz check failed Dec 09 12:09:04 crc kubenswrapper[4970]: I1209 12:09:04.466353 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zwlc4" podUID="c115cc0e-4491-4b7b-83f3-349433ce3b08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 12:09:05 crc kubenswrapper[4970]: I1209 12:09:05.455069 4970 patch_prober.go:28] interesting pod/router-default-5444994796-zwlc4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 12:09:05 crc kubenswrapper[4970]: [-]has-synced failed: reason withheld Dec 09 12:09:05 crc kubenswrapper[4970]: [+]process-running ok Dec 09 12:09:05 crc kubenswrapper[4970]: healthz check failed Dec 09 12:09:05 crc kubenswrapper[4970]: I1209 12:09:05.455433 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zwlc4" podUID="c115cc0e-4491-4b7b-83f3-349433ce3b08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 12:09:05 crc kubenswrapper[4970]: I1209 12:09:05.827030 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-frbcj" Dec 09 12:09:06 crc kubenswrapper[4970]: I1209 12:09:06.236338 4970 patch_prober.go:28] interesting pod/console-f9d7485db-wqfw9 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.37:8443/health\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Dec 09 12:09:06 crc kubenswrapper[4970]: I1209 12:09:06.236399 4970 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-console/console-f9d7485db-wqfw9" podUID="2537d42d-31de-48b8-ae7a-afaba0d36376" containerName="console" probeResult="failure" output="Get \"https://10.217.0.37:8443/health\": dial tcp 10.217.0.37:8443: connect: connection refused" Dec 09 12:09:06 crc kubenswrapper[4970]: I1209 12:09:06.455789 4970 patch_prober.go:28] interesting pod/router-default-5444994796-zwlc4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 12:09:06 crc kubenswrapper[4970]: [-]has-synced failed: reason withheld Dec 09 12:09:06 crc kubenswrapper[4970]: [+]process-running ok Dec 09 12:09:06 crc kubenswrapper[4970]: healthz check failed Dec 09 12:09:06 crc kubenswrapper[4970]: I1209 12:09:06.455844 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zwlc4" podUID="c115cc0e-4491-4b7b-83f3-349433ce3b08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 12:09:07 crc kubenswrapper[4970]: I1209 12:09:07.456433 4970 patch_prober.go:28] interesting pod/router-default-5444994796-zwlc4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 12:09:07 crc kubenswrapper[4970]: [-]has-synced failed: reason withheld Dec 09 12:09:07 crc kubenswrapper[4970]: [+]process-running ok Dec 09 12:09:07 crc kubenswrapper[4970]: healthz check failed Dec 09 12:09:07 crc kubenswrapper[4970]: I1209 12:09:07.456752 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zwlc4" podUID="c115cc0e-4491-4b7b-83f3-349433ce3b08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 12:09:08 crc kubenswrapper[4970]: I1209 12:09:08.458646 4970 patch_prober.go:28] interesting pod/router-default-5444994796-zwlc4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 12:09:08 crc kubenswrapper[4970]: [-]has-synced failed: reason withheld Dec 09 12:09:08 crc kubenswrapper[4970]: [+]process-running ok Dec 09 12:09:08 crc kubenswrapper[4970]: healthz check failed Dec 09 12:09:08 crc kubenswrapper[4970]: I1209 12:09:08.458732 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-zwlc4" podUID="c115cc0e-4491-4b7b-83f3-349433ce3b08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 12:09:09 crc kubenswrapper[4970]: I1209 12:09:09.461025 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-zwlc4" Dec 09 12:09:09 crc kubenswrapper[4970]: I1209 12:09:09.463553 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-zwlc4" Dec 09 12:09:09 crc kubenswrapper[4970]: I1209 12:09:09.950181 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5e10a28a-08f5-4679-9d90-532322e9e87f-metrics-certs\") pod \"network-metrics-daemon-cp4b2\" (UID: \"5e10a28a-08f5-4679-9d90-532322e9e87f\") " pod="openshift-multus/network-metrics-daemon-cp4b2" Dec 09 12:09:09 crc kubenswrapper[4970]: 
Dec 09 12:09:10 crc kubenswrapper[4970]: I1209 12:09:10.139979 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cp4b2"
Dec 09 12:09:10 crc kubenswrapper[4970]: I1209 12:09:10.336109 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 09 12:09:10 crc kubenswrapper[4970]: I1209 12:09:10.357092 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d5242d0-8bce-4843-8a06-4aadb2e66d8a-kube-api-access\") pod \"2d5242d0-8bce-4843-8a06-4aadb2e66d8a\" (UID: \"2d5242d0-8bce-4843-8a06-4aadb2e66d8a\") "
Dec 09 12:09:10 crc kubenswrapper[4970]: I1209 12:09:10.357220 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d5242d0-8bce-4843-8a06-4aadb2e66d8a-kubelet-dir\") pod \"2d5242d0-8bce-4843-8a06-4aadb2e66d8a\" (UID: \"2d5242d0-8bce-4843-8a06-4aadb2e66d8a\") "
Dec 09 12:09:10 crc kubenswrapper[4970]: I1209 12:09:10.358193 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d5242d0-8bce-4843-8a06-4aadb2e66d8a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2d5242d0-8bce-4843-8a06-4aadb2e66d8a" (UID: "2d5242d0-8bce-4843-8a06-4aadb2e66d8a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 09 12:09:10 crc kubenswrapper[4970]: I1209 12:09:10.360555 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d5242d0-8bce-4843-8a06-4aadb2e66d8a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2d5242d0-8bce-4843-8a06-4aadb2e66d8a" (UID: "2d5242d0-8bce-4843-8a06-4aadb2e66d8a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:09:10 crc kubenswrapper[4970]: I1209 12:09:10.458654 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d5242d0-8bce-4843-8a06-4aadb2e66d8a-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 09 12:09:10 crc kubenswrapper[4970]: I1209 12:09:10.458692 4970 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2d5242d0-8bce-4843-8a06-4aadb2e66d8a-kubelet-dir\") on node \"crc\" DevicePath \"\""
Dec 09 12:09:11 crc kubenswrapper[4970]: I1209 12:09:11.283230 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2d5242d0-8bce-4843-8a06-4aadb2e66d8a","Type":"ContainerDied","Data":"7205af96aee92d5ea929d61a401fd7535d22ef36443e5ef4d498880291c110e4"}
Dec 09 12:09:11 crc kubenswrapper[4970]: I1209 12:09:11.283393 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7205af96aee92d5ea929d61a401fd7535d22ef36443e5ef4d498880291c110e4"
Dec 09 12:09:11 crc kubenswrapper[4970]: I1209 12:09:11.283321 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 09 12:09:14 crc kubenswrapper[4970]: I1209 12:09:14.812357 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl"
Dec 09 12:09:16 crc kubenswrapper[4970]: I1209 12:09:16.011365 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 12:09:16 crc kubenswrapper[4970]: I1209 12:09:16.011416 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 12:09:16 crc kubenswrapper[4970]: I1209 12:09:16.253533 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-wqfw9"
Dec 09 12:09:16 crc kubenswrapper[4970]: I1209 12:09:16.262668 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-wqfw9"
Dec 09 12:09:25 crc kubenswrapper[4970]: I1209 12:09:25.225610 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 12:09:26 crc kubenswrapper[4970]: I1209 12:09:26.560279 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qmvms"
Dec 09 12:09:27 crc kubenswrapper[4970]: E1209 12:09:27.453762 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Dec 09 12:09:27 crc kubenswrapper[4970]: E1209 12:09:27.453996 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m8cnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-vwchd_openshift-marketplace(0119c7a7-a4a5-4364-9682-6de2fcd6f02f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 09 12:09:27 crc kubenswrapper[4970]: E1209 12:09:27.455190 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-vwchd" podUID="0119c7a7-a4a5-4364-9682-6de2fcd6f02f"
Dec 09 12:09:29 crc kubenswrapper[4970]: E1209 12:09:29.386127 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Dec 09 12:09:29 crc kubenswrapper[4970]: E1209 12:09:29.386315 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5fq64,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-dh2c2_openshift-marketplace(3ef5d3a5-6c33-435e-b59b-a3f5f815cb80): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 12:09:29 crc kubenswrapper[4970]: E1209 12:09:29.387542 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-dh2c2" podUID="3ef5d3a5-6c33-435e-b59b-a3f5f815cb80" Dec 09 12:09:30 crc kubenswrapper[4970]: E1209 12:09:30.114735 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-dh2c2" podUID="3ef5d3a5-6c33-435e-b59b-a3f5f815cb80" Dec 09 12:09:30 crc kubenswrapper[4970]: E1209 12:09:30.115279 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vwchd" podUID="0119c7a7-a4a5-4364-9682-6de2fcd6f02f" Dec 09 12:09:31 crc kubenswrapper[4970]: E1209 12:09:31.571294 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 09 12:09:31 crc kubenswrapper[4970]: E1209 12:09:31.571596 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9ssw4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-7q4dt_openshift-marketplace(4f609a40-e848-4c4b-bb11-7dd03189cc76): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 12:09:31 crc kubenswrapper[4970]: E1209 12:09:31.573072 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-7q4dt" podUID="4f609a40-e848-4c4b-bb11-7dd03189cc76" Dec 09 12:09:31 crc kubenswrapper[4970]: E1209 12:09:31.621283 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 09 12:09:31 crc kubenswrapper[4970]: E1209 12:09:31.621452 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5vfdh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-qcxnq_openshift-marketplace(343921a4-b33b-485d-b5ac-84d3d87b6ed7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 12:09:31 crc kubenswrapper[4970]: E1209 12:09:31.622706 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-qcxnq" podUID="343921a4-b33b-485d-b5ac-84d3d87b6ed7" Dec 09 12:09:31 crc kubenswrapper[4970]: E1209 12:09:31.785084 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 09 12:09:31 crc kubenswrapper[4970]: E1209 12:09:31.785536 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9p47k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-qz6sg_openshift-marketplace(75d4b427-7364-457e-b599-b36ff9459935): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 12:09:31 crc kubenswrapper[4970]: E1209 12:09:31.786752 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-qz6sg" podUID="75d4b427-7364-457e-b599-b36ff9459935" Dec 09 12:09:33 crc kubenswrapper[4970]: I1209 12:09:33.285016 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 09 12:09:33 crc kubenswrapper[4970]: E1209 12:09:33.285315 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68174681-e84a-4f59-9ebf-fba682ae1123" containerName="pruner" Dec 09 12:09:33 crc kubenswrapper[4970]: I1209 12:09:33.285330 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="68174681-e84a-4f59-9ebf-fba682ae1123" containerName="pruner" Dec 09 12:09:33 crc kubenswrapper[4970]: E1209 12:09:33.285358 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d5242d0-8bce-4843-8a06-4aadb2e66d8a" containerName="pruner" Dec 09 12:09:33 crc kubenswrapper[4970]: I1209 12:09:33.285365 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d5242d0-8bce-4843-8a06-4aadb2e66d8a" containerName="pruner" Dec 09 12:09:33 crc kubenswrapper[4970]: I1209 12:09:33.285492 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="68174681-e84a-4f59-9ebf-fba682ae1123" containerName="pruner" Dec 09 12:09:33 crc kubenswrapper[4970]: I1209 12:09:33.285515 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d5242d0-8bce-4843-8a06-4aadb2e66d8a" containerName="pruner" Dec 09 12:09:33 crc kubenswrapper[4970]: I1209 12:09:33.285962 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 12:09:33 crc kubenswrapper[4970]: I1209 12:09:33.288004 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 09 12:09:33 crc kubenswrapper[4970]: I1209 12:09:33.290998 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 09 12:09:33 crc kubenswrapper[4970]: I1209 12:09:33.291127 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 09 12:09:33 crc kubenswrapper[4970]: I1209 12:09:33.466271 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d43973e1-aa25-418f-a8b6-2c25028b7dc0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d43973e1-aa25-418f-a8b6-2c25028b7dc0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 12:09:33 crc kubenswrapper[4970]: I1209 12:09:33.466348 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d43973e1-aa25-418f-a8b6-2c25028b7dc0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d43973e1-aa25-418f-a8b6-2c25028b7dc0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 12:09:33 crc kubenswrapper[4970]: I1209 12:09:33.567769 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d43973e1-aa25-418f-a8b6-2c25028b7dc0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d43973e1-aa25-418f-a8b6-2c25028b7dc0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 12:09:33 crc kubenswrapper[4970]: I1209 12:09:33.567821 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d43973e1-aa25-418f-a8b6-2c25028b7dc0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d43973e1-aa25-418f-a8b6-2c25028b7dc0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 12:09:33 crc kubenswrapper[4970]: I1209 12:09:33.567842 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d43973e1-aa25-418f-a8b6-2c25028b7dc0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d43973e1-aa25-418f-a8b6-2c25028b7dc0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 12:09:33 crc kubenswrapper[4970]: I1209 12:09:33.589035 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d43973e1-aa25-418f-a8b6-2c25028b7dc0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d43973e1-aa25-418f-a8b6-2c25028b7dc0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 12:09:33 crc kubenswrapper[4970]: I1209 12:09:33.611547 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 12:09:34 crc kubenswrapper[4970]: E1209 12:09:34.667207 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-qz6sg" podUID="75d4b427-7364-457e-b599-b36ff9459935" Dec 09 12:09:34 crc kubenswrapper[4970]: E1209 12:09:34.667207 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qcxnq" podUID="343921a4-b33b-485d-b5ac-84d3d87b6ed7" Dec 09 12:09:34 crc kubenswrapper[4970]: E1209 12:09:34.667327 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-7q4dt" podUID="4f609a40-e848-4c4b-bb11-7dd03189cc76" Dec 09 12:09:34 crc kubenswrapper[4970]: E1209 12:09:34.757314 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 09 12:09:34 crc kubenswrapper[4970]: E1209 12:09:34.757473 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cm9dt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-2vppr_openshift-marketplace(2f56c523-468e-4195-b60a-9e307910e2cf): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 12:09:34 crc kubenswrapper[4970]: E1209 12:09:34.759341 4970 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-2vppr" podUID="2f56c523-468e-4195-b60a-9e307910e2cf" Dec 09 12:09:34 crc kubenswrapper[4970]: E1209 12:09:34.818898 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 09 12:09:34 crc kubenswrapper[4970]: E1209 12:09:34.819290 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kjztv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-dlxn4_openshift-marketplace(9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 12:09:34 crc kubenswrapper[4970]: E1209 12:09:34.820445 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-dlxn4" podUID="9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16" Dec 09 12:09:35 crc kubenswrapper[4970]: I1209 12:09:35.071774 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-cp4b2"] Dec 09 12:09:35 crc kubenswrapper[4970]: W1209 12:09:35.083488 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e10a28a_08f5_4679_9d90_532322e9e87f.slice/crio-4202f4fe122708dd4eee3c8b7716c9d176e6e302b20adb35bf5242ad54f37ded WatchSource:0}: Error finding container 4202f4fe122708dd4eee3c8b7716c9d176e6e302b20adb35bf5242ad54f37ded: Status 404 returned error can't find 
the container with id 4202f4fe122708dd4eee3c8b7716c9d176e6e302b20adb35bf5242ad54f37ded Dec 09 12:09:35 crc kubenswrapper[4970]: I1209 12:09:35.161579 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 09 12:09:35 crc kubenswrapper[4970]: W1209 12:09:35.168628 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podd43973e1_aa25_418f_a8b6_2c25028b7dc0.slice/crio-02ae6e0998242d5975ba57e4572268c3cb8facc73baa441f22a49f1d036cfc52 WatchSource:0}: Error finding container 02ae6e0998242d5975ba57e4572268c3cb8facc73baa441f22a49f1d036cfc52: Status 404 returned error can't find the container with id 02ae6e0998242d5975ba57e4572268c3cb8facc73baa441f22a49f1d036cfc52 Dec 09 12:09:35 crc kubenswrapper[4970]: I1209 12:09:35.458711 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-cp4b2" event={"ID":"5e10a28a-08f5-4679-9d90-532322e9e87f","Type":"ContainerStarted","Data":"c21dc85849549d780bd8ba756492911c518e1e37d329cad61361a6ccf1bf9b33"} Dec 09 12:09:35 crc kubenswrapper[4970]: I1209 12:09:35.459214 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-cp4b2" event={"ID":"5e10a28a-08f5-4679-9d90-532322e9e87f","Type":"ContainerStarted","Data":"4202f4fe122708dd4eee3c8b7716c9d176e6e302b20adb35bf5242ad54f37ded"} Dec 09 12:09:35 crc kubenswrapper[4970]: I1209 12:09:35.463161 4970 generic.go:334] "Generic (PLEG): container finished" podID="7ff193ec-f8bf-47b9-98ce-221e7a77561e" containerID="1c453a98d4f974eef4b8546acc27bc5d6aebcd7698e4d38e942a7f97fca3a196" exitCode=0 Dec 09 12:09:35 crc kubenswrapper[4970]: I1209 12:09:35.463271 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gx7kd" event={"ID":"7ff193ec-f8bf-47b9-98ce-221e7a77561e","Type":"ContainerDied","Data":"1c453a98d4f974eef4b8546acc27bc5d6aebcd7698e4d38e942a7f97fca3a196"} Dec 09 12:09:35 crc kubenswrapper[4970]: I1209 12:09:35.468095 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d43973e1-aa25-418f-a8b6-2c25028b7dc0","Type":"ContainerStarted","Data":"02ae6e0998242d5975ba57e4572268c3cb8facc73baa441f22a49f1d036cfc52"} Dec 09 12:09:35 crc kubenswrapper[4970]: E1209 12:09:35.469735 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-dlxn4" podUID="9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16" Dec 09 12:09:35 crc kubenswrapper[4970]: E1209 12:09:35.470156 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-2vppr" podUID="2f56c523-468e-4195-b60a-9e307910e2cf" Dec 09 12:09:36 crc kubenswrapper[4970]: I1209 12:09:36.473863 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-cp4b2" event={"ID":"5e10a28a-08f5-4679-9d90-532322e9e87f","Type":"ContainerStarted","Data":"e56c29bc66148bb44ec886a38405ccf0b91d69dbf6bc0c6f8e0d12cdd5e1af94"} Dec 09 12:09:36 crc kubenswrapper[4970]: I1209 12:09:36.476242 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-gx7kd" event={"ID":"7ff193ec-f8bf-47b9-98ce-221e7a77561e","Type":"ContainerStarted","Data":"66414bea133b4863bcb7aa0b1feaa9ad7795c0ee1e8ddc4149fdfbf1437d7478"} Dec 09 12:09:36 crc kubenswrapper[4970]: I1209 12:09:36.478788 4970 generic.go:334] "Generic (PLEG): container finished" podID="d43973e1-aa25-418f-a8b6-2c25028b7dc0" containerID="85131408c2ac35f52f7c4a840ec005509ff573971adbc8e200cc12e89d6ea5c9" exitCode=0 Dec 09 12:09:36 crc kubenswrapper[4970]: I1209 12:09:36.478835 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d43973e1-aa25-418f-a8b6-2c25028b7dc0","Type":"ContainerDied","Data":"85131408c2ac35f52f7c4a840ec005509ff573971adbc8e200cc12e89d6ea5c9"} Dec 09 12:09:36 crc kubenswrapper[4970]: I1209 12:09:36.511328 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-cp4b2" podStartSLOduration=169.511308455 podStartE2EDuration="2m49.511308455s" podCreationTimestamp="2025-12-09 12:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:09:36.492511066 +0000 UTC m=+189.052992127" watchObservedRunningTime="2025-12-09 12:09:36.511308455 +0000 UTC m=+189.071789516" Dec 09 12:09:36 crc kubenswrapper[4970]: I1209 12:09:36.531301 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gx7kd" podStartSLOduration=3.838751269 podStartE2EDuration="41.531283052s" podCreationTimestamp="2025-12-09 12:08:55 +0000 UTC" firstStartedPulling="2025-12-09 12:08:58.180735511 +0000 UTC m=+150.741216562" lastFinishedPulling="2025-12-09 12:09:35.873267284 +0000 UTC m=+188.433748345" observedRunningTime="2025-12-09 12:09:36.514169636 +0000 UTC m=+189.074650727" watchObservedRunningTime="2025-12-09 12:09:36.531283052 +0000 UTC m=+189.091764123" Dec 09 12:09:37 crc kubenswrapper[4970]: I1209 12:09:37.858830 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 12:09:38 crc kubenswrapper[4970]: I1209 12:09:38.024795 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d43973e1-aa25-418f-a8b6-2c25028b7dc0-kubelet-dir\") pod \"d43973e1-aa25-418f-a8b6-2c25028b7dc0\" (UID: \"d43973e1-aa25-418f-a8b6-2c25028b7dc0\") " Dec 09 12:09:38 crc kubenswrapper[4970]: I1209 12:09:38.024997 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d43973e1-aa25-418f-a8b6-2c25028b7dc0-kube-api-access\") pod \"d43973e1-aa25-418f-a8b6-2c25028b7dc0\" (UID: \"d43973e1-aa25-418f-a8b6-2c25028b7dc0\") " Dec 09 12:09:38 crc kubenswrapper[4970]: I1209 12:09:38.024988 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d43973e1-aa25-418f-a8b6-2c25028b7dc0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d43973e1-aa25-418f-a8b6-2c25028b7dc0" (UID: "d43973e1-aa25-418f-a8b6-2c25028b7dc0"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:09:38 crc kubenswrapper[4970]: I1209 12:09:38.025452 4970 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d43973e1-aa25-418f-a8b6-2c25028b7dc0-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 09 12:09:38 crc kubenswrapper[4970]: I1209 12:09:38.034732 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d43973e1-aa25-418f-a8b6-2c25028b7dc0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d43973e1-aa25-418f-a8b6-2c25028b7dc0" (UID: "d43973e1-aa25-418f-a8b6-2c25028b7dc0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:09:38 crc kubenswrapper[4970]: I1209 12:09:38.125925 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d43973e1-aa25-418f-a8b6-2c25028b7dc0-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 12:09:38 crc kubenswrapper[4970]: I1209 12:09:38.490530 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d43973e1-aa25-418f-a8b6-2c25028b7dc0","Type":"ContainerDied","Data":"02ae6e0998242d5975ba57e4572268c3cb8facc73baa441f22a49f1d036cfc52"} Dec 09 12:09:38 crc kubenswrapper[4970]: I1209 12:09:38.490580 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02ae6e0998242d5975ba57e4572268c3cb8facc73baa441f22a49f1d036cfc52" Dec 09 12:09:38 crc kubenswrapper[4970]: I1209 12:09:38.490639 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 12:09:38 crc kubenswrapper[4970]: I1209 12:09:38.806075 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8txkd"] Dec 09 12:09:39 crc kubenswrapper[4970]: I1209 12:09:39.481527 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 09 12:09:39 crc kubenswrapper[4970]: E1209 12:09:39.481910 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d43973e1-aa25-418f-a8b6-2c25028b7dc0" containerName="pruner" Dec 09 12:09:39 crc kubenswrapper[4970]: I1209 12:09:39.481930 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="d43973e1-aa25-418f-a8b6-2c25028b7dc0" containerName="pruner" Dec 09 12:09:39 crc kubenswrapper[4970]: I1209 12:09:39.485862 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="d43973e1-aa25-418f-a8b6-2c25028b7dc0" containerName="pruner" Dec 09 12:09:39 crc kubenswrapper[4970]: I1209 12:09:39.486643 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 09 12:09:39 crc kubenswrapper[4970]: I1209 12:09:39.486768 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 09 12:09:39 crc kubenswrapper[4970]: I1209 12:09:39.490355 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 09 12:09:39 crc kubenswrapper[4970]: I1209 12:09:39.490608 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 09 12:09:39 crc kubenswrapper[4970]: I1209 12:09:39.647342 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f94c63f8-ba2b-46a1-b6c8-a018ff78e407-kube-api-access\") pod \"installer-9-crc\" (UID: \"f94c63f8-ba2b-46a1-b6c8-a018ff78e407\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 12:09:39 crc kubenswrapper[4970]: I1209 12:09:39.647389 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f94c63f8-ba2b-46a1-b6c8-a018ff78e407-var-lock\") pod \"installer-9-crc\" (UID: \"f94c63f8-ba2b-46a1-b6c8-a018ff78e407\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 12:09:39 crc kubenswrapper[4970]: I1209 12:09:39.647416 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f94c63f8-ba2b-46a1-b6c8-a018ff78e407-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f94c63f8-ba2b-46a1-b6c8-a018ff78e407\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 12:09:39 crc kubenswrapper[4970]: I1209 12:09:39.748121 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f94c63f8-ba2b-46a1-b6c8-a018ff78e407-kube-api-access\") pod \"installer-9-crc\" (UID: \"f94c63f8-ba2b-46a1-b6c8-a018ff78e407\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 12:09:39 crc kubenswrapper[4970]: I1209 12:09:39.748177 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f94c63f8-ba2b-46a1-b6c8-a018ff78e407-var-lock\") pod \"installer-9-crc\" (UID: \"f94c63f8-ba2b-46a1-b6c8-a018ff78e407\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 12:09:39 crc kubenswrapper[4970]: I1209 12:09:39.748219 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f94c63f8-ba2b-46a1-b6c8-a018ff78e407-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f94c63f8-ba2b-46a1-b6c8-a018ff78e407\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 12:09:39 crc kubenswrapper[4970]: I1209 12:09:39.748349 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f94c63f8-ba2b-46a1-b6c8-a018ff78e407-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f94c63f8-ba2b-46a1-b6c8-a018ff78e407\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 12:09:39 crc kubenswrapper[4970]: I1209 12:09:39.748339 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f94c63f8-ba2b-46a1-b6c8-a018ff78e407-var-lock\") pod \"installer-9-crc\" (UID: \"f94c63f8-ba2b-46a1-b6c8-a018ff78e407\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 12:09:39 crc kubenswrapper[4970]: I1209 12:09:39.769599 4970 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f94c63f8-ba2b-46a1-b6c8-a018ff78e407-kube-api-access\") pod \"installer-9-crc\" (UID: \"f94c63f8-ba2b-46a1-b6c8-a018ff78e407\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 12:09:39 crc kubenswrapper[4970]: I1209 12:09:39.803094 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 09 12:09:40 crc kubenswrapper[4970]: I1209 12:09:40.245218 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 09 12:09:40 crc kubenswrapper[4970]: I1209 12:09:40.500824 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f94c63f8-ba2b-46a1-b6c8-a018ff78e407","Type":"ContainerStarted","Data":"7c08318fc66cdbeaf17b403815ff512d5d6114657c5c8e976184b116fd57bd90"} Dec 09 12:09:41 crc kubenswrapper[4970]: I1209 12:09:41.506196 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f94c63f8-ba2b-46a1-b6c8-a018ff78e407","Type":"ContainerStarted","Data":"b2208118fbc7b39d5509fb2bd389aebdc3f7710e116221b3780df6a27ca108c0"} Dec 09 12:09:41 crc kubenswrapper[4970]: I1209 12:09:41.523630 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.523608516 podStartE2EDuration="2.523608516s" podCreationTimestamp="2025-12-09 12:09:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:09:41.519444774 +0000 UTC m=+194.079925825" watchObservedRunningTime="2025-12-09 12:09:41.523608516 +0000 UTC m=+194.084089607" Dec 09 12:09:43 crc kubenswrapper[4970]: I1209 12:09:43.518482 4970 generic.go:334] "Generic (PLEG): container finished" podID="0119c7a7-a4a5-4364-9682-6de2fcd6f02f" containerID="b5ef0d455b9896b3e3b8d626fdc36b9d08de85c708c5ad9d41d79825ea834fef" exitCode=0 Dec 09 12:09:43 crc kubenswrapper[4970]: I1209 12:09:43.518534 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vwchd" event={"ID":"0119c7a7-a4a5-4364-9682-6de2fcd6f02f","Type":"ContainerDied","Data":"b5ef0d455b9896b3e3b8d626fdc36b9d08de85c708c5ad9d41d79825ea834fef"} Dec 09 12:09:45 crc kubenswrapper[4970]: I1209 12:09:45.532490 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vwchd" event={"ID":"0119c7a7-a4a5-4364-9682-6de2fcd6f02f","Type":"ContainerStarted","Data":"1ba5d8c6636492def5b64185adb47c74e659abb4604f5eca81abb57e636b8dc5"} Dec 09 12:09:45 crc kubenswrapper[4970]: I1209 12:09:45.551738 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vwchd" podStartSLOduration=3.19328878 podStartE2EDuration="52.55172385s" podCreationTimestamp="2025-12-09 12:08:53 +0000 UTC" firstStartedPulling="2025-12-09 12:08:55.052595847 +0000 UTC m=+147.613076898" lastFinishedPulling="2025-12-09 12:09:44.411030917 +0000 UTC m=+196.971511968" observedRunningTime="2025-12-09 12:09:45.549571262 +0000 UTC m=+198.110052303" watchObservedRunningTime="2025-12-09 12:09:45.55172385 +0000 UTC m=+198.112204901" Dec 09 12:09:46 crc kubenswrapper[4970]: I1209 12:09:46.011156 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:09:46 crc kubenswrapper[4970]: I1209 12:09:46.011221 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:09:46 crc kubenswrapper[4970]: I1209 12:09:46.281725 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gx7kd" Dec 09 12:09:46 crc kubenswrapper[4970]: I1209 12:09:46.282078 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gx7kd" Dec 09 12:09:46 crc kubenswrapper[4970]: I1209 12:09:46.337633 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gx7kd" Dec 09 12:09:46 crc kubenswrapper[4970]: I1209 12:09:46.540011 4970 generic.go:334] "Generic (PLEG): container finished" podID="3ef5d3a5-6c33-435e-b59b-a3f5f815cb80" containerID="97d6761603bda9d0c83011e3e32d4ed573e912db1ca92598f88ca778a7e9f1ec" exitCode=0 Dec 09 12:09:46 crc kubenswrapper[4970]: I1209 12:09:46.540076 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dh2c2" event={"ID":"3ef5d3a5-6c33-435e-b59b-a3f5f815cb80","Type":"ContainerDied","Data":"97d6761603bda9d0c83011e3e32d4ed573e912db1ca92598f88ca778a7e9f1ec"} Dec 09 12:09:46 crc kubenswrapper[4970]: I1209 12:09:46.585772 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gx7kd" Dec 09 12:09:47 crc kubenswrapper[4970]: I1209 12:09:47.440984 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gx7kd"] Dec 09 12:09:47 crc kubenswrapper[4970]: I1209 12:09:47.558810 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dh2c2" event={"ID":"3ef5d3a5-6c33-435e-b59b-a3f5f815cb80","Type":"ContainerStarted","Data":"dd0d0596ffa5f4ba51a0df11e86fa30303d6b57c36737c2224cbf686bc737b33"} Dec 09 12:09:47 crc kubenswrapper[4970]: I1209 12:09:47.561702 4970 generic.go:334] "Generic (PLEG): container finished" podID="4f609a40-e848-4c4b-bb11-7dd03189cc76" containerID="aad1a602f199a754b8dc64a5162cb0e903532835f5d6c3f1ca6c43e817cf59b4" exitCode=0 Dec 09 12:09:47 crc kubenswrapper[4970]: I1209 12:09:47.561782 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7q4dt" event={"ID":"4f609a40-e848-4c4b-bb11-7dd03189cc76","Type":"ContainerDied","Data":"aad1a602f199a754b8dc64a5162cb0e903532835f5d6c3f1ca6c43e817cf59b4"} Dec 09 12:09:47 crc kubenswrapper[4970]: I1209 12:09:47.564686 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dlxn4" event={"ID":"9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16","Type":"ContainerStarted","Data":"6dc4442819e491e911ad7f0aa773d3136351b5562b660cc0296730d218f27ac0"} Dec 09 12:09:47 crc kubenswrapper[4970]: I1209 12:09:47.581662 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dh2c2" podStartSLOduration=2.608048587 
podStartE2EDuration="54.58164581s" podCreationTimestamp="2025-12-09 12:08:53 +0000 UTC" firstStartedPulling="2025-12-09 12:08:55.061818991 +0000 UTC m=+147.622300052" lastFinishedPulling="2025-12-09 12:09:47.035416224 +0000 UTC m=+199.595897275" observedRunningTime="2025-12-09 12:09:47.578880493 +0000 UTC m=+200.139361544" watchObservedRunningTime="2025-12-09 12:09:47.58164581 +0000 UTC m=+200.142126861" Dec 09 12:09:48 crc kubenswrapper[4970]: I1209 12:09:48.575609 4970 generic.go:334] "Generic (PLEG): container finished" podID="2f56c523-468e-4195-b60a-9e307910e2cf" containerID="7a06b3aadcedf9ce0bb55e97590d895e3b678753dcc23de0d53c88ef3cf9c40d" exitCode=0 Dec 09 12:09:48 crc kubenswrapper[4970]: I1209 12:09:48.575933 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vppr" event={"ID":"2f56c523-468e-4195-b60a-9e307910e2cf","Type":"ContainerDied","Data":"7a06b3aadcedf9ce0bb55e97590d895e3b678753dcc23de0d53c88ef3cf9c40d"} Dec 09 12:09:48 crc kubenswrapper[4970]: I1209 12:09:48.579047 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7q4dt" event={"ID":"4f609a40-e848-4c4b-bb11-7dd03189cc76","Type":"ContainerStarted","Data":"181c6418543e09e9eda24c565fb22c5c7852445eb1ec2d5bb7b7d29011c0cae5"} Dec 09 12:09:48 crc kubenswrapper[4970]: I1209 12:09:48.582500 4970 generic.go:334] "Generic (PLEG): container finished" podID="75d4b427-7364-457e-b599-b36ff9459935" containerID="6e382012c33729ab58b97e9fbfe2758a8e511713657c1c37f0eb6946198cfc87" exitCode=0 Dec 09 12:09:48 crc kubenswrapper[4970]: I1209 12:09:48.582558 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qz6sg" event={"ID":"75d4b427-7364-457e-b599-b36ff9459935","Type":"ContainerDied","Data":"6e382012c33729ab58b97e9fbfe2758a8e511713657c1c37f0eb6946198cfc87"} Dec 09 12:09:48 crc kubenswrapper[4970]: I1209 12:09:48.585972 4970 generic.go:334] "Generic (PLEG): container finished" podID="9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16" containerID="6dc4442819e491e911ad7f0aa773d3136351b5562b660cc0296730d218f27ac0" exitCode=0 Dec 09 12:09:48 crc kubenswrapper[4970]: I1209 12:09:48.586187 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gx7kd" podUID="7ff193ec-f8bf-47b9-98ce-221e7a77561e" containerName="registry-server" containerID="cri-o://66414bea133b4863bcb7aa0b1feaa9ad7795c0ee1e8ddc4149fdfbf1437d7478" gracePeriod=2 Dec 09 12:09:48 crc kubenswrapper[4970]: I1209 12:09:48.586251 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dlxn4" event={"ID":"9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16","Type":"ContainerDied","Data":"6dc4442819e491e911ad7f0aa773d3136351b5562b660cc0296730d218f27ac0"} Dec 09 12:09:48 crc kubenswrapper[4970]: I1209 12:09:48.659760 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7q4dt" podStartSLOduration=1.737012862 podStartE2EDuration="54.659742591s" podCreationTimestamp="2025-12-09 12:08:54 +0000 UTC" firstStartedPulling="2025-12-09 12:08:55.062172832 +0000 UTC m=+147.622653883" lastFinishedPulling="2025-12-09 12:09:47.984902561 +0000 UTC m=+200.545383612" observedRunningTime="2025-12-09 12:09:48.658931025 +0000 UTC m=+201.219412076" watchObservedRunningTime="2025-12-09 12:09:48.659742591 +0000 UTC m=+201.220223632" Dec 09 12:09:48 crc kubenswrapper[4970]: I1209 12:09:48.991352 4970 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gx7kd" Dec 09 12:09:49 crc kubenswrapper[4970]: I1209 12:09:49.167995 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfkv8\" (UniqueName: \"kubernetes.io/projected/7ff193ec-f8bf-47b9-98ce-221e7a77561e-kube-api-access-gfkv8\") pod \"7ff193ec-f8bf-47b9-98ce-221e7a77561e\" (UID: \"7ff193ec-f8bf-47b9-98ce-221e7a77561e\") " Dec 09 12:09:49 crc kubenswrapper[4970]: I1209 12:09:49.168120 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ff193ec-f8bf-47b9-98ce-221e7a77561e-catalog-content\") pod \"7ff193ec-f8bf-47b9-98ce-221e7a77561e\" (UID: \"7ff193ec-f8bf-47b9-98ce-221e7a77561e\") " Dec 09 12:09:49 crc kubenswrapper[4970]: I1209 12:09:49.168196 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ff193ec-f8bf-47b9-98ce-221e7a77561e-utilities\") pod \"7ff193ec-f8bf-47b9-98ce-221e7a77561e\" (UID: \"7ff193ec-f8bf-47b9-98ce-221e7a77561e\") " Dec 09 12:09:49 crc kubenswrapper[4970]: I1209 12:09:49.168975 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ff193ec-f8bf-47b9-98ce-221e7a77561e-utilities" (OuterVolumeSpecName: "utilities") pod "7ff193ec-f8bf-47b9-98ce-221e7a77561e" (UID: "7ff193ec-f8bf-47b9-98ce-221e7a77561e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:09:49 crc kubenswrapper[4970]: I1209 12:09:49.176266 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ff193ec-f8bf-47b9-98ce-221e7a77561e-kube-api-access-gfkv8" (OuterVolumeSpecName: "kube-api-access-gfkv8") pod "7ff193ec-f8bf-47b9-98ce-221e7a77561e" (UID: "7ff193ec-f8bf-47b9-98ce-221e7a77561e"). InnerVolumeSpecName "kube-api-access-gfkv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:09:49 crc kubenswrapper[4970]: I1209 12:09:49.193035 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ff193ec-f8bf-47b9-98ce-221e7a77561e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ff193ec-f8bf-47b9-98ce-221e7a77561e" (UID: "7ff193ec-f8bf-47b9-98ce-221e7a77561e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:09:49 crc kubenswrapper[4970]: I1209 12:09:49.269730 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ff193ec-f8bf-47b9-98ce-221e7a77561e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:09:49 crc kubenswrapper[4970]: I1209 12:09:49.269778 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ff193ec-f8bf-47b9-98ce-221e7a77561e-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:09:49 crc kubenswrapper[4970]: I1209 12:09:49.269793 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfkv8\" (UniqueName: \"kubernetes.io/projected/7ff193ec-f8bf-47b9-98ce-221e7a77561e-kube-api-access-gfkv8\") on node \"crc\" DevicePath \"\"" Dec 09 12:09:49 crc kubenswrapper[4970]: I1209 12:09:49.593520 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qz6sg" event={"ID":"75d4b427-7364-457e-b599-b36ff9459935","Type":"ContainerStarted","Data":"f4c8c7a131779e85d559b6e995d98ae3bebfe699e5515acf305cb048433396ee"} Dec 09 12:09:49 crc kubenswrapper[4970]: I1209 12:09:49.596226 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dlxn4" event={"ID":"9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16","Type":"ContainerStarted","Data":"664beba183b13534c38baabc0df848fd584ed5bdaea25576a93a4053af745f8e"} Dec 09 12:09:49 crc kubenswrapper[4970]: I1209 12:09:49.598619 4970 generic.go:334] "Generic (PLEG): container finished" podID="7ff193ec-f8bf-47b9-98ce-221e7a77561e" containerID="66414bea133b4863bcb7aa0b1feaa9ad7795c0ee1e8ddc4149fdfbf1437d7478" exitCode=0 Dec 09 12:09:49 crc kubenswrapper[4970]: I1209 12:09:49.598658 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gx7kd"
Dec 09 12:09:49 crc kubenswrapper[4970]: I1209 12:09:49.598683 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gx7kd" event={"ID":"7ff193ec-f8bf-47b9-98ce-221e7a77561e","Type":"ContainerDied","Data":"66414bea133b4863bcb7aa0b1feaa9ad7795c0ee1e8ddc4149fdfbf1437d7478"}
Dec 09 12:09:49 crc kubenswrapper[4970]: I1209 12:09:49.598714 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gx7kd" event={"ID":"7ff193ec-f8bf-47b9-98ce-221e7a77561e","Type":"ContainerDied","Data":"6bd8424c222e8de3a2ef9ea84e0e8adf33ad0ca9c03d498a20b117f30eab17b2"}
Dec 09 12:09:49 crc kubenswrapper[4970]: I1209 12:09:49.598742 4970 scope.go:117] "RemoveContainer" containerID="66414bea133b4863bcb7aa0b1feaa9ad7795c0ee1e8ddc4149fdfbf1437d7478"
Dec 09 12:09:49 crc kubenswrapper[4970]: I1209 12:09:49.600914 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vppr" event={"ID":"2f56c523-468e-4195-b60a-9e307910e2cf","Type":"ContainerStarted","Data":"1b533479cc2c5e59c3e304484301628c98ced1632d4a6972a03cb9c9069a6f75"}
Dec 09 12:09:49 crc kubenswrapper[4970]: I1209 12:09:49.611553 4970 scope.go:117] "RemoveContainer" containerID="1c453a98d4f974eef4b8546acc27bc5d6aebcd7698e4d38e942a7f97fca3a196"
Dec 09 12:09:49 crc kubenswrapper[4970]: I1209 12:09:49.621336 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qz6sg" podStartSLOduration=2.6712402490000002 podStartE2EDuration="54.621316346s" podCreationTimestamp="2025-12-09 12:08:55 +0000 UTC" firstStartedPulling="2025-12-09 12:08:57.122480794 +0000 UTC m=+149.682961845" lastFinishedPulling="2025-12-09 12:09:49.072556891 +0000 UTC m=+201.633037942" observedRunningTime="2025-12-09 12:09:49.618685584 +0000 UTC m=+202.179166645" watchObservedRunningTime="2025-12-09 12:09:49.621316346 +0000 UTC m=+202.181797397"
Dec 09 12:09:49 crc kubenswrapper[4970]: I1209 12:09:49.630682 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gx7kd"]
Dec 09 12:09:49 crc kubenswrapper[4970]: I1209 12:09:49.637647 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gx7kd"]
Dec 09 12:09:49 crc kubenswrapper[4970]: I1209 12:09:49.639288 4970 scope.go:117] "RemoveContainer" containerID="5057de7d55d2706679cf0bcc044822c4d868f250c42994b6b86c68f4d2fcc688"
Dec 09 12:09:49 crc kubenswrapper[4970]: I1209 12:09:49.651233 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dlxn4" podStartSLOduration=2.836002262 podStartE2EDuration="53.651215057s" podCreationTimestamp="2025-12-09 12:08:56 +0000 UTC" firstStartedPulling="2025-12-09 12:08:58.170829415 +0000 UTC m=+150.731310466" lastFinishedPulling="2025-12-09 12:09:48.98604221 +0000 UTC m=+201.546523261" observedRunningTime="2025-12-09 12:09:49.648872633 +0000 UTC m=+202.209353674" watchObservedRunningTime="2025-12-09 12:09:49.651215057 +0000 UTC m=+202.211696108"
Dec 09 12:09:49 crc kubenswrapper[4970]: I1209 12:09:49.652837 4970 scope.go:117] "RemoveContainer" containerID="66414bea133b4863bcb7aa0b1feaa9ad7795c0ee1e8ddc4149fdfbf1437d7478"
Dec 09 12:09:49 crc kubenswrapper[4970]: E1209 12:09:49.653691 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66414bea133b4863bcb7aa0b1feaa9ad7795c0ee1e8ddc4149fdfbf1437d7478\": container with ID starting with 66414bea133b4863bcb7aa0b1feaa9ad7795c0ee1e8ddc4149fdfbf1437d7478 not found: ID does not exist" containerID="66414bea133b4863bcb7aa0b1feaa9ad7795c0ee1e8ddc4149fdfbf1437d7478"
Dec 09 12:09:49 crc kubenswrapper[4970]: I1209 12:09:49.653740 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66414bea133b4863bcb7aa0b1feaa9ad7795c0ee1e8ddc4149fdfbf1437d7478"} err="failed to get container status \"66414bea133b4863bcb7aa0b1feaa9ad7795c0ee1e8ddc4149fdfbf1437d7478\": rpc error: code = NotFound desc = could not find container \"66414bea133b4863bcb7aa0b1feaa9ad7795c0ee1e8ddc4149fdfbf1437d7478\": container with ID starting with 66414bea133b4863bcb7aa0b1feaa9ad7795c0ee1e8ddc4149fdfbf1437d7478 not found: ID does not exist"
Dec 09 12:09:49 crc kubenswrapper[4970]: I1209 12:09:49.653786 4970 scope.go:117] "RemoveContainer" containerID="1c453a98d4f974eef4b8546acc27bc5d6aebcd7698e4d38e942a7f97fca3a196"
Dec 09 12:09:49 crc kubenswrapper[4970]: E1209 12:09:49.654200 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c453a98d4f974eef4b8546acc27bc5d6aebcd7698e4d38e942a7f97fca3a196\": container with ID starting with 1c453a98d4f974eef4b8546acc27bc5d6aebcd7698e4d38e942a7f97fca3a196 not found: ID does not exist" containerID="1c453a98d4f974eef4b8546acc27bc5d6aebcd7698e4d38e942a7f97fca3a196"
Dec 09 12:09:49 crc kubenswrapper[4970]: I1209 12:09:49.654232 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c453a98d4f974eef4b8546acc27bc5d6aebcd7698e4d38e942a7f97fca3a196"} err="failed to get container status \"1c453a98d4f974eef4b8546acc27bc5d6aebcd7698e4d38e942a7f97fca3a196\": rpc error: code = NotFound desc = could not find container \"1c453a98d4f974eef4b8546acc27bc5d6aebcd7698e4d38e942a7f97fca3a196\": container with ID starting with 1c453a98d4f974eef4b8546acc27bc5d6aebcd7698e4d38e942a7f97fca3a196 not found: ID does not exist"
Dec 09 12:09:49 crc kubenswrapper[4970]: I1209 12:09:49.654271 4970 scope.go:117] "RemoveContainer" containerID="5057de7d55d2706679cf0bcc044822c4d868f250c42994b6b86c68f4d2fcc688"
Dec 09 12:09:49 crc kubenswrapper[4970]: E1209 12:09:49.654670 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5057de7d55d2706679cf0bcc044822c4d868f250c42994b6b86c68f4d2fcc688\": container with ID starting with 5057de7d55d2706679cf0bcc044822c4d868f250c42994b6b86c68f4d2fcc688 not found: ID does not exist" containerID="5057de7d55d2706679cf0bcc044822c4d868f250c42994b6b86c68f4d2fcc688"
Dec 09 12:09:49 crc kubenswrapper[4970]: I1209 12:09:49.654707 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5057de7d55d2706679cf0bcc044822c4d868f250c42994b6b86c68f4d2fcc688"} err="failed to get container status \"5057de7d55d2706679cf0bcc044822c4d868f250c42994b6b86c68f4d2fcc688\": rpc error: code = NotFound desc = could not find container \"5057de7d55d2706679cf0bcc044822c4d868f250c42994b6b86c68f4d2fcc688\": container with ID starting with 5057de7d55d2706679cf0bcc044822c4d868f250c42994b6b86c68f4d2fcc688 not found: ID does not exist"
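The paired "RemoveContainer" / "ID does not exist" entries above are benign: the kubelet re-issues deletion for containers CRI-O has already pruned, and a NotFound answer from the runtime just means the work is already done. A minimal sketch of that idempotent-delete pattern (the runtime type and error value below are illustrative stand-ins, not the kubelet's actual types):

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the runtime's NotFound gRPC status.
var errNotFound = errors.New("container not found: ID does not exist")

// fakeRuntime simulates a CRI runtime that may have already pruned a container.
type fakeRuntime struct{ containers map[string]bool }

func (r *fakeRuntime) RemoveContainer(id string) error {
	if !r.containers[id] {
		return errNotFound
	}
	delete(r.containers, id)
	return nil
}

// removeIfPresent treats NotFound as success, so repeated deletes are harmless.
func removeIfPresent(r *fakeRuntime, id string) error {
	if err := r.RemoveContainer(id); err != nil {
		if errors.Is(err, errNotFound) {
			fmt.Printf("container %s already gone, nothing to do\n", id[:12])
			return nil // already deleted by a previous pass
		}
		return err
	}
	return nil
}

func main() {
	rt := &fakeRuntime{containers: map[string]bool{"66414bea133b4863": true}}
	// First delete succeeds; the second hits NotFound and is ignored.
	_ = removeIfPresent(rt, "66414bea133b4863")
	_ = removeIfPresent(rt, "66414bea133b4863")
}
```

Treating NotFound as success is what keeps these retries from ever escalating past log noise.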
Dec 09 12:09:49 crc kubenswrapper[4970]: I1209 12:09:49.667505 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2vppr" podStartSLOduration=1.741276273 podStartE2EDuration="52.667483508s" podCreationTimestamp="2025-12-09 12:08:57 +0000 UTC" firstStartedPulling="2025-12-09 12:08:58.148733771 +0000 UTC m=+150.709214822" lastFinishedPulling="2025-12-09 12:09:49.074941006 +0000 UTC m=+201.635422057" observedRunningTime="2025-12-09 12:09:49.664533365 +0000 UTC m=+202.225014416" watchObservedRunningTime="2025-12-09 12:09:49.667483508 +0000 UTC m=+202.227964559"
Dec 09 12:09:49 crc kubenswrapper[4970]: I1209 12:09:49.819100 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ff193ec-f8bf-47b9-98ce-221e7a77561e" path="/var/lib/kubelet/pods/7ff193ec-f8bf-47b9-98ce-221e7a77561e/volumes"
Dec 09 12:09:50 crc kubenswrapper[4970]: I1209 12:09:50.607949 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qcxnq" event={"ID":"343921a4-b33b-485d-b5ac-84d3d87b6ed7","Type":"ContainerStarted","Data":"2c88af4619f77c8be558096f01cf0b98384bda9ba44e21615994ce890b247a71"}
Dec 09 12:09:51 crc kubenswrapper[4970]: I1209 12:09:51.614507 4970 generic.go:334] "Generic (PLEG): container finished" podID="343921a4-b33b-485d-b5ac-84d3d87b6ed7" containerID="2c88af4619f77c8be558096f01cf0b98384bda9ba44e21615994ce890b247a71" exitCode=0
Dec 09 12:09:51 crc kubenswrapper[4970]: I1209 12:09:51.614614 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qcxnq" event={"ID":"343921a4-b33b-485d-b5ac-84d3d87b6ed7","Type":"ContainerDied","Data":"2c88af4619f77c8be558096f01cf0b98384bda9ba44e21615994ce890b247a71"}
Dec 09 12:09:53 crc kubenswrapper[4970]: I1209 12:09:53.930231 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vwchd"
Dec 09 12:09:53 crc kubenswrapper[4970]: I1209 12:09:53.931453 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vwchd"
Dec 09 12:09:54 crc kubenswrapper[4970]: I1209 12:09:54.008981 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vwchd"
Dec 09 12:09:54 crc kubenswrapper[4970]: I1209 12:09:54.270446 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dh2c2"
Dec 09 12:09:54 crc kubenswrapper[4970]: I1209 12:09:54.270494 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dh2c2"
Dec 09 12:09:54 crc kubenswrapper[4970]: I1209 12:09:54.310034 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dh2c2"
Dec 09 12:09:54 crc kubenswrapper[4970]: I1209 12:09:54.476370 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7q4dt"
Dec 09 12:09:54 crc kubenswrapper[4970]: I1209 12:09:54.476659 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7q4dt"
Dec 09 12:09:54 crc kubenswrapper[4970]: I1209 12:09:54.514087 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7q4dt"
Dec 09 12:09:54 crc kubenswrapper[4970]: I1209 12:09:54.676547 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7q4dt"
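The pod_startup_latency_tracker entries encode a fixed relationship that can be checked directly from the logged timestamps: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from that. The trailing digits in podStartSLOduration=2.6712402490000002 above are ordinary float64 rounding noise. A small sketch reproducing the redhat-marketplace-qz6sg figures (timestamp layout inferred from the log format):

```go
package main

import (
	"fmt"
	"time"
)

// layout matches timestamps like "2025-12-09 12:08:55 +0000 UTC".
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-12-09 12:08:55 +0000 UTC")
	firstPull := mustParse("2025-12-09 12:08:57.122480794 +0000 UTC")
	lastPull := mustParse("2025-12-09 12:09:49.072556891 +0000 UTC")
	running := mustParse("2025-12-09 12:09:49.621316346 +0000 UTC") // watchObservedRunningTime

	e2e := running.Sub(created)          // podStartE2EDuration: 54.621316346s
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: 2.671240249s (pull time excluded)
	fmt.Println(e2e, slo)
}
```

Running this prints 54.621316346s and 2.671240249s, matching the qz6sg entry above.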
Dec 09 12:09:54 crc kubenswrapper[4970]: I1209 12:09:54.677456 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dh2c2"
Dec 09 12:09:54 crc kubenswrapper[4970]: I1209 12:09:54.680486 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vwchd"
Dec 09 12:09:55 crc kubenswrapper[4970]: I1209 12:09:55.840716 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7q4dt"]
Dec 09 12:09:55 crc kubenswrapper[4970]: I1209 12:09:55.884389 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qz6sg"
Dec 09 12:09:55 crc kubenswrapper[4970]: I1209 12:09:55.884464 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qz6sg"
Dec 09 12:09:55 crc kubenswrapper[4970]: I1209 12:09:55.925813 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qz6sg"
Dec 09 12:09:56 crc kubenswrapper[4970]: I1209 12:09:56.638924 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7q4dt" podUID="4f609a40-e848-4c4b-bb11-7dd03189cc76" containerName="registry-server" containerID="cri-o://181c6418543e09e9eda24c565fb22c5c7852445eb1ec2d5bb7b7d29011c0cae5" gracePeriod=2
Dec 09 12:09:56 crc kubenswrapper[4970]: I1209 12:09:56.679698 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qz6sg"
Dec 09 12:09:56 crc kubenswrapper[4970]: I1209 12:09:56.839327 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dh2c2"]
Dec 09 12:09:56 crc kubenswrapper[4970]: I1209 12:09:56.839542 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dh2c2" podUID="3ef5d3a5-6c33-435e-b59b-a3f5f815cb80" containerName="registry-server" containerID="cri-o://dd0d0596ffa5f4ba51a0df11e86fa30303d6b57c36737c2224cbf686bc737b33" gracePeriod=2
Dec 09 12:09:57 crc kubenswrapper[4970]: I1209 12:09:57.084655 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dlxn4"
Dec 09 12:09:57 crc kubenswrapper[4970]: I1209 12:09:57.084707 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dlxn4"
Dec 09 12:09:57 crc kubenswrapper[4970]: I1209 12:09:57.127834 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dlxn4"
Dec 09 12:09:57 crc kubenswrapper[4970]: I1209 12:09:57.476964 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2vppr"
Dec 09 12:09:57 crc kubenswrapper[4970]: I1209 12:09:57.477510 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2vppr"
Dec 09 12:09:57 crc kubenswrapper[4970]: I1209 12:09:57.518905 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2vppr"
Dec 09 12:09:57 crc kubenswrapper[4970]: I1209 12:09:57.681677 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dlxn4"
Dec 09 12:09:57 crc kubenswrapper[4970]: I1209 12:09:57.706139 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2vppr"
Dec 09 12:09:58 crc kubenswrapper[4970]: I1209 12:09:58.456227 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7q4dt"
Dec 09 12:09:58 crc kubenswrapper[4970]: I1209 12:09:58.589458 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f609a40-e848-4c4b-bb11-7dd03189cc76-catalog-content\") pod \"4f609a40-e848-4c4b-bb11-7dd03189cc76\" (UID: \"4f609a40-e848-4c4b-bb11-7dd03189cc76\") "
Dec 09 12:09:58 crc kubenswrapper[4970]: I1209 12:09:58.589594 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ssw4\" (UniqueName: \"kubernetes.io/projected/4f609a40-e848-4c4b-bb11-7dd03189cc76-kube-api-access-9ssw4\") pod \"4f609a40-e848-4c4b-bb11-7dd03189cc76\" (UID: \"4f609a40-e848-4c4b-bb11-7dd03189cc76\") "
Dec 09 12:09:58 crc kubenswrapper[4970]: I1209 12:09:58.589651 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f609a40-e848-4c4b-bb11-7dd03189cc76-utilities\") pod \"4f609a40-e848-4c4b-bb11-7dd03189cc76\" (UID: \"4f609a40-e848-4c4b-bb11-7dd03189cc76\") "
Dec 09 12:09:58 crc kubenswrapper[4970]: I1209 12:09:58.590423 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f609a40-e848-4c4b-bb11-7dd03189cc76-utilities" (OuterVolumeSpecName: "utilities") pod "4f609a40-e848-4c4b-bb11-7dd03189cc76" (UID: "4f609a40-e848-4c4b-bb11-7dd03189cc76"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:09:58 crc kubenswrapper[4970]: I1209 12:09:58.597662 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f609a40-e848-4c4b-bb11-7dd03189cc76-kube-api-access-9ssw4" (OuterVolumeSpecName: "kube-api-access-9ssw4") pod "4f609a40-e848-4c4b-bb11-7dd03189cc76" (UID: "4f609a40-e848-4c4b-bb11-7dd03189cc76"). InnerVolumeSpecName "kube-api-access-9ssw4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:09:58 crc kubenswrapper[4970]: I1209 12:09:58.652392 4970 generic.go:334] "Generic (PLEG): container finished" podID="3ef5d3a5-6c33-435e-b59b-a3f5f815cb80" containerID="dd0d0596ffa5f4ba51a0df11e86fa30303d6b57c36737c2224cbf686bc737b33" exitCode=0
Dec 09 12:09:58 crc kubenswrapper[4970]: I1209 12:09:58.652453 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dh2c2" event={"ID":"3ef5d3a5-6c33-435e-b59b-a3f5f815cb80","Type":"ContainerDied","Data":"dd0d0596ffa5f4ba51a0df11e86fa30303d6b57c36737c2224cbf686bc737b33"}
Dec 09 12:09:58 crc kubenswrapper[4970]: I1209 12:09:58.654408 4970 generic.go:334] "Generic (PLEG): container finished" podID="4f609a40-e848-4c4b-bb11-7dd03189cc76" containerID="181c6418543e09e9eda24c565fb22c5c7852445eb1ec2d5bb7b7d29011c0cae5" exitCode=0
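The probe entries above always follow the same shape: while the startup probe is still failing, the pod reports startup "unhealthy" and readiness as an empty status; once the startup probe passes ("started"), readiness results begin to flow and the pod flips to "ready". A toy model of that gating (the types and field names are illustrative, not the kubelet's prober types):

```go
package main

import "fmt"

// probeState mimics how startup-probe results gate readiness reporting.
type probeState struct {
	started bool // set once the startup probe has succeeded
	ready   bool
}

func (p *probeState) onStartupProbe(ok bool) {
	if ok {
		p.started = true
	}
}

func (p *probeState) onReadinessProbe(ok bool) string {
	if !p.started {
		return "" // readiness is not reported until startup has passed
	}
	if ok {
		p.ready = true
		return "ready"
	}
	p.ready = false
	return "unready"
}

func main() {
	var p probeState
	fmt.Printf("readiness=%q\n", p.onReadinessProbe(true)) // "" - startup not passed yet
	p.onStartupProbe(false)                                // startup "unhealthy"
	p.onStartupProbe(true)                                 // startup "started"
	fmt.Printf("readiness=%q\n", p.onReadinessProbe(true)) // "ready"
}
```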
Dec 09 12:09:58 crc kubenswrapper[4970]: I1209 12:09:58.654495 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7q4dt"
Dec 09 12:09:58 crc kubenswrapper[4970]: I1209 12:09:58.654541 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7q4dt" event={"ID":"4f609a40-e848-4c4b-bb11-7dd03189cc76","Type":"ContainerDied","Data":"181c6418543e09e9eda24c565fb22c5c7852445eb1ec2d5bb7b7d29011c0cae5"}
Dec 09 12:09:58 crc kubenswrapper[4970]: I1209 12:09:58.654571 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7q4dt" event={"ID":"4f609a40-e848-4c4b-bb11-7dd03189cc76","Type":"ContainerDied","Data":"bfd0bac91877a313f467e67395a3deb4e9fe2811753ad22fc9f88244a5aff762"}
Dec 09 12:09:58 crc kubenswrapper[4970]: I1209 12:09:58.654600 4970 scope.go:117] "RemoveContainer" containerID="181c6418543e09e9eda24c565fb22c5c7852445eb1ec2d5bb7b7d29011c0cae5"
Dec 09 12:09:58 crc kubenswrapper[4970]: I1209 12:09:58.660563 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f609a40-e848-4c4b-bb11-7dd03189cc76-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4f609a40-e848-4c4b-bb11-7dd03189cc76" (UID: "4f609a40-e848-4c4b-bb11-7dd03189cc76"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:09:58 crc kubenswrapper[4970]: I1209 12:09:58.673823 4970 scope.go:117] "RemoveContainer" containerID="aad1a602f199a754b8dc64a5162cb0e903532835f5d6c3f1ca6c43e817cf59b4"
Dec 09 12:09:58 crc kubenswrapper[4970]: I1209 12:09:58.691643 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f609a40-e848-4c4b-bb11-7dd03189cc76-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 09 12:09:58 crc kubenswrapper[4970]: I1209 12:09:58.691684 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ssw4\" (UniqueName: \"kubernetes.io/projected/4f609a40-e848-4c4b-bb11-7dd03189cc76-kube-api-access-9ssw4\") on node \"crc\" DevicePath \"\""
Dec 09 12:09:58 crc kubenswrapper[4970]: I1209 12:09:58.691696 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f609a40-e848-4c4b-bb11-7dd03189cc76-utilities\") on node \"crc\" DevicePath \"\""
Dec 09 12:09:58 crc kubenswrapper[4970]: I1209 12:09:58.695178 4970 scope.go:117] "RemoveContainer" containerID="bcaae4cf82eff447213201df58b97372a151fdae245d10b1bd8c8784572b7b81"
Dec 09 12:09:58 crc kubenswrapper[4970]: I1209 12:09:58.712011 4970 scope.go:117] "RemoveContainer" containerID="181c6418543e09e9eda24c565fb22c5c7852445eb1ec2d5bb7b7d29011c0cae5"
Dec 09 12:09:58 crc kubenswrapper[4970]: E1209 12:09:58.712518 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"181c6418543e09e9eda24c565fb22c5c7852445eb1ec2d5bb7b7d29011c0cae5\": container with ID starting with 181c6418543e09e9eda24c565fb22c5c7852445eb1ec2d5bb7b7d29011c0cae5 not found: ID does not exist" containerID="181c6418543e09e9eda24c565fb22c5c7852445eb1ec2d5bb7b7d29011c0cae5"
Dec 09 12:09:58 crc kubenswrapper[4970]: I1209 12:09:58.712599 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"181c6418543e09e9eda24c565fb22c5c7852445eb1ec2d5bb7b7d29011c0cae5"} err="failed to get container status \"181c6418543e09e9eda24c565fb22c5c7852445eb1ec2d5bb7b7d29011c0cae5\": rpc error: code = NotFound desc = could not find container \"181c6418543e09e9eda24c565fb22c5c7852445eb1ec2d5bb7b7d29011c0cae5\": container with ID starting with 181c6418543e09e9eda24c565fb22c5c7852445eb1ec2d5bb7b7d29011c0cae5 not found: ID does not exist"
Dec 09 12:09:58 crc kubenswrapper[4970]: I1209 12:09:58.712645 4970 scope.go:117] "RemoveContainer" containerID="aad1a602f199a754b8dc64a5162cb0e903532835f5d6c3f1ca6c43e817cf59b4"
Dec 09 12:09:58 crc kubenswrapper[4970]: E1209 12:09:58.713578 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aad1a602f199a754b8dc64a5162cb0e903532835f5d6c3f1ca6c43e817cf59b4\": container with ID starting with aad1a602f199a754b8dc64a5162cb0e903532835f5d6c3f1ca6c43e817cf59b4 not found: ID does not exist" containerID="aad1a602f199a754b8dc64a5162cb0e903532835f5d6c3f1ca6c43e817cf59b4"
Dec 09 12:09:58 crc kubenswrapper[4970]: I1209 12:09:58.713610 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aad1a602f199a754b8dc64a5162cb0e903532835f5d6c3f1ca6c43e817cf59b4"} err="failed to get container status \"aad1a602f199a754b8dc64a5162cb0e903532835f5d6c3f1ca6c43e817cf59b4\": rpc error: code = NotFound desc = could not find container \"aad1a602f199a754b8dc64a5162cb0e903532835f5d6c3f1ca6c43e817cf59b4\": container with ID starting with aad1a602f199a754b8dc64a5162cb0e903532835f5d6c3f1ca6c43e817cf59b4 not found: ID does not exist"
Dec 09 12:09:58 crc kubenswrapper[4970]: I1209 12:09:58.713631 4970 scope.go:117] "RemoveContainer" containerID="bcaae4cf82eff447213201df58b97372a151fdae245d10b1bd8c8784572b7b81"
Dec 09 12:09:58 crc kubenswrapper[4970]: E1209 12:09:58.713929 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcaae4cf82eff447213201df58b97372a151fdae245d10b1bd8c8784572b7b81\": container with ID starting with bcaae4cf82eff447213201df58b97372a151fdae245d10b1bd8c8784572b7b81 not found: ID does not exist" containerID="bcaae4cf82eff447213201df58b97372a151fdae245d10b1bd8c8784572b7b81"
Dec 09 12:09:58 crc kubenswrapper[4970]: I1209 12:09:58.713969 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcaae4cf82eff447213201df58b97372a151fdae245d10b1bd8c8784572b7b81"} err="failed to get container status \"bcaae4cf82eff447213201df58b97372a151fdae245d10b1bd8c8784572b7b81\": rpc error: code = NotFound desc = could not find container \"bcaae4cf82eff447213201df58b97372a151fdae245d10b1bd8c8784572b7b81\": container with ID starting with bcaae4cf82eff447213201df58b97372a151fdae245d10b1bd8c8784572b7b81 not found: ID does not exist"
Dec 09 12:09:58 crc kubenswrapper[4970]: I1209 12:09:58.987320 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7q4dt"]
Dec 09 12:09:58 crc kubenswrapper[4970]: I1209 12:09:58.991138 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7q4dt"]
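"Killing container with a grace period" with gracePeriod=2, followed roughly two seconds later by the ContainerDied events above, is the classic SIGTERM-then-SIGKILL escalation. A self-contained Unix sketch of the same pattern against a plain process rather than the CRI (the workload and timings are illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	// A stand-in workload that would otherwise run for a minute.
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	const gracePeriod = 2 * time.Second

	// Ask politely first, as the kubelet does via the runtime.
	_ = cmd.Process.Signal(syscall.SIGTERM)

	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	select {
	case err := <-done:
		fmt.Println("exited within grace period:", err)
	case <-time.After(gracePeriod):
		// Grace period expired: escalate to SIGKILL.
		_ = cmd.Process.Kill()
		fmt.Println("grace period expired, killed:", <-done)
	}
}
```

The ~2s gap between the 12:09:56 kill requests and the 12:09:58 ContainerDied events suggests these registry-server containers rode out the full grace period before being killed.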
Dec 09 12:09:59 crc kubenswrapper[4970]: I1209 12:09:59.260777 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dh2c2"
Dec 09 12:09:59 crc kubenswrapper[4970]: I1209 12:09:59.405531 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ef5d3a5-6c33-435e-b59b-a3f5f815cb80-utilities\") pod \"3ef5d3a5-6c33-435e-b59b-a3f5f815cb80\" (UID: \"3ef5d3a5-6c33-435e-b59b-a3f5f815cb80\") "
Dec 09 12:09:59 crc kubenswrapper[4970]: I1209 12:09:59.405619 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ef5d3a5-6c33-435e-b59b-a3f5f815cb80-catalog-content\") pod \"3ef5d3a5-6c33-435e-b59b-a3f5f815cb80\" (UID: \"3ef5d3a5-6c33-435e-b59b-a3f5f815cb80\") "
Dec 09 12:09:59 crc kubenswrapper[4970]: I1209 12:09:59.405680 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fq64\" (UniqueName: \"kubernetes.io/projected/3ef5d3a5-6c33-435e-b59b-a3f5f815cb80-kube-api-access-5fq64\") pod \"3ef5d3a5-6c33-435e-b59b-a3f5f815cb80\" (UID: \"3ef5d3a5-6c33-435e-b59b-a3f5f815cb80\") "
Dec 09 12:09:59 crc kubenswrapper[4970]: I1209 12:09:59.406915 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ef5d3a5-6c33-435e-b59b-a3f5f815cb80-utilities" (OuterVolumeSpecName: "utilities") pod "3ef5d3a5-6c33-435e-b59b-a3f5f815cb80" (UID: "3ef5d3a5-6c33-435e-b59b-a3f5f815cb80"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:09:59 crc kubenswrapper[4970]: I1209 12:09:59.413462 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ef5d3a5-6c33-435e-b59b-a3f5f815cb80-kube-api-access-5fq64" (OuterVolumeSpecName: "kube-api-access-5fq64") pod "3ef5d3a5-6c33-435e-b59b-a3f5f815cb80" (UID: "3ef5d3a5-6c33-435e-b59b-a3f5f815cb80"). InnerVolumeSpecName "kube-api-access-5fq64". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:09:59 crc kubenswrapper[4970]: I1209 12:09:59.453712 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ef5d3a5-6c33-435e-b59b-a3f5f815cb80-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3ef5d3a5-6c33-435e-b59b-a3f5f815cb80" (UID: "3ef5d3a5-6c33-435e-b59b-a3f5f815cb80"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:09:59 crc kubenswrapper[4970]: I1209 12:09:59.507760 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ef5d3a5-6c33-435e-b59b-a3f5f815cb80-utilities\") on node \"crc\" DevicePath \"\""
Dec 09 12:09:59 crc kubenswrapper[4970]: I1209 12:09:59.507820 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ef5d3a5-6c33-435e-b59b-a3f5f815cb80-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 09 12:09:59 crc kubenswrapper[4970]: I1209 12:09:59.507835 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fq64\" (UniqueName: \"kubernetes.io/projected/3ef5d3a5-6c33-435e-b59b-a3f5f815cb80-kube-api-access-5fq64\") on node \"crc\" DevicePath \"\""
Dec 09 12:09:59 crc kubenswrapper[4970]: I1209 12:09:59.663638 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dh2c2" event={"ID":"3ef5d3a5-6c33-435e-b59b-a3f5f815cb80","Type":"ContainerDied","Data":"aef7ce9f88dba304002c35510d7921607a2d743b1d8b6e12761217468e96a8de"}
Dec 09 12:09:59 crc kubenswrapper[4970]: I1209 12:09:59.663678 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dh2c2"
Dec 09 12:09:59 crc kubenswrapper[4970]: I1209 12:09:59.663697 4970 scope.go:117] "RemoveContainer" containerID="dd0d0596ffa5f4ba51a0df11e86fa30303d6b57c36737c2224cbf686bc737b33"
Dec 09 12:09:59 crc kubenswrapper[4970]: I1209 12:09:59.668549 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qcxnq" event={"ID":"343921a4-b33b-485d-b5ac-84d3d87b6ed7","Type":"ContainerStarted","Data":"5535af8ee5c11a1ead0d5d023ad54beb7588abe043aaf70f09e024fe0bfc2cb7"}
Dec 09 12:09:59 crc kubenswrapper[4970]: I1209 12:09:59.687331 4970 scope.go:117] "RemoveContainer" containerID="97d6761603bda9d0c83011e3e32d4ed573e912db1ca92598f88ca778a7e9f1ec"
Dec 09 12:09:59 crc kubenswrapper[4970]: I1209 12:09:59.693129 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dh2c2"]
Dec 09 12:09:59 crc kubenswrapper[4970]: I1209 12:09:59.700657 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dh2c2"]
Dec 09 12:09:59 crc kubenswrapper[4970]: I1209 12:09:59.706367 4970 scope.go:117] "RemoveContainer" containerID="fc81b36d2c62bf8099932172c73949881ec3610a96abe78fe64cb7b6de05f293"
Dec 09 12:09:59 crc kubenswrapper[4970]: I1209 12:09:59.818500 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ef5d3a5-6c33-435e-b59b-a3f5f815cb80" path="/var/lib/kubelet/pods/3ef5d3a5-6c33-435e-b59b-a3f5f815cb80/volumes"
Dec 09 12:09:59 crc kubenswrapper[4970]: I1209 12:09:59.819057 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f609a40-e848-4c4b-bb11-7dd03189cc76" path="/var/lib/kubelet/pods/4f609a40-e848-4c4b-bb11-7dd03189cc76/volumes"
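"Cleaned up orphaned pod volumes dir" is the kubelet's housekeeping pass: once every volume belonging to a terminated pod has been torn down, the leftover tree under /var/lib/kubelet/pods/<uid>/volumes is removed. A rough sketch of such a check against a scratch directory (the path layout mirrors the log; the cleanup policy below is illustrative, not the kubelet's exact logic):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cleanupOrphanedPodDir removes a pod's volumes dir only if no volume
// subdirectories remain, mirroring the "orphaned" check.
func cleanupOrphanedPodDir(kubeletRoot, podUID string) error {
	volumesDir := filepath.Join(kubeletRoot, "pods", podUID, "volumes")

	plugins, err := os.ReadDir(volumesDir)
	if err != nil {
		return err
	}
	for _, plugin := range plugins {
		sub, err := os.ReadDir(filepath.Join(volumesDir, plugin.Name()))
		if err != nil {
			return err
		}
		if len(sub) > 0 {
			return fmt.Errorf("pod %s still has %d volume(s) under %s",
				podUID, len(sub), plugin.Name())
		}
	}
	return os.RemoveAll(volumesDir)
}

func main() {
	// Demo against a scratch directory rather than a real kubelet root.
	root, _ := os.MkdirTemp("", "kubelet")
	defer os.RemoveAll(root)

	uid := "7ff193ec-f8bf-47b9-98ce-221e7a77561e"
	_ = os.MkdirAll(filepath.Join(root, "pods", uid, "volumes", "kubernetes.io~empty-dir"), 0o755)

	fmt.Println(cleanupOrphanedPodDir(root, uid)) // <nil>: empty plugin dir, safe to remove
}
```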
Dec 09 12:10:00 crc kubenswrapper[4970]: I1209 12:10:00.703488 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qcxnq" podStartSLOduration=4.508500596 podStartE2EDuration="1m7.703461779s" podCreationTimestamp="2025-12-09 12:08:53 +0000 UTC" firstStartedPulling="2025-12-09 12:08:55.066167729 +0000 UTC m=+147.626648780" lastFinishedPulling="2025-12-09 12:09:58.261128902 +0000 UTC m=+210.821609963" observedRunningTime="2025-12-09 12:10:00.702325764 +0000 UTC m=+213.262806855" watchObservedRunningTime="2025-12-09 12:10:00.703461779 +0000 UTC m=+213.263942870"
Dec 09 12:10:01 crc kubenswrapper[4970]: I1209 12:10:01.244577 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2vppr"]
Dec 09 12:10:01 crc kubenswrapper[4970]: I1209 12:10:01.244932 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2vppr" podUID="2f56c523-468e-4195-b60a-9e307910e2cf" containerName="registry-server" containerID="cri-o://1b533479cc2c5e59c3e304484301628c98ced1632d4a6972a03cb9c9069a6f75" gracePeriod=2
Dec 09 12:10:01 crc kubenswrapper[4970]: I1209 12:10:01.690233 4970 generic.go:334] "Generic (PLEG): container finished" podID="2f56c523-468e-4195-b60a-9e307910e2cf" containerID="1b533479cc2c5e59c3e304484301628c98ced1632d4a6972a03cb9c9069a6f75" exitCode=0
Dec 09 12:10:01 crc kubenswrapper[4970]: I1209 12:10:01.690318 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vppr" event={"ID":"2f56c523-468e-4195-b60a-9e307910e2cf","Type":"ContainerDied","Data":"1b533479cc2c5e59c3e304484301628c98ced1632d4a6972a03cb9c9069a6f75"}
Dec 09 12:10:02 crc kubenswrapper[4970]: I1209 12:10:02.229691 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2vppr"
Dec 09 12:10:02 crc kubenswrapper[4970]: I1209 12:10:02.345776 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f56c523-468e-4195-b60a-9e307910e2cf-utilities\") pod \"2f56c523-468e-4195-b60a-9e307910e2cf\" (UID: \"2f56c523-468e-4195-b60a-9e307910e2cf\") "
Dec 09 12:10:02 crc kubenswrapper[4970]: I1209 12:10:02.345955 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm9dt\" (UniqueName: \"kubernetes.io/projected/2f56c523-468e-4195-b60a-9e307910e2cf-kube-api-access-cm9dt\") pod \"2f56c523-468e-4195-b60a-9e307910e2cf\" (UID: \"2f56c523-468e-4195-b60a-9e307910e2cf\") "
Dec 09 12:10:02 crc kubenswrapper[4970]: I1209 12:10:02.346048 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f56c523-468e-4195-b60a-9e307910e2cf-catalog-content\") pod \"2f56c523-468e-4195-b60a-9e307910e2cf\" (UID: \"2f56c523-468e-4195-b60a-9e307910e2cf\") "
Dec 09 12:10:02 crc kubenswrapper[4970]: I1209 12:10:02.349183 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f56c523-468e-4195-b60a-9e307910e2cf-utilities" (OuterVolumeSpecName: "utilities") pod "2f56c523-468e-4195-b60a-9e307910e2cf" (UID: "2f56c523-468e-4195-b60a-9e307910e2cf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:10:02 crc kubenswrapper[4970]: I1209 12:10:02.351858 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f56c523-468e-4195-b60a-9e307910e2cf-kube-api-access-cm9dt" (OuterVolumeSpecName: "kube-api-access-cm9dt") pod "2f56c523-468e-4195-b60a-9e307910e2cf" (UID: "2f56c523-468e-4195-b60a-9e307910e2cf"). InnerVolumeSpecName "kube-api-access-cm9dt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:10:02 crc kubenswrapper[4970]: I1209 12:10:02.448342 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f56c523-468e-4195-b60a-9e307910e2cf-utilities\") on node \"crc\" DevicePath \"\""
Dec 09 12:10:02 crc kubenswrapper[4970]: I1209 12:10:02.449574 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cm9dt\" (UniqueName: \"kubernetes.io/projected/2f56c523-468e-4195-b60a-9e307910e2cf-kube-api-access-cm9dt\") on node \"crc\" DevicePath \"\""
Dec 09 12:10:02 crc kubenswrapper[4970]: I1209 12:10:02.517946 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f56c523-468e-4195-b60a-9e307910e2cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f56c523-468e-4195-b60a-9e307910e2cf" (UID: "2f56c523-468e-4195-b60a-9e307910e2cf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:10:02 crc kubenswrapper[4970]: I1209 12:10:02.551508 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f56c523-468e-4195-b60a-9e307910e2cf-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 09 12:10:02 crc kubenswrapper[4970]: I1209 12:10:02.698324 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vppr" event={"ID":"2f56c523-468e-4195-b60a-9e307910e2cf","Type":"ContainerDied","Data":"c5387dc608974106c5dbd37a50a3603be050ca74900065906f99e1f2c2632703"}
Dec 09 12:10:02 crc kubenswrapper[4970]: I1209 12:10:02.698390 4970 scope.go:117] "RemoveContainer" containerID="1b533479cc2c5e59c3e304484301628c98ced1632d4a6972a03cb9c9069a6f75"
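Each teardown above runs the same reconciler_common.go choreography: compare the desired world (volumes that should be mounted for running pods) against the actual world (volumes still mounted), start UnmountVolume for anything no longer desired, and record "Volume detached" once TearDown succeeds. A minimal desired-versus-actual loop in that spirit (the sets and messages are illustrative, not the kubelet's data structures):

```go
package main

import "fmt"

// reconcile unmounts anything mounted that is no longer desired.
func reconcile(desired, actual map[string]bool) {
	for vol := range actual {
		if !desired[vol] {
			fmt.Println("operationExecutor.UnmountVolume started for", vol)
			delete(actual, vol) // stands in for a successful TearDown
			fmt.Println("Volume detached for", vol)
		}
	}
}

func main() {
	desired := map[string]bool{} // pod deleted: nothing should stay mounted
	actual := map[string]bool{
		"utilities":             true,
		"catalog-content":       true,
		"kube-api-access-cm9dt": true,
	}
	reconcile(desired, actual)
}
```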
Dec 09 12:10:02 crc kubenswrapper[4970]: I1209 12:10:02.698445 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2vppr"
Dec 09 12:10:02 crc kubenswrapper[4970]: I1209 12:10:02.716821 4970 scope.go:117] "RemoveContainer" containerID="7a06b3aadcedf9ce0bb55e97590d895e3b678753dcc23de0d53c88ef3cf9c40d"
Dec 09 12:10:02 crc kubenswrapper[4970]: I1209 12:10:02.734028 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2vppr"]
Dec 09 12:10:02 crc kubenswrapper[4970]: I1209 12:10:02.737133 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2vppr"]
Dec 09 12:10:02 crc kubenswrapper[4970]: I1209 12:10:02.741486 4970 scope.go:117] "RemoveContainer" containerID="3b22c7a82ee9119da364d40b019156ddff507c519fea73d9c49bc47b566bf566"
Dec 09 12:10:03 crc kubenswrapper[4970]: I1209 12:10:03.820890 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f56c523-468e-4195-b60a-9e307910e2cf" path="/var/lib/kubelet/pods/2f56c523-468e-4195-b60a-9e307910e2cf/volumes"
Dec 09 12:10:03 crc kubenswrapper[4970]: I1209 12:10:03.833597 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" podUID="71161079-7313-4f68-b716-a4650e0af898" containerName="oauth-openshift" containerID="cri-o://5fa79c4a01534940544b556b9a0d80a6a7080db0df288c880bd28566b7df36af" gracePeriod=15
Dec 09 12:10:04 crc kubenswrapper[4970]: I1209 12:10:04.090508 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qcxnq"
Dec 09 12:10:04 crc kubenswrapper[4970]: I1209 12:10:04.090569 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qcxnq"
Dec 09 12:10:04 crc kubenswrapper[4970]: I1209 12:10:04.127816 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qcxnq"
Dec 09 12:10:04 crc kubenswrapper[4970]: I1209 12:10:04.710588 4970 generic.go:334] "Generic (PLEG): container finished" podID="71161079-7313-4f68-b716-a4650e0af898" containerID="5fa79c4a01534940544b556b9a0d80a6a7080db0df288c880bd28566b7df36af" exitCode=0
Dec 09 12:10:04 crc kubenswrapper[4970]: I1209 12:10:04.710713 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" event={"ID":"71161079-7313-4f68-b716-a4650e0af898","Type":"ContainerDied","Data":"5fa79c4a01534940544b556b9a0d80a6a7080db0df288c880bd28566b7df36af"}
Dec 09 12:10:04 crc kubenswrapper[4970]: I1209 12:10:04.757456 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qcxnq"
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.187537 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8txkd"
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.283804 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-user-template-provider-selection\") pod \"71161079-7313-4f68-b716-a4650e0af898\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") "
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.283885 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-service-ca\") pod \"71161079-7313-4f68-b716-a4650e0af898\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") "
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.283924 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/71161079-7313-4f68-b716-a4650e0af898-audit-policies\") pod \"71161079-7313-4f68-b716-a4650e0af898\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") "
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.283965 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-session\") pod \"71161079-7313-4f68-b716-a4650e0af898\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") "
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.284011 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-serving-cert\") pod \"71161079-7313-4f68-b716-a4650e0af898\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") "
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.284049 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-trusted-ca-bundle\") pod \"71161079-7313-4f68-b716-a4650e0af898\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") "
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.284093 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-cliconfig\") pod \"71161079-7313-4f68-b716-a4650e0af898\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") "
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.284138 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-user-template-error\") pod \"71161079-7313-4f68-b716-a4650e0af898\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") "
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.284208 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71161079-7313-4f68-b716-a4650e0af898-audit-dir\") pod \"71161079-7313-4f68-b716-a4650e0af898\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") "
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.284251 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-user-template-login\") pod \"71161079-7313-4f68-b716-a4650e0af898\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") "
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.284294 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71161079-7313-4f68-b716-a4650e0af898-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "71161079-7313-4f68-b716-a4650e0af898" (UID: "71161079-7313-4f68-b716-a4650e0af898"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.284331 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lmf2\" (UniqueName: \"kubernetes.io/projected/71161079-7313-4f68-b716-a4650e0af898-kube-api-access-5lmf2\") pod \"71161079-7313-4f68-b716-a4650e0af898\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") "
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.284370 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-user-idp-0-file-data\") pod \"71161079-7313-4f68-b716-a4650e0af898\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") "
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.284429 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-router-certs\") pod \"71161079-7313-4f68-b716-a4650e0af898\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") "
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.284468 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-ocp-branding-template\") pod \"71161079-7313-4f68-b716-a4650e0af898\" (UID: \"71161079-7313-4f68-b716-a4650e0af898\") "
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.284713 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "71161079-7313-4f68-b716-a4650e0af898" (UID: "71161079-7313-4f68-b716-a4650e0af898"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.284759 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "71161079-7313-4f68-b716-a4650e0af898" (UID: "71161079-7313-4f68-b716-a4650e0af898"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.284775 4970 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71161079-7313-4f68-b716-a4650e0af898-audit-dir\") on node \"crc\" DevicePath \"\""
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.285598 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71161079-7313-4f68-b716-a4650e0af898-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "71161079-7313-4f68-b716-a4650e0af898" (UID: "71161079-7313-4f68-b716-a4650e0af898"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.286100 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "71161079-7313-4f68-b716-a4650e0af898" (UID: "71161079-7313-4f68-b716-a4650e0af898"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.289769 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "71161079-7313-4f68-b716-a4650e0af898" (UID: "71161079-7313-4f68-b716-a4650e0af898"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.289825 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71161079-7313-4f68-b716-a4650e0af898-kube-api-access-5lmf2" (OuterVolumeSpecName: "kube-api-access-5lmf2") pod "71161079-7313-4f68-b716-a4650e0af898" (UID: "71161079-7313-4f68-b716-a4650e0af898"). InnerVolumeSpecName "kube-api-access-5lmf2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.290207 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "71161079-7313-4f68-b716-a4650e0af898" (UID: "71161079-7313-4f68-b716-a4650e0af898"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.294424 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "71161079-7313-4f68-b716-a4650e0af898" (UID: "71161079-7313-4f68-b716-a4650e0af898"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.294686 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "71161079-7313-4f68-b716-a4650e0af898" (UID: "71161079-7313-4f68-b716-a4650e0af898"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.295032 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "71161079-7313-4f68-b716-a4650e0af898" (UID: "71161079-7313-4f68-b716-a4650e0af898"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.295247 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "71161079-7313-4f68-b716-a4650e0af898" (UID: "71161079-7313-4f68-b716-a4650e0af898"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.295488 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "71161079-7313-4f68-b716-a4650e0af898" (UID: "71161079-7313-4f68-b716-a4650e0af898"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.295717 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "71161079-7313-4f68-b716-a4650e0af898" (UID: "71161079-7313-4f68-b716-a4650e0af898"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.386308 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.386347 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.386361 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lmf2\" (UniqueName: \"kubernetes.io/projected/71161079-7313-4f68-b716-a4650e0af898-kube-api-access-5lmf2\") on node \"crc\" DevicePath \"\""
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.386373 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.386388 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.386401 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.386416 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.386430 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.386441 4970 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/71161079-7313-4f68-b716-a4650e0af898-audit-policies\") on node \"crc\" DevicePath \"\""
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.386454 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.386466 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
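The oauth-openshift-558db77b4-8txkd teardown above spans four volume plugin types; note that the host-path volume (audit-dir) is torn down and detached almost immediately, since a host path leaves nothing for the kubelet to clean up, while the configmap, secret, and projected volumes each complete TearDown before detaching. A compact restatement of the volume inventory from the entries above (a summary, not new information):

```go
package main

import (
	"fmt"
	"sort"
)

func main() {
	// Volume -> plugin type, as unmounted for pod 71161079-7313-4f68-b716-a4650e0af898.
	volumes := map[string]string{
		"audit-dir":                                    "host-path",
		"audit-policies":                               "configmap",
		"v4-0-config-system-cliconfig":                 "configmap",
		"v4-0-config-system-service-ca":                "configmap",
		"v4-0-config-system-trusted-ca-bundle":         "configmap",
		"kube-api-access-5lmf2":                        "projected",
		"v4-0-config-system-ocp-branding-template":     "secret",
		"v4-0-config-system-router-certs":              "secret",
		"v4-0-config-system-serving-cert":              "secret",
		"v4-0-config-system-session":                   "secret",
		"v4-0-config-user-idp-0-file-data":             "secret",
		"v4-0-config-user-template-error":              "secret",
		"v4-0-config-user-template-login":              "secret",
		"v4-0-config-user-template-provider-selection": "secret",
	}

	names := make([]string, 0, len(volumes))
	for name := range volumes {
		names = append(names, name)
	}
	sort.Strings(names)
	for _, name := range names {
		fmt.Printf("%-45s kubernetes.io/%s\n", name, volumes[name])
	}
}
```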
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.386477 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.386487 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/71161079-7313-4f68-b716-a4650e0af898-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.720589 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-8txkd"
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.720589 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-8txkd" event={"ID":"71161079-7313-4f68-b716-a4650e0af898","Type":"ContainerDied","Data":"ee962df48d475c43f3cec9366a8a618f2b52ba156c229ebe6465297ac37864d1"}
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.721093 4970 scope.go:117] "RemoveContainer" containerID="5fa79c4a01534940544b556b9a0d80a6a7080db0df288c880bd28566b7df36af"
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.758818 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8txkd"]
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.762623 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-8txkd"]
Dec 09 12:10:05 crc kubenswrapper[4970]: I1209 12:10:05.820152 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71161079-7313-4f68-b716-a4650e0af898" path="/var/lib/kubelet/pods/71161079-7313-4f68-b716-a4650e0af898/volumes"
Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.106721 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-54bd787995-2px6c"]
Dec 09 12:10:11 crc kubenswrapper[4970]: E1209 12:10:11.107187 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ef5d3a5-6c33-435e-b59b-a3f5f815cb80" containerName="extract-content"
Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.107199 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ef5d3a5-6c33-435e-b59b-a3f5f815cb80" containerName="extract-content"
Dec 09 12:10:11 crc kubenswrapper[4970]: E1209 12:10:11.107207 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f609a40-e848-4c4b-bb11-7dd03189cc76" containerName="registry-server"
Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.107213 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f609a40-e848-4c4b-bb11-7dd03189cc76" containerName="registry-server"
Dec 09 12:10:11 crc kubenswrapper[4970]: E1209 12:10:11.107226 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ef5d3a5-6c33-435e-b59b-a3f5f815cb80" containerName="registry-server"
Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.107232 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ef5d3a5-6c33-435e-b59b-a3f5f815cb80" containerName="registry-server"
Dec 09 12:10:11 crc kubenswrapper[4970]: E1209 12:10:11.107240 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f56c523-468e-4195-b60a-9e307910e2cf" containerName="registry-server"
Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.107261 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f56c523-468e-4195-b60a-9e307910e2cf" containerName="registry-server"
Dec 09 12:10:11 crc kubenswrapper[4970]: E1209 12:10:11.107269 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ff193ec-f8bf-47b9-98ce-221e7a77561e" containerName="registry-server"
Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.107275 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ff193ec-f8bf-47b9-98ce-221e7a77561e" containerName="registry-server"
Dec 09 12:10:11 crc kubenswrapper[4970]: E1209 12:10:11.107285 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f609a40-e848-4c4b-bb11-7dd03189cc76" containerName="extract-content"
Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.107290 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f609a40-e848-4c4b-bb11-7dd03189cc76" containerName="extract-content"
Dec 09 12:10:11 crc kubenswrapper[4970]: E1209 12:10:11.107299 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f609a40-e848-4c4b-bb11-7dd03189cc76" containerName="extract-utilities"
Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.107306 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f609a40-e848-4c4b-bb11-7dd03189cc76" containerName="extract-utilities"
Dec 09 12:10:11 crc kubenswrapper[4970]: E1209 12:10:11.107313 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ff193ec-f8bf-47b9-98ce-221e7a77561e" containerName="extract-utilities"
Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.107318 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ff193ec-f8bf-47b9-98ce-221e7a77561e" containerName="extract-utilities"
Dec 09 12:10:11 crc kubenswrapper[4970]: E1209 12:10:11.107329 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71161079-7313-4f68-b716-a4650e0af898" containerName="oauth-openshift"
Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.107335 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="71161079-7313-4f68-b716-a4650e0af898" containerName="oauth-openshift"
Dec 09 12:10:11 crc kubenswrapper[4970]: E1209 12:10:11.107343 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f56c523-468e-4195-b60a-9e307910e2cf" containerName="extract-content"
Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.107348 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f56c523-468e-4195-b60a-9e307910e2cf" containerName="extract-content"
Dec 09 12:10:11 crc kubenswrapper[4970]: E1209 12:10:11.107356 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f56c523-468e-4195-b60a-9e307910e2cf" containerName="extract-utilities"
Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.107361 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f56c523-468e-4195-b60a-9e307910e2cf" containerName="extract-utilities"
Dec 09 12:10:11 crc kubenswrapper[4970]: E1209 12:10:11.107369 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ef5d3a5-6c33-435e-b59b-a3f5f815cb80" containerName="extract-utilities"
Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.107374 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ef5d3a5-6c33-435e-b59b-a3f5f815cb80" containerName="extract-utilities"
Dec 09 12:10:11 crc kubenswrapper[4970]: E1209 12:10:11.107383 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ff193ec-f8bf-47b9-98ce-221e7a77561e" containerName="extract-content"
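These cpu_manager/state_mem pairs fire while admitting the replacement pod oauth-openshift-54bd787995-2px6c: before admission, the resource managers drop per-container assignments left behind by pods that no longer exist, one log pair per stale (podUID, containerName) combination. A minimal sketch of that keyed cleanup (the state types are illustrative stand-ins, not the kubelet's cpumanager state):

```go
package main

import "fmt"

type key struct{ podUID, container string }

// removeStaleState drops assignments belonging to pods that are gone,
// as the CPU and memory managers do before admitting a new pod.
func removeStaleState(assignments map[key]string, activePods map[string]bool) {
	for k := range assignments {
		if !activePods[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
				k.podUID, k.container)
			delete(assignments, k)
		}
	}
}

func main() {
	assignments := map[key]string{
		{"2f56c523-468e-4195-b60a-9e307910e2cf", "registry-server"}: "cpuset 0-3",
		{"71161079-7313-4f68-b716-a4650e0af898", "oauth-openshift"}: "cpuset 0-3",
	}
	// Neither pod exists any more, so both entries are pruned.
	removeStaleState(assignments, map[string]bool{})
	fmt.Println("remaining:", len(assignments))
}
```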
containerName="extract-content" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.107473 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ff193ec-f8bf-47b9-98ce-221e7a77561e" containerName="registry-server" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.107481 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="71161079-7313-4f68-b716-a4650e0af898" containerName="oauth-openshift" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.107490 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ef5d3a5-6c33-435e-b59b-a3f5f815cb80" containerName="registry-server" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.107499 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f609a40-e848-4c4b-bb11-7dd03189cc76" containerName="registry-server" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.107510 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f56c523-468e-4195-b60a-9e307910e2cf" containerName="registry-server" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.107873 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.114917 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.115182 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.115916 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.116128 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.116355 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.116506 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.116926 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.117035 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.117056 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.117075 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.117081 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.117302 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 09 12:10:11 crc kubenswrapper[4970]: 
I1209 12:10:11.131607 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.136222 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.144205 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.150354 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-54bd787995-2px6c"] Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.257971 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-audit-dir\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.258473 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-v4-0-config-user-template-login\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.258713 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.259007 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.259235 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.259519 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 
12:10:11.259718 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-v4-0-config-system-service-ca\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.259967 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bp8v\" (UniqueName: \"kubernetes.io/projected/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-kube-api-access-9bp8v\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.260348 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-v4-0-config-system-session\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.260732 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.261088 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-v4-0-config-user-template-error\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.261457 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-v4-0-config-system-router-certs\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.261732 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-audit-policies\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.261949 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " 
pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.363154 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.363210 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-v4-0-config-system-service-ca\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.363228 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bp8v\" (UniqueName: \"kubernetes.io/projected/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-kube-api-access-9bp8v\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.363262 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-v4-0-config-system-session\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.363291 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.363321 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-v4-0-config-user-template-error\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.363339 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-v4-0-config-system-router-certs\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.363371 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-audit-policies\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " 
pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.363394 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.363423 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-audit-dir\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.363450 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-v4-0-config-user-template-login\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.363474 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.363493 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.363517 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.364759 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-audit-dir\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.365297 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-v4-0-config-system-service-ca\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" 
Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.365474 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-audit-policies\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.365700 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.367077 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.372549 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.373086 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-v4-0-config-system-router-certs\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.373393 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-v4-0-config-user-template-error\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.377788 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-v4-0-config-user-template-login\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.378365 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.379638 4970 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.381778 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-v4-0-config-system-session\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.391503 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bp8v\" (UniqueName: \"kubernetes.io/projected/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-kube-api-access-9bp8v\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.392479 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-54bd787995-2px6c\" (UID: \"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6\") " pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.424128 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:11 crc kubenswrapper[4970]: I1209 12:10:11.856097 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-54bd787995-2px6c"] Dec 09 12:10:11 crc kubenswrapper[4970]: W1209 12:10:11.868837 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5f6b07c_bc59_44ce_bde6_11b02ffc4ac6.slice/crio-22dd979d84183462e653140280d9c51a85712f8dcc6609527bd81ea593f62084 WatchSource:0}: Error finding container 22dd979d84183462e653140280d9c51a85712f8dcc6609527bd81ea593f62084: Status 404 returned error can't find the container with id 22dd979d84183462e653140280d9c51a85712f8dcc6609527bd81ea593f62084 Dec 09 12:10:12 crc kubenswrapper[4970]: I1209 12:10:12.760957 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" event={"ID":"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6","Type":"ContainerStarted","Data":"7f79ed35d3f45fc314516cad2382e4f9da0fec0e0ac9b987675389a5f3a0689e"} Dec 09 12:10:12 crc kubenswrapper[4970]: I1209 12:10:12.761035 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" event={"ID":"b5f6b07c-bc59-44ce-bde6-11b02ffc4ac6","Type":"ContainerStarted","Data":"22dd979d84183462e653140280d9c51a85712f8dcc6609527bd81ea593f62084"} Dec 09 12:10:12 crc kubenswrapper[4970]: I1209 12:10:12.761198 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:12 crc kubenswrapper[4970]: I1209 12:10:12.782483 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" podStartSLOduration=34.782465699 podStartE2EDuration="34.782465699s" podCreationTimestamp="2025-12-09 12:09:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:10:12.781148088 +0000 UTC m=+225.341629139" watchObservedRunningTime="2025-12-09 12:10:12.782465699 +0000 UTC m=+225.342946760" Dec 09 12:10:13 crc kubenswrapper[4970]: I1209 12:10:13.217441 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-54bd787995-2px6c" Dec 09 12:10:16 crc kubenswrapper[4970]: I1209 12:10:16.010825 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:10:16 crc kubenswrapper[4970]: I1209 12:10:16.010961 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:10:16 crc kubenswrapper[4970]: I1209 12:10:16.011037 4970 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" Dec 09 12:10:16 crc kubenswrapper[4970]: I1209 12:10:16.012169 4970 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"44e1cad556f69de186f04494224fdf6a124eac70716bd1838622a640d8045baf"} pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 12:10:16 crc kubenswrapper[4970]: I1209 12:10:16.012311 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" containerID="cri-o://44e1cad556f69de186f04494224fdf6a124eac70716bd1838622a640d8045baf" gracePeriod=600 Dec 09 12:10:16 crc kubenswrapper[4970]: I1209 12:10:16.791619 4970 generic.go:334] "Generic (PLEG): container finished" podID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerID="44e1cad556f69de186f04494224fdf6a124eac70716bd1838622a640d8045baf" exitCode=0 Dec 09 12:10:16 crc kubenswrapper[4970]: I1209 12:10:16.791774 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerDied","Data":"44e1cad556f69de186f04494224fdf6a124eac70716bd1838622a640d8045baf"} Dec 09 12:10:16 crc kubenswrapper[4970]: I1209 12:10:16.792202 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerStarted","Data":"ceaaeecb8c51f67adfcf9a5e80757db7376be95abcbbe814f7fcd0b58bf39bf2"} Dec 09 12:10:18 crc kubenswrapper[4970]: I1209 12:10:18.783286 4970 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 09 12:10:18 crc kubenswrapper[4970]: I1209 12:10:18.784111 4970 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 09 12:10:18 crc kubenswrapper[4970]: I1209 12:10:18.784131 4970 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 09 12:10:18 crc kubenswrapper[4970]: E1209 12:10:18.784230 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 09 12:10:18 crc kubenswrapper[4970]: I1209 12:10:18.784241 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 09 12:10:18 crc kubenswrapper[4970]: E1209 12:10:18.784283 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 09 12:10:18 crc kubenswrapper[4970]: I1209 12:10:18.784292 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 09 12:10:18 crc kubenswrapper[4970]: E1209 12:10:18.784303 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 09 12:10:18 crc kubenswrapper[4970]: I1209 12:10:18.784309 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 09 12:10:18 crc kubenswrapper[4970]: E1209 12:10:18.784321 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" Dec 09 12:10:18 crc kubenswrapper[4970]: I1209 12:10:18.784327 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 09 12:10:18 crc kubenswrapper[4970]: I1209 12:10:18.784324 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 12:10:18 crc kubenswrapper[4970]: I1209 12:10:18.784555 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d" gracePeriod=15 Dec 09 12:10:18 crc kubenswrapper[4970]: E1209 12:10:18.784340 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 09 12:10:18 crc kubenswrapper[4970]: I1209 12:10:18.784719 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 09 12:10:18 crc kubenswrapper[4970]: E1209 12:10:18.784741 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 09 12:10:18 crc kubenswrapper[4970]: I1209 12:10:18.784747 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 09 12:10:18 crc kubenswrapper[4970]: E1209 12:10:18.784754 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 09 12:10:18 crc kubenswrapper[4970]: I1209 12:10:18.784771 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 09 12:10:18 crc kubenswrapper[4970]: I1209 12:10:18.784948 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 09 12:10:18 crc kubenswrapper[4970]: I1209 12:10:18.784982 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 09 12:10:18 crc kubenswrapper[4970]: I1209 12:10:18.784991 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 09 12:10:18 crc kubenswrapper[4970]: I1209 12:10:18.785000 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 09 12:10:18 crc kubenswrapper[4970]: I1209 12:10:18.785009 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 09 12:10:18 crc kubenswrapper[4970]: I1209 12:10:18.785083 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44" gracePeriod=15 Dec 09 12:10:18 crc kubenswrapper[4970]: I1209 
12:10:18.785134 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039" gracePeriod=15 Dec 09 12:10:18 crc kubenswrapper[4970]: I1209 12:10:18.785163 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7" gracePeriod=15 Dec 09 12:10:18 crc kubenswrapper[4970]: I1209 12:10:18.785184 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 09 12:10:18 crc kubenswrapper[4970]: I1209 12:10:18.785190 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379" gracePeriod=15 Dec 09 12:10:18 crc kubenswrapper[4970]: I1209 12:10:18.789063 4970 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Dec 09 12:10:18 crc kubenswrapper[4970]: I1209 12:10:18.818845 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 09 12:10:18 crc kubenswrapper[4970]: I1209 12:10:18.973417 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:10:18 crc kubenswrapper[4970]: I1209 12:10:18.973677 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 12:10:18 crc kubenswrapper[4970]: I1209 12:10:18.973741 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 12:10:18 crc kubenswrapper[4970]: I1209 12:10:18.973801 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:10:18 crc kubenswrapper[4970]: I1209 12:10:18.973839 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:10:18 crc kubenswrapper[4970]: I1209 12:10:18.973853 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 12:10:18 crc kubenswrapper[4970]: I1209 12:10:18.973878 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 12:10:18 crc kubenswrapper[4970]: I1209 12:10:18.973927 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 12:10:19 crc kubenswrapper[4970]: I1209 12:10:19.075092 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 12:10:19 crc kubenswrapper[4970]: I1209 12:10:19.075152 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:10:19 crc kubenswrapper[4970]: I1209 12:10:19.075194 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:10:19 crc kubenswrapper[4970]: I1209 12:10:19.075204 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 12:10:19 crc kubenswrapper[4970]: I1209 12:10:19.075236 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 12:10:19 crc kubenswrapper[4970]: I1209 12:10:19.075283 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:10:19 crc kubenswrapper[4970]: I1209 12:10:19.075286 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:10:19 crc kubenswrapper[4970]: I1209 12:10:19.075214 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 12:10:19 crc kubenswrapper[4970]: I1209 12:10:19.075340 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 12:10:19 crc kubenswrapper[4970]: I1209 12:10:19.075368 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 12:10:19 crc kubenswrapper[4970]: I1209 12:10:19.075392 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:10:19 crc kubenswrapper[4970]: I1209 12:10:19.075405 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 12:10:19 crc kubenswrapper[4970]: I1209 12:10:19.075471 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 12:10:19 crc kubenswrapper[4970]: I1209 12:10:19.075482 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:10:19 crc kubenswrapper[4970]: I1209 12:10:19.075545 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 12:10:19 crc kubenswrapper[4970]: I1209 12:10:19.075571 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 12:10:19 crc kubenswrapper[4970]: I1209 12:10:19.113372 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 12:10:19 crc kubenswrapper[4970]: E1209 12:10:19.141360 4970 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.245:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187f8ad35033174e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 12:10:19.140781902 +0000 UTC m=+231.701262963,LastTimestamp:2025-12-09 12:10:19.140781902 +0000 UTC m=+231.701262963,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 12:10:19 crc kubenswrapper[4970]: I1209 12:10:19.810671 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"37aab8e7b5fa5fd799b2c36740359df83d8734908c329c2a3b9451241db3a098"} Dec 09 12:10:19 crc kubenswrapper[4970]: I1209 12:10:19.810998 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"6755004e93a9ee9e958466936d72169f254584fac7521645fa793944438e38b0"} Dec 09 12:10:19 crc kubenswrapper[4970]: I1209 12:10:19.812765 4970 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Dec 09 12:10:19 crc kubenswrapper[4970]: I1209 12:10:19.814474 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 09 12:10:19 crc kubenswrapper[4970]: I1209 12:10:19.815655 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 09 12:10:19 crc 
kubenswrapper[4970]: I1209 12:10:19.816287 4970 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44" exitCode=0 Dec 09 12:10:19 crc kubenswrapper[4970]: I1209 12:10:19.816334 4970 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039" exitCode=0 Dec 09 12:10:19 crc kubenswrapper[4970]: I1209 12:10:19.816348 4970 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7" exitCode=0 Dec 09 12:10:19 crc kubenswrapper[4970]: I1209 12:10:19.816362 4970 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379" exitCode=2 Dec 09 12:10:19 crc kubenswrapper[4970]: I1209 12:10:19.818701 4970 generic.go:334] "Generic (PLEG): container finished" podID="f94c63f8-ba2b-46a1-b6c8-a018ff78e407" containerID="b2208118fbc7b39d5509fb2bd389aebdc3f7710e116221b3780df6a27ca108c0" exitCode=0 Dec 09 12:10:19 crc kubenswrapper[4970]: I1209 12:10:19.819544 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f94c63f8-ba2b-46a1-b6c8-a018ff78e407","Type":"ContainerDied","Data":"b2208118fbc7b39d5509fb2bd389aebdc3f7710e116221b3780df6a27ca108c0"} Dec 09 12:10:19 crc kubenswrapper[4970]: I1209 12:10:19.819601 4970 scope.go:117] "RemoveContainer" containerID="ed9af4c4df55f981f8da7ed94cd91fd62985a634697e73822be8ec59cd22fccc" Dec 09 12:10:19 crc kubenswrapper[4970]: I1209 12:10:19.820429 4970 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Dec 09 12:10:19 crc kubenswrapper[4970]: I1209 12:10:19.820629 4970 status_manager.go:851] "Failed to get status for pod" podUID="f94c63f8-ba2b-46a1-b6c8-a018ff78e407" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Dec 09 12:10:20 crc kubenswrapper[4970]: E1209 12:10:20.735276 4970 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.245:6443: connect: connection refused" Dec 09 12:10:20 crc kubenswrapper[4970]: E1209 12:10:20.735794 4970 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.245:6443: connect: connection refused" Dec 09 12:10:20 crc kubenswrapper[4970]: E1209 12:10:20.736063 4970 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.245:6443: connect: connection refused" Dec 09 12:10:20 crc kubenswrapper[4970]: E1209 12:10:20.736305 4970 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.245:6443: connect: connection refused" Dec 09 12:10:20 crc kubenswrapper[4970]: E1209 12:10:20.736518 4970 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.245:6443: connect: connection refused" Dec 09 12:10:20 crc kubenswrapper[4970]: I1209 12:10:20.736553 4970 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 09 12:10:20 crc kubenswrapper[4970]: E1209 12:10:20.736752 4970 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.245:6443: connect: connection refused" interval="200ms" Dec 09 12:10:20 crc kubenswrapper[4970]: I1209 12:10:20.825213 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 09 12:10:20 crc kubenswrapper[4970]: E1209 12:10:20.937909 4970 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.245:6443: connect: connection refused" interval="400ms" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.197423 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.197922 4970 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.198269 4970 status_manager.go:851] "Failed to get status for pod" podUID="f94c63f8-ba2b-46a1-b6c8-a018ff78e407" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.301409 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f94c63f8-ba2b-46a1-b6c8-a018ff78e407-var-lock\") pod \"f94c63f8-ba2b-46a1-b6c8-a018ff78e407\" (UID: \"f94c63f8-ba2b-46a1-b6c8-a018ff78e407\") " Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.301522 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f94c63f8-ba2b-46a1-b6c8-a018ff78e407-kube-api-access\") pod \"f94c63f8-ba2b-46a1-b6c8-a018ff78e407\" (UID: \"f94c63f8-ba2b-46a1-b6c8-a018ff78e407\") " Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.301645 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f94c63f8-ba2b-46a1-b6c8-a018ff78e407-var-lock" (OuterVolumeSpecName: "var-lock") pod "f94c63f8-ba2b-46a1-b6c8-a018ff78e407" (UID: 
"f94c63f8-ba2b-46a1-b6c8-a018ff78e407"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.302522 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f94c63f8-ba2b-46a1-b6c8-a018ff78e407-kubelet-dir\") pod \"f94c63f8-ba2b-46a1-b6c8-a018ff78e407\" (UID: \"f94c63f8-ba2b-46a1-b6c8-a018ff78e407\") " Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.302549 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f94c63f8-ba2b-46a1-b6c8-a018ff78e407-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f94c63f8-ba2b-46a1-b6c8-a018ff78e407" (UID: "f94c63f8-ba2b-46a1-b6c8-a018ff78e407"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.302777 4970 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f94c63f8-ba2b-46a1-b6c8-a018ff78e407-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.302829 4970 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f94c63f8-ba2b-46a1-b6c8-a018ff78e407-var-lock\") on node \"crc\" DevicePath \"\"" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.306110 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f94c63f8-ba2b-46a1-b6c8-a018ff78e407-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f94c63f8-ba2b-46a1-b6c8-a018ff78e407" (UID: "f94c63f8-ba2b-46a1-b6c8-a018ff78e407"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:10:21 crc kubenswrapper[4970]: E1209 12:10:21.340045 4970 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.245:6443: connect: connection refused" interval="800ms" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.404425 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f94c63f8-ba2b-46a1-b6c8-a018ff78e407-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.663231 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.664191 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.664844 4970 status_manager.go:851] "Failed to get status for pod" podUID="f94c63f8-ba2b-46a1-b6c8-a018ff78e407" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.665308 4970 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.665756 4970 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.809145 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.809474 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.809627 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.809827 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.809828 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.809864 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.822895 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.833023 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f94c63f8-ba2b-46a1-b6c8-a018ff78e407","Type":"ContainerDied","Data":"7c08318fc66cdbeaf17b403815ff512d5d6114657c5c8e976184b116fd57bd90"} Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.833061 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c08318fc66cdbeaf17b403815ff512d5d6114657c5c8e976184b116fd57bd90" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.833035 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.835729 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.836443 4970 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d" exitCode=0 Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.836503 4970 scope.go:117] "RemoveContainer" containerID="c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.836525 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.837008 4970 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.837488 4970 status_manager.go:851] "Failed to get status for pod" podUID="f94c63f8-ba2b-46a1-b6c8-a018ff78e407" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.837750 4970 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.851454 4970 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.852169 4970 status_manager.go:851] "Failed to get status for pod" podUID="f94c63f8-ba2b-46a1-b6c8-a018ff78e407" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.852733 4970 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.855750 4970 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.856090 4970 status_manager.go:851] "Failed to get status for pod" podUID="f94c63f8-ba2b-46a1-b6c8-a018ff78e407" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.856650 4970 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.861230 4970 scope.go:117] "RemoveContainer" containerID="ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.877598 4970 scope.go:117] "RemoveContainer" containerID="f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.899608 4970 scope.go:117] "RemoveContainer" containerID="1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.911579 4970 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.911603 4970 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.911612 4970 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.913376 4970 scope.go:117] "RemoveContainer" containerID="1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.927369 4970 scope.go:117] "RemoveContainer" containerID="4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.945609 4970 scope.go:117] "RemoveContainer" containerID="c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44" Dec 09 12:10:21 crc kubenswrapper[4970]: E1209 12:10:21.946142 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\": container with ID starting with c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44 not found: ID does not exist" containerID="c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.946185 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44"} err="failed to get container status \"c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\": rpc error: code = NotFound desc = could not find container \"c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44\": container with ID starting with c4d26ef743080653b25eb47195975af570c225017a6fdd5f8f18ff8f1fe9eb44 not found: ID does not exist" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.946210 4970 scope.go:117] "RemoveContainer" containerID="ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039" Dec 09 12:10:21 crc kubenswrapper[4970]: E1209 12:10:21.946723 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\": container with ID starting with 
ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039 not found: ID does not exist" containerID="ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.946758 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039"} err="failed to get container status \"ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\": rpc error: code = NotFound desc = could not find container \"ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039\": container with ID starting with ab76cb352c45eee46d31ddb4f0c4bd84de13cb0bbcd54c766c73d53f41bfd039 not found: ID does not exist" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.946780 4970 scope.go:117] "RemoveContainer" containerID="f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7" Dec 09 12:10:21 crc kubenswrapper[4970]: E1209 12:10:21.947144 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\": container with ID starting with f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7 not found: ID does not exist" containerID="f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.947175 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7"} err="failed to get container status \"f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\": rpc error: code = NotFound desc = could not find container \"f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7\": container with ID starting with f4378e7dd8157ed2a9c7b3a3d3264eb2de07101314f877f7442a0d62a13e9cf7 not found: ID does not exist" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.947192 4970 scope.go:117] "RemoveContainer" containerID="1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379" Dec 09 12:10:21 crc kubenswrapper[4970]: E1209 12:10:21.947449 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\": container with ID starting with 1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379 not found: ID does not exist" containerID="1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.947477 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379"} err="failed to get container status \"1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\": rpc error: code = NotFound desc = could not find container \"1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379\": container with ID starting with 1cf7dece1e9667fab936d49070f2c45168dba08e386f2fe68b76b4f1e5d4d379 not found: ID does not exist" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.947494 4970 scope.go:117] "RemoveContainer" containerID="1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d" Dec 09 12:10:21 crc kubenswrapper[4970]: E1209 12:10:21.947724 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\": container with ID starting with 1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d not found: ID does not exist" containerID="1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.947754 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d"} err="failed to get container status \"1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\": rpc error: code = NotFound desc = could not find container \"1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d\": container with ID starting with 1a8c40e07c97355be8f0a8e42bbd492cfea1a389eaaf885c2baf9682717fc58d not found: ID does not exist" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.947774 4970 scope.go:117] "RemoveContainer" containerID="4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e" Dec 09 12:10:21 crc kubenswrapper[4970]: E1209 12:10:21.948032 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\": container with ID starting with 4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e not found: ID does not exist" containerID="4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e" Dec 09 12:10:21 crc kubenswrapper[4970]: I1209 12:10:21.948061 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e"} err="failed to get container status \"4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\": rpc error: code = NotFound desc = could not find container \"4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e\": container with ID starting with 4cead47969c7f49519233d63fabe6a69c5f50613b4f6d1b87e484ac92b59e83e not found: ID does not exist" Dec 09 12:10:22 crc kubenswrapper[4970]: E1209 12:10:22.141801 4970 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.245:6443: connect: connection refused" interval="1.6s" Dec 09 12:10:23 crc kubenswrapper[4970]: E1209 12:10:23.424864 4970 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.245:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187f8ad35033174e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 12:10:19.140781902 +0000 UTC m=+231.701262963,LastTimestamp:2025-12-09 12:10:19.140781902 +0000 UTC 
m=+231.701262963,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 12:10:23 crc kubenswrapper[4970]: E1209 12:10:23.743484 4970 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.245:6443: connect: connection refused" interval="3.2s" Dec 09 12:10:24 crc kubenswrapper[4970]: E1209 12:10:24.854341 4970 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.245:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" volumeName="registry-storage" Dec 09 12:10:26 crc kubenswrapper[4970]: E1209 12:10:26.944758 4970 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.245:6443: connect: connection refused" interval="6.4s" Dec 09 12:10:27 crc kubenswrapper[4970]: I1209 12:10:27.814966 4970 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Dec 09 12:10:27 crc kubenswrapper[4970]: I1209 12:10:27.815598 4970 status_manager.go:851] "Failed to get status for pod" podUID="f94c63f8-ba2b-46a1-b6c8-a018ff78e407" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Dec 09 12:10:32 crc kubenswrapper[4970]: I1209 12:10:32.812183 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:10:32 crc kubenswrapper[4970]: I1209 12:10:32.814136 4970 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Dec 09 12:10:32 crc kubenswrapper[4970]: I1209 12:10:32.814774 4970 status_manager.go:851] "Failed to get status for pod" podUID="f94c63f8-ba2b-46a1-b6c8-a018ff78e407" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Dec 09 12:10:32 crc kubenswrapper[4970]: I1209 12:10:32.826834 4970 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="860af16f-74fe-4d2e-95fd-dc63a1975528" Dec 09 12:10:32 crc kubenswrapper[4970]: I1209 12:10:32.826898 4970 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="860af16f-74fe-4d2e-95fd-dc63a1975528" Dec 09 12:10:32 crc kubenswrapper[4970]: E1209 12:10:32.827532 4970 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:10:32 crc kubenswrapper[4970]: I1209 12:10:32.828292 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:10:32 crc kubenswrapper[4970]: W1209 12:10:32.859330 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-4a1ddd15faefd38fbec739cfe1ac421de89ff4e41b7282b2bc5cdfd8e52330e0 WatchSource:0}: Error finding container 4a1ddd15faefd38fbec739cfe1ac421de89ff4e41b7282b2bc5cdfd8e52330e0: Status 404 returned error can't find the container with id 4a1ddd15faefd38fbec739cfe1ac421de89ff4e41b7282b2bc5cdfd8e52330e0 Dec 09 12:10:32 crc kubenswrapper[4970]: I1209 12:10:32.895188 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4a1ddd15faefd38fbec739cfe1ac421de89ff4e41b7282b2bc5cdfd8e52330e0"} Dec 09 12:10:32 crc kubenswrapper[4970]: I1209 12:10:32.906717 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 09 12:10:32 crc kubenswrapper[4970]: I1209 12:10:32.906777 4970 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7" exitCode=1 Dec 09 12:10:32 crc kubenswrapper[4970]: I1209 12:10:32.906812 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7"} Dec 09 12:10:32 crc kubenswrapper[4970]: 
I1209 12:10:32.907540 4970 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Dec 09 12:10:32 crc kubenswrapper[4970]: I1209 12:10:32.907758 4970 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Dec 09 12:10:32 crc kubenswrapper[4970]: I1209 12:10:32.907817 4970 scope.go:117] "RemoveContainer" containerID="3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7" Dec 09 12:10:32 crc kubenswrapper[4970]: I1209 12:10:32.907975 4970 status_manager.go:851] "Failed to get status for pod" podUID="f94c63f8-ba2b-46a1-b6c8-a018ff78e407" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Dec 09 12:10:33 crc kubenswrapper[4970]: E1209 12:10:33.345845 4970 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.245:6443: connect: connection refused" interval="7s" Dec 09 12:10:33 crc kubenswrapper[4970]: E1209 12:10:33.425849 4970 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.245:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187f8ad35033174e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 12:10:19.140781902 +0000 UTC m=+231.701262963,LastTimestamp:2025-12-09 12:10:19.140781902 +0000 UTC m=+231.701262963,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 12:10:33 crc kubenswrapper[4970]: I1209 12:10:33.916305 4970 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="b7516b4803e5e96c013d6828bab3a22c4805d6e1bc692eb5325faae8c2574e4c" exitCode=0 Dec 09 12:10:33 crc kubenswrapper[4970]: I1209 12:10:33.916350 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"b7516b4803e5e96c013d6828bab3a22c4805d6e1bc692eb5325faae8c2574e4c"} Dec 09 12:10:33 crc kubenswrapper[4970]: I1209 12:10:33.916592 4970 kubelet.go:1909] 
"Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="860af16f-74fe-4d2e-95fd-dc63a1975528" Dec 09 12:10:33 crc kubenswrapper[4970]: I1209 12:10:33.917725 4970 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="860af16f-74fe-4d2e-95fd-dc63a1975528" Dec 09 12:10:33 crc kubenswrapper[4970]: I1209 12:10:33.916988 4970 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Dec 09 12:10:33 crc kubenswrapper[4970]: I1209 12:10:33.918023 4970 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Dec 09 12:10:33 crc kubenswrapper[4970]: E1209 12:10:33.918125 4970 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:10:33 crc kubenswrapper[4970]: I1209 12:10:33.918266 4970 status_manager.go:851] "Failed to get status for pod" podUID="f94c63f8-ba2b-46a1-b6c8-a018ff78e407" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Dec 09 12:10:33 crc kubenswrapper[4970]: I1209 12:10:33.920313 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 09 12:10:33 crc kubenswrapper[4970]: I1209 12:10:33.920355 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"750c0fda55774e1cde9d20289ee127b99ed657cfac333357fe74df0b5072ffb2"} Dec 09 12:10:33 crc kubenswrapper[4970]: I1209 12:10:33.920983 4970 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Dec 09 12:10:33 crc kubenswrapper[4970]: I1209 12:10:33.921784 4970 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Dec 09 12:10:33 crc kubenswrapper[4970]: I1209 12:10:33.922046 4970 status_manager.go:851] "Failed to get status for pod" podUID="f94c63f8-ba2b-46a1-b6c8-a018ff78e407" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.245:6443: connect: connection refused" Dec 09 12:10:34 crc kubenswrapper[4970]: I1209 12:10:34.941637 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"98e7810a51da56b4bc7e3c2dc388d771e433b9c9e82add78cbb896ae027aacbf"} Dec 09 12:10:34 crc kubenswrapper[4970]: I1209 12:10:34.941986 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"519d18164edb7c2e9a1b15995d0fff48ae8f897981cd2600bc793771e807ba1b"} Dec 09 12:10:34 crc kubenswrapper[4970]: I1209 12:10:34.942002 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3a2dd42c10f3c18f06b62ebf40b3780b0ad3fe90f15fab51ebce02251be29ca6"} Dec 09 12:10:34 crc kubenswrapper[4970]: I1209 12:10:34.942053 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"14f650767cfc67743b29fa61dc9d2d6ece26001e13d4ed287461538e6a450ef0"} Dec 09 12:10:35 crc kubenswrapper[4970]: I1209 12:10:35.948449 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e4a6a84f568d4fed7fab20fdcd57c620dd5b82c42826693712f118d3cf659775"} Dec 09 12:10:35 crc kubenswrapper[4970]: I1209 12:10:35.948689 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:10:35 crc kubenswrapper[4970]: I1209 12:10:35.948726 4970 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="860af16f-74fe-4d2e-95fd-dc63a1975528" Dec 09 12:10:35 crc kubenswrapper[4970]: I1209 12:10:35.948743 4970 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="860af16f-74fe-4d2e-95fd-dc63a1975528" Dec 09 12:10:37 crc kubenswrapper[4970]: I1209 12:10:37.828566 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:10:37 crc kubenswrapper[4970]: I1209 12:10:37.829338 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:10:37 crc kubenswrapper[4970]: I1209 12:10:37.834325 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:10:40 crc kubenswrapper[4970]: I1209 12:10:40.959926 4970 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:10:41 crc kubenswrapper[4970]: I1209 12:10:41.027518 4970 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="7a554374-c98f-4e8f-b281-7224303ff83d" Dec 09 12:10:41 crc kubenswrapper[4970]: I1209 12:10:41.815631 4970 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager 
namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 09 12:10:41 crc kubenswrapper[4970]: I1209 12:10:41.815977 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 09 12:10:41 crc kubenswrapper[4970]: I1209 12:10:41.820068 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 12:10:41 crc kubenswrapper[4970]: I1209 12:10:41.976934 4970 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="860af16f-74fe-4d2e-95fd-dc63a1975528" Dec 09 12:10:41 crc kubenswrapper[4970]: I1209 12:10:41.976963 4970 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="860af16f-74fe-4d2e-95fd-dc63a1975528" Dec 09 12:10:41 crc kubenswrapper[4970]: I1209 12:10:41.982609 4970 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="7a554374-c98f-4e8f-b281-7224303ff83d" Dec 09 12:10:41 crc kubenswrapper[4970]: I1209 12:10:41.983112 4970 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://14f650767cfc67743b29fa61dc9d2d6ece26001e13d4ed287461538e6a450ef0" Dec 09 12:10:41 crc kubenswrapper[4970]: I1209 12:10:41.983155 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:10:42 crc kubenswrapper[4970]: I1209 12:10:42.087725 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 12:10:42 crc kubenswrapper[4970]: I1209 12:10:42.983967 4970 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="860af16f-74fe-4d2e-95fd-dc63a1975528" Dec 09 12:10:42 crc kubenswrapper[4970]: I1209 12:10:42.984027 4970 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="860af16f-74fe-4d2e-95fd-dc63a1975528" Dec 09 12:10:42 crc kubenswrapper[4970]: I1209 12:10:42.990283 4970 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="7a554374-c98f-4e8f-b281-7224303ff83d" Dec 09 12:10:49 crc kubenswrapper[4970]: I1209 12:10:49.982369 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 09 12:10:50 crc kubenswrapper[4970]: I1209 12:10:50.103604 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 09 12:10:50 crc kubenswrapper[4970]: I1209 12:10:50.363512 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 09 12:10:50 crc kubenswrapper[4970]: I1209 
12:10:50.393447 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 09 12:10:50 crc kubenswrapper[4970]: I1209 12:10:50.898598 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 09 12:10:51 crc kubenswrapper[4970]: I1209 12:10:51.070674 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 09 12:10:51 crc kubenswrapper[4970]: I1209 12:10:51.073308 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 09 12:10:51 crc kubenswrapper[4970]: I1209 12:10:51.788106 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 09 12:10:51 crc kubenswrapper[4970]: I1209 12:10:51.799473 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 09 12:10:51 crc kubenswrapper[4970]: I1209 12:10:51.815943 4970 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 09 12:10:51 crc kubenswrapper[4970]: I1209 12:10:51.816007 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 09 12:10:52 crc kubenswrapper[4970]: I1209 12:10:52.063877 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 09 12:10:52 crc kubenswrapper[4970]: I1209 12:10:52.075824 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 09 12:10:52 crc kubenswrapper[4970]: I1209 12:10:52.098696 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 09 12:10:52 crc kubenswrapper[4970]: I1209 12:10:52.220690 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 09 12:10:52 crc kubenswrapper[4970]: I1209 12:10:52.552192 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 09 12:10:52 crc kubenswrapper[4970]: I1209 12:10:52.722982 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 09 12:10:52 crc kubenswrapper[4970]: I1209 12:10:52.754346 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 09 12:10:52 crc kubenswrapper[4970]: I1209 12:10:52.943109 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 09 12:10:53 crc kubenswrapper[4970]: I1209 12:10:53.192459 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 09 12:10:53 crc kubenswrapper[4970]: I1209 12:10:53.254926 4970 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 09 12:10:53 crc kubenswrapper[4970]: I1209 12:10:53.275801 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 09 12:10:53 crc kubenswrapper[4970]: I1209 12:10:53.318313 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 09 12:10:53 crc kubenswrapper[4970]: I1209 12:10:53.389294 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 09 12:10:53 crc kubenswrapper[4970]: I1209 12:10:53.400431 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 09 12:10:53 crc kubenswrapper[4970]: I1209 12:10:53.518078 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 09 12:10:53 crc kubenswrapper[4970]: I1209 12:10:53.648583 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 09 12:10:53 crc kubenswrapper[4970]: I1209 12:10:53.830921 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 09 12:10:53 crc kubenswrapper[4970]: I1209 12:10:53.929675 4970 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 09 12:10:53 crc kubenswrapper[4970]: I1209 12:10:53.932915 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 09 12:10:53 crc kubenswrapper[4970]: I1209 12:10:53.935889 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 09 12:10:53 crc kubenswrapper[4970]: I1209 12:10:53.936455 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=35.936431927 podStartE2EDuration="35.936431927s" podCreationTimestamp="2025-12-09 12:10:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:10:40.971944983 +0000 UTC m=+253.532426054" watchObservedRunningTime="2025-12-09 12:10:53.936431927 +0000 UTC m=+266.496912988" Dec 09 12:10:53 crc kubenswrapper[4970]: I1209 12:10:53.938988 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 09 12:10:53 crc kubenswrapper[4970]: I1209 12:10:53.939085 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 09 12:10:53 crc kubenswrapper[4970]: I1209 12:10:53.942789 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 12:10:53 crc kubenswrapper[4970]: I1209 12:10:53.960756 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=13.960736282 podStartE2EDuration="13.960736282s" podCreationTimestamp="2025-12-09 12:10:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-12-09 12:10:53.960377041 +0000 UTC m=+266.520858092" watchObservedRunningTime="2025-12-09 12:10:53.960736282 +0000 UTC m=+266.521217353" Dec 09 12:10:53 crc kubenswrapper[4970]: I1209 12:10:53.990964 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 09 12:10:54 crc kubenswrapper[4970]: I1209 12:10:54.210811 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 09 12:10:54 crc kubenswrapper[4970]: I1209 12:10:54.280693 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 09 12:10:54 crc kubenswrapper[4970]: I1209 12:10:54.327119 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 09 12:10:54 crc kubenswrapper[4970]: I1209 12:10:54.332351 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 09 12:10:54 crc kubenswrapper[4970]: I1209 12:10:54.343735 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 09 12:10:54 crc kubenswrapper[4970]: I1209 12:10:54.382500 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 09 12:10:54 crc kubenswrapper[4970]: I1209 12:10:54.402236 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 09 12:10:54 crc kubenswrapper[4970]: I1209 12:10:54.556764 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 09 12:10:54 crc kubenswrapper[4970]: I1209 12:10:54.578474 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 09 12:10:54 crc kubenswrapper[4970]: I1209 12:10:54.635891 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 09 12:10:54 crc kubenswrapper[4970]: I1209 12:10:54.727521 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 09 12:10:54 crc kubenswrapper[4970]: I1209 12:10:54.771637 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 09 12:10:54 crc kubenswrapper[4970]: I1209 12:10:54.771946 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 09 12:10:54 crc kubenswrapper[4970]: I1209 12:10:54.809926 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 09 12:10:54 crc kubenswrapper[4970]: I1209 12:10:54.868239 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 09 12:10:54 crc kubenswrapper[4970]: I1209 12:10:54.916335 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 09 12:10:54 crc kubenswrapper[4970]: I1209 12:10:54.995719 4970 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 09 12:10:55 crc kubenswrapper[4970]: I1209 12:10:55.017509 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 09 12:10:55 crc kubenswrapper[4970]: I1209 12:10:55.209831 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 09 12:10:55 crc kubenswrapper[4970]: I1209 12:10:55.350555 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 09 12:10:55 crc kubenswrapper[4970]: I1209 12:10:55.628415 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 09 12:10:55 crc kubenswrapper[4970]: I1209 12:10:55.675948 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 09 12:10:55 crc kubenswrapper[4970]: I1209 12:10:55.710092 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 09 12:10:55 crc kubenswrapper[4970]: I1209 12:10:55.836243 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 09 12:10:55 crc kubenswrapper[4970]: I1209 12:10:55.889221 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 09 12:10:56 crc kubenswrapper[4970]: I1209 12:10:56.247987 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 09 12:10:56 crc kubenswrapper[4970]: I1209 12:10:56.295669 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 09 12:10:56 crc kubenswrapper[4970]: I1209 12:10:56.448899 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 09 12:10:56 crc kubenswrapper[4970]: I1209 12:10:56.460879 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 09 12:10:56 crc kubenswrapper[4970]: I1209 12:10:56.463420 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 09 12:10:56 crc kubenswrapper[4970]: I1209 12:10:56.471796 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 09 12:10:56 crc kubenswrapper[4970]: I1209 12:10:56.484424 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 09 12:10:56 crc kubenswrapper[4970]: I1209 12:10:56.493918 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 09 12:10:56 crc kubenswrapper[4970]: I1209 12:10:56.588528 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 09 12:10:56 crc kubenswrapper[4970]: I1209 12:10:56.594894 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 09 12:10:56 crc kubenswrapper[4970]: I1209 12:10:56.611960 4970 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 09 12:10:56 crc kubenswrapper[4970]: I1209 12:10:56.616748 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 09 12:10:56 crc kubenswrapper[4970]: I1209 12:10:56.639387 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 09 12:10:56 crc kubenswrapper[4970]: I1209 12:10:56.687466 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 09 12:10:56 crc kubenswrapper[4970]: I1209 12:10:56.857379 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 09 12:10:56 crc kubenswrapper[4970]: I1209 12:10:56.876433 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 09 12:10:56 crc kubenswrapper[4970]: I1209 12:10:56.877175 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 09 12:10:56 crc kubenswrapper[4970]: I1209 12:10:56.936768 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 09 12:10:57 crc kubenswrapper[4970]: I1209 12:10:57.055073 4970 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 09 12:10:57 crc kubenswrapper[4970]: I1209 12:10:57.055178 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 09 12:10:57 crc kubenswrapper[4970]: I1209 12:10:57.175239 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 09 12:10:57 crc kubenswrapper[4970]: I1209 12:10:57.238940 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 09 12:10:57 crc kubenswrapper[4970]: I1209 12:10:57.272574 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 09 12:10:57 crc kubenswrapper[4970]: I1209 12:10:57.325196 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 09 12:10:57 crc kubenswrapper[4970]: I1209 12:10:57.352929 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 09 12:10:57 crc kubenswrapper[4970]: I1209 12:10:57.371299 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 09 12:10:57 crc kubenswrapper[4970]: I1209 12:10:57.452471 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 09 12:10:57 crc kubenswrapper[4970]: I1209 12:10:57.452518 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 09 12:10:57 crc kubenswrapper[4970]: I1209 12:10:57.456698 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 09 12:10:57 crc kubenswrapper[4970]: I1209 12:10:57.460890 4970 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 09 12:10:57 crc kubenswrapper[4970]: I1209 12:10:57.494322 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 09 12:10:57 crc kubenswrapper[4970]: I1209 12:10:57.498826 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 09 12:10:57 crc kubenswrapper[4970]: I1209 12:10:57.554846 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 09 12:10:57 crc kubenswrapper[4970]: I1209 12:10:57.635988 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 09 12:10:57 crc kubenswrapper[4970]: I1209 12:10:57.710390 4970 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 09 12:10:57 crc kubenswrapper[4970]: I1209 12:10:57.765949 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 09 12:10:57 crc kubenswrapper[4970]: I1209 12:10:57.776724 4970 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 09 12:10:57 crc kubenswrapper[4970]: I1209 12:10:57.843052 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 09 12:10:57 crc kubenswrapper[4970]: I1209 12:10:57.885002 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 09 12:10:57 crc kubenswrapper[4970]: I1209 12:10:57.890597 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 09 12:10:57 crc kubenswrapper[4970]: I1209 12:10:57.892631 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 09 12:10:57 crc kubenswrapper[4970]: I1209 12:10:57.904203 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 09 12:10:58 crc kubenswrapper[4970]: I1209 12:10:58.040519 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 09 12:10:58 crc kubenswrapper[4970]: I1209 12:10:58.108829 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 09 12:10:58 crc kubenswrapper[4970]: I1209 12:10:58.109393 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 09 12:10:58 crc kubenswrapper[4970]: I1209 12:10:58.198072 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 09 12:10:58 crc kubenswrapper[4970]: I1209 12:10:58.335446 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 09 12:10:58 crc kubenswrapper[4970]: I1209 12:10:58.356327 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 09 12:10:58 crc kubenswrapper[4970]: I1209 12:10:58.366976 4970 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 09 12:10:58 crc kubenswrapper[4970]: I1209 12:10:58.394386 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 09 12:10:58 crc kubenswrapper[4970]: I1209 12:10:58.437005 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 09 12:10:58 crc kubenswrapper[4970]: I1209 12:10:58.452022 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 09 12:10:58 crc kubenswrapper[4970]: I1209 12:10:58.501681 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 09 12:10:58 crc kubenswrapper[4970]: I1209 12:10:58.704827 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 09 12:10:58 crc kubenswrapper[4970]: I1209 12:10:58.760778 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 09 12:10:58 crc kubenswrapper[4970]: I1209 12:10:58.805165 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 09 12:10:58 crc kubenswrapper[4970]: I1209 12:10:58.874779 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 09 12:10:58 crc kubenswrapper[4970]: I1209 12:10:58.896425 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 09 12:10:58 crc kubenswrapper[4970]: I1209 12:10:58.898579 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 09 12:10:58 crc kubenswrapper[4970]: I1209 12:10:58.993299 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 09 12:10:59 crc kubenswrapper[4970]: I1209 12:10:59.002163 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 09 12:10:59 crc kubenswrapper[4970]: I1209 12:10:59.030292 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 09 12:10:59 crc kubenswrapper[4970]: I1209 12:10:59.127164 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 09 12:10:59 crc kubenswrapper[4970]: I1209 12:10:59.140922 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 09 12:10:59 crc kubenswrapper[4970]: I1209 12:10:59.185900 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 09 12:10:59 crc kubenswrapper[4970]: I1209 12:10:59.207058 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 09 12:10:59 crc kubenswrapper[4970]: I1209 12:10:59.241881 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 09 12:10:59 crc kubenswrapper[4970]: I1209 
12:10:59.300693 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 09 12:10:59 crc kubenswrapper[4970]: I1209 12:10:59.300904 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 09 12:10:59 crc kubenswrapper[4970]: I1209 12:10:59.323859 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 09 12:10:59 crc kubenswrapper[4970]: I1209 12:10:59.382856 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 09 12:10:59 crc kubenswrapper[4970]: I1209 12:10:59.507357 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 09 12:10:59 crc kubenswrapper[4970]: I1209 12:10:59.533279 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 09 12:10:59 crc kubenswrapper[4970]: I1209 12:10:59.712207 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 09 12:10:59 crc kubenswrapper[4970]: I1209 12:10:59.767278 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 09 12:10:59 crc kubenswrapper[4970]: I1209 12:10:59.793922 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 09 12:10:59 crc kubenswrapper[4970]: I1209 12:10:59.829446 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 09 12:10:59 crc kubenswrapper[4970]: I1209 12:10:59.992745 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 09 12:11:00 crc kubenswrapper[4970]: I1209 12:11:00.043105 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 09 12:11:00 crc kubenswrapper[4970]: I1209 12:11:00.043494 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 09 12:11:00 crc kubenswrapper[4970]: I1209 12:11:00.138276 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 09 12:11:00 crc kubenswrapper[4970]: I1209 12:11:00.172338 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 09 12:11:00 crc kubenswrapper[4970]: I1209 12:11:00.234828 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 09 12:11:00 crc kubenswrapper[4970]: I1209 12:11:00.272787 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 09 12:11:00 crc kubenswrapper[4970]: I1209 12:11:00.309033 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 09 12:11:00 crc kubenswrapper[4970]: I1209 12:11:00.377823 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 09 12:11:00 crc kubenswrapper[4970]: I1209 12:11:00.381657 4970 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 09 12:11:00 crc kubenswrapper[4970]: I1209 12:11:00.386687 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 09 12:11:00 crc kubenswrapper[4970]: I1209 12:11:00.386693 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 09 12:11:00 crc kubenswrapper[4970]: I1209 12:11:00.424280 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 09 12:11:00 crc kubenswrapper[4970]: I1209 12:11:00.437772 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 09 12:11:00 crc kubenswrapper[4970]: I1209 12:11:00.487382 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 09 12:11:00 crc kubenswrapper[4970]: I1209 12:11:00.604127 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 09 12:11:00 crc kubenswrapper[4970]: I1209 12:11:00.643565 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 09 12:11:00 crc kubenswrapper[4970]: I1209 12:11:00.785808 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 09 12:11:00 crc kubenswrapper[4970]: I1209 12:11:00.812473 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 09 12:11:00 crc kubenswrapper[4970]: I1209 12:11:00.864645 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 09 12:11:00 crc kubenswrapper[4970]: I1209 12:11:00.884623 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 09 12:11:00 crc kubenswrapper[4970]: I1209 12:11:00.920098 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 09 12:11:01 crc kubenswrapper[4970]: I1209 12:11:01.000214 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 09 12:11:01 crc kubenswrapper[4970]: I1209 12:11:01.014712 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 09 12:11:01 crc kubenswrapper[4970]: I1209 12:11:01.023903 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 09 12:11:01 crc kubenswrapper[4970]: I1209 12:11:01.172658 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 09 12:11:01 crc kubenswrapper[4970]: I1209 12:11:01.293954 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 09 12:11:01 crc kubenswrapper[4970]: I1209 12:11:01.316963 4970 reflector.go:368] Caches populated for *v1.Secret from 
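
The run of "Caches populated" lines above is client-go's reflector finishing its initial LIST+WATCH for each ConfigMap and Secret that admitted pods reference, plus the node-level informers from informers/factory.go:160. A minimal sketch of the same mechanism, assuming a generic kubeconfig path for illustration rather than the kubelet's real wiring:

    // Sketch: start informers and wait for their caches to populate,
    // the condition the "Caches populated" log lines report.
    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location; the kubelet authenticates differently.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Each informer gets a reflector that LISTs then WATCHes its type.
        factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
        cmInf := factory.Core().V1().ConfigMaps().Informer()
        secInf := factory.Core().V1().Secrets().Informer()

        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop)

        // Blocks until the initial LIST of every started informer has landed
        // in the local store -- the moment the log calls "Caches populated".
        if !cache.WaitForCacheSync(stop, cmInf.HasSynced, secInf.HasSynced) {
            panic("caches never synced")
        }
        fmt.Println("caches populated")
    }
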
object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 09 12:11:01 crc kubenswrapper[4970]: I1209 12:11:01.328276 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 09 12:11:01 crc kubenswrapper[4970]: I1209 12:11:01.434156 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 09 12:11:01 crc kubenswrapper[4970]: I1209 12:11:01.526054 4970 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 09 12:11:01 crc kubenswrapper[4970]: I1209 12:11:01.598401 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 09 12:11:01 crc kubenswrapper[4970]: I1209 12:11:01.610937 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 09 12:11:01 crc kubenswrapper[4970]: I1209 12:11:01.625956 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 09 12:11:01 crc kubenswrapper[4970]: I1209 12:11:01.636340 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 09 12:11:01 crc kubenswrapper[4970]: I1209 12:11:01.682576 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 09 12:11:01 crc kubenswrapper[4970]: I1209 12:11:01.717538 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 09 12:11:01 crc kubenswrapper[4970]: I1209 12:11:01.737934 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 09 12:11:01 crc kubenswrapper[4970]: I1209 12:11:01.756624 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 09 12:11:01 crc kubenswrapper[4970]: I1209 12:11:01.815753 4970 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 09 12:11:01 crc kubenswrapper[4970]: I1209 12:11:01.815828 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 09 12:11:01 crc kubenswrapper[4970]: I1209 12:11:01.820571 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 12:11:01 crc kubenswrapper[4970]: I1209 12:11:01.821049 4970 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"750c0fda55774e1cde9d20289ee127b99ed657cfac333357fe74df0b5072ffb2"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Dec 09 12:11:01 crc kubenswrapper[4970]: I1209 12:11:01.821172 4970 kuberuntime_container.go:808] "Killing container 
Dec 09 12:11:01 crc kubenswrapper[4970]: I1209 12:11:01.843066 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Dec 09 12:11:01 crc kubenswrapper[4970]: I1209 12:11:01.917311 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Dec 09 12:11:01 crc kubenswrapper[4970]: I1209 12:11:01.948716 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Dec 09 12:11:01 crc kubenswrapper[4970]: I1209 12:11:01.976636 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Dec 09 12:11:02 crc kubenswrapper[4970]: I1209 12:11:02.020414 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Dec 09 12:11:02 crc kubenswrapper[4970]: I1209 12:11:02.100577 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Dec 09 12:11:02 crc kubenswrapper[4970]: I1209 12:11:02.103220 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Dec 09 12:11:02 crc kubenswrapper[4970]: I1209 12:11:02.160541 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Dec 09 12:11:02 crc kubenswrapper[4970]: I1209 12:11:02.165488 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Dec 09 12:11:02 crc kubenswrapper[4970]: I1209 12:11:02.337177 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Dec 09 12:11:02 crc kubenswrapper[4970]: I1209 12:11:02.368099 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Dec 09 12:11:02 crc kubenswrapper[4970]: I1209 12:11:02.537534 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Dec 09 12:11:02 crc kubenswrapper[4970]: I1209 12:11:02.544293 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Dec 09 12:11:02 crc kubenswrapper[4970]: I1209 12:11:02.647778 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Dec 09 12:11:02 crc kubenswrapper[4970]: I1209 12:11:02.819202 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Dec 09 12:11:02 crc kubenswrapper[4970]: I1209 12:11:02.830370 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Dec 09 12:11:02 crc kubenswrapper[4970]: I1209 12:11:02.890554 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Dec 09 12:11:02 crc kubenswrapper[4970]: I1209 12:11:02.893880 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 09 12:11:03 crc kubenswrapper[4970]: I1209 12:11:03.026886 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 09 12:11:03 crc kubenswrapper[4970]: I1209 12:11:03.179668 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 09 12:11:03 crc kubenswrapper[4970]: I1209 12:11:03.202730 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 09 12:11:03 crc kubenswrapper[4970]: I1209 12:11:03.246009 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 09 12:11:03 crc kubenswrapper[4970]: I1209 12:11:03.259030 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 09 12:11:03 crc kubenswrapper[4970]: I1209 12:11:03.276531 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 09 12:11:03 crc kubenswrapper[4970]: I1209 12:11:03.348492 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 09 12:11:03 crc kubenswrapper[4970]: I1209 12:11:03.369863 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 09 12:11:03 crc kubenswrapper[4970]: I1209 12:11:03.369923 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 09 12:11:03 crc kubenswrapper[4970]: I1209 12:11:03.401340 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 09 12:11:03 crc kubenswrapper[4970]: I1209 12:11:03.421432 4970 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 09 12:11:03 crc kubenswrapper[4970]: I1209 12:11:03.421909 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://37aab8e7b5fa5fd799b2c36740359df83d8734908c329c2a3b9451241db3a098" gracePeriod=5 Dec 09 12:11:03 crc kubenswrapper[4970]: I1209 12:11:03.558712 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 09 12:11:03 crc kubenswrapper[4970]: I1209 12:11:03.565003 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 09 12:11:03 crc kubenswrapper[4970]: I1209 12:11:03.572307 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 09 12:11:03 crc kubenswrapper[4970]: I1209 12:11:03.752087 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 09 12:11:03 crc kubenswrapper[4970]: I1209 12:11:03.866981 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 09 12:11:03 crc kubenswrapper[4970]: I1209 12:11:03.877377 4970 reflector.go:368] Caches populated for 
Dec 09 12:11:04 crc kubenswrapper[4970]: I1209 12:11:04.023232 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Dec 09 12:11:04 crc kubenswrapper[4970]: I1209 12:11:04.047051 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Dec 09 12:11:04 crc kubenswrapper[4970]: I1209 12:11:04.257421 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Dec 09 12:11:04 crc kubenswrapper[4970]: I1209 12:11:04.281443 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Dec 09 12:11:04 crc kubenswrapper[4970]: I1209 12:11:04.312902 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Dec 09 12:11:04 crc kubenswrapper[4970]: I1209 12:11:04.345097 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Dec 09 12:11:04 crc kubenswrapper[4970]: I1209 12:11:04.400193 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Dec 09 12:11:04 crc kubenswrapper[4970]: I1209 12:11:04.422665 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Dec 09 12:11:04 crc kubenswrapper[4970]: I1209 12:11:04.457365 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Dec 09 12:11:04 crc kubenswrapper[4970]: I1209 12:11:04.561210 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Dec 09 12:11:04 crc kubenswrapper[4970]: I1209 12:11:04.572385 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Dec 09 12:11:04 crc kubenswrapper[4970]: I1209 12:11:04.658442 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Dec 09 12:11:04 crc kubenswrapper[4970]: I1209 12:11:04.678948 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Dec 09 12:11:04 crc kubenswrapper[4970]: I1209 12:11:04.708237 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Dec 09 12:11:04 crc kubenswrapper[4970]: I1209 12:11:04.757043 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Dec 09 12:11:04 crc kubenswrapper[4970]: I1209 12:11:04.758063 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Dec 09 12:11:04 crc kubenswrapper[4970]: I1209 12:11:04.785882 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Dec 09 12:11:04 crc kubenswrapper[4970]: I1209 12:11:04.800989 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Dec 09 12:11:04 crc kubenswrapper[4970]: I1209 12:11:04.860482 4970 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Dec 09 12:11:04 crc kubenswrapper[4970]: I1209 12:11:04.974524 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Dec 09 12:11:05 crc kubenswrapper[4970]: I1209 12:11:05.086739 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Dec 09 12:11:05 crc kubenswrapper[4970]: I1209 12:11:05.221726 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Dec 09 12:11:05 crc kubenswrapper[4970]: I1209 12:11:05.237364 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Dec 09 12:11:05 crc kubenswrapper[4970]: I1209 12:11:05.256324 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Dec 09 12:11:05 crc kubenswrapper[4970]: I1209 12:11:05.341230 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Dec 09 12:11:05 crc kubenswrapper[4970]: I1209 12:11:05.365647 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Dec 09 12:11:05 crc kubenswrapper[4970]: I1209 12:11:05.420435 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Dec 09 12:11:05 crc kubenswrapper[4970]: I1209 12:11:05.450940 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Dec 09 12:11:05 crc kubenswrapper[4970]: I1209 12:11:05.459221 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Dec 09 12:11:05 crc kubenswrapper[4970]: I1209 12:11:05.581731 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Dec 09 12:11:05 crc kubenswrapper[4970]: I1209 12:11:05.642953 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Dec 09 12:11:05 crc kubenswrapper[4970]: I1209 12:11:05.741463 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Dec 09 12:11:05 crc kubenswrapper[4970]: I1209 12:11:05.774575 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Dec 09 12:11:05 crc kubenswrapper[4970]: I1209 12:11:05.927628 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Dec 09 12:11:05 crc kubenswrapper[4970]: I1209 12:11:05.954602 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Dec 09 12:11:06 crc kubenswrapper[4970]: I1209 12:11:06.195206 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Dec 09 12:11:06 crc kubenswrapper[4970]: I1209 12:11:06.210665 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Dec 09 12:11:06 crc kubenswrapper[4970]: I1209 12:11:06.249118 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Dec 09 12:11:06 crc kubenswrapper[4970]: I1209 12:11:06.272074 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Dec 09 12:11:06 crc kubenswrapper[4970]: I1209 12:11:06.595457 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Dec 09 12:11:07 crc kubenswrapper[4970]: I1209 12:11:07.082788 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Dec 09 12:11:07 crc kubenswrapper[4970]: I1209 12:11:07.275262 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Dec 09 12:11:07 crc kubenswrapper[4970]: I1209 12:11:07.279155 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Dec 09 12:11:07 crc kubenswrapper[4970]: I1209 12:11:07.437243 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Dec 09 12:11:08 crc kubenswrapper[4970]: I1209 12:11:08.383354 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qcxnq"]
Dec 09 12:11:08 crc kubenswrapper[4970]: I1209 12:11:08.385793 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qcxnq" podUID="343921a4-b33b-485d-b5ac-84d3d87b6ed7" containerName="registry-server" containerID="cri-o://5535af8ee5c11a1ead0d5d023ad54beb7588abe043aaf70f09e024fe0bfc2cb7" gracePeriod=30
Dec 09 12:11:08 crc kubenswrapper[4970]: I1209 12:11:08.397662 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vwchd"]
Dec 09 12:11:08 crc kubenswrapper[4970]: I1209 12:11:08.398066 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vwchd" podUID="0119c7a7-a4a5-4364-9682-6de2fcd6f02f" containerName="registry-server" containerID="cri-o://1ba5d8c6636492def5b64185adb47c74e659abb4604f5eca81abb57e636b8dc5" gracePeriod=30
Dec 09 12:11:08 crc kubenswrapper[4970]: I1209 12:11:08.411416 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xgfp5"]
Dec 09 12:11:08 crc kubenswrapper[4970]: I1209 12:11:08.411668 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-xgfp5" podUID="2becf945-4a01-49fd-a2c5-788632898a32" containerName="marketplace-operator" containerID="cri-o://bddc03024091d02cf2042ffd6e310e23c74a2a60246007bdb7e577fbd456bfd8" gracePeriod=30
Dec 09 12:11:08 crc kubenswrapper[4970]: I1209 12:11:08.418462 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qz6sg"]
Dec 09 12:11:08 crc kubenswrapper[4970]: I1209 12:11:08.418712 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qz6sg" podUID="75d4b427-7364-457e-b599-b36ff9459935" containerName="registry-server" containerID="cri-o://f4c8c7a131779e85d559b6e995d98ae3bebfe699e5515acf305cb048433396ee" gracePeriod=30
Dec 09 12:11:08 crc kubenswrapper[4970]: I1209 12:11:08.423106 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dlxn4"]
Dec 09 12:11:08 crc kubenswrapper[4970]: I1209 12:11:08.423322 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dlxn4" podUID="9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16" containerName="registry-server" containerID="cri-o://664beba183b13534c38baabc0df848fd584ed5bdaea25576a93a4053af745f8e" gracePeriod=30
Dec 09 12:11:08 crc kubenswrapper[4970]: I1209 12:11:08.966625 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Dec 09 12:11:08 crc kubenswrapper[4970]: I1209 12:11:08.966941 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 09 12:11:08 crc kubenswrapper[4970]: I1209 12:11:08.972821 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qcxnq"
Dec 09 12:11:08 crc kubenswrapper[4970]: I1209 12:11:08.978599 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vwchd"
Dec 09 12:11:08 crc kubenswrapper[4970]: I1209 12:11:08.986371 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qz6sg"
Dec 09 12:11:08 crc kubenswrapper[4970]: I1209 12:11:08.995807 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dlxn4"
Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.067612 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xgfp5"
Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.144952 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.144983 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.145017 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16-utilities\") pod \"9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16\" (UID: \"9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16\") "
Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.145037 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0119c7a7-a4a5-4364-9682-6de2fcd6f02f-catalog-content\") pod \"0119c7a7-a4a5-4364-9682-6de2fcd6f02f\" (UID: \"0119c7a7-a4a5-4364-9682-6de2fcd6f02f\") "
Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.145052 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
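
The volume names in these UnmountVolume entries map onto a fixed on-disk layout under /var/lib/kubelet/pods, the same tree visible in the restorecon output at the top of this log. A tiny illustrative helper (podVolumeDir is an assumption for the example, not a kubelet API) that reproduces the convention:

    // Sketch: where the reconciler's volumes live on disk. Plugin names have
    // their "/" escaped to "~" in the directory name, e.g.
    // kubernetes.io/empty-dir -> kubernetes.io~empty-dir.
    package main

    import (
        "fmt"
        "path/filepath"
        "strings"
    )

    func podVolumeDir(podUID, pluginName, volumeName string) string {
        escaped := strings.ReplaceAll(pluginName, "/", "~")
        return filepath.Join("/var/lib/kubelet/pods", podUID, "volumes", escaped, volumeName)
    }

    func main() {
        // The "utilities" emptyDir of redhat-operators-dlxn4 from the log:
        fmt.Println(podVolumeDir("9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16",
            "kubernetes.io/empty-dir", "utilities"))
        // /var/lib/kubelet/pods/9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16/volumes/kubernetes.io~empty-dir/utilities
    }
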
Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.145091 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16-catalog-content\") pod \"9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16\" (UID: \"9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16\") "
Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.145111 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vfdh\" (UniqueName: \"kubernetes.io/projected/343921a4-b33b-485d-b5ac-84d3d87b6ed7-kube-api-access-5vfdh\") pod \"343921a4-b33b-485d-b5ac-84d3d87b6ed7\" (UID: \"343921a4-b33b-485d-b5ac-84d3d87b6ed7\") "
Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.145113 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.145142 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/343921a4-b33b-485d-b5ac-84d3d87b6ed7-utilities\") pod \"343921a4-b33b-485d-b5ac-84d3d87b6ed7\" (UID: \"343921a4-b33b-485d-b5ac-84d3d87b6ed7\") "
Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.145184 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8cnm\" (UniqueName: \"kubernetes.io/projected/0119c7a7-a4a5-4364-9682-6de2fcd6f02f-kube-api-access-m8cnm\") pod \"0119c7a7-a4a5-4364-9682-6de2fcd6f02f\" (UID: \"0119c7a7-a4a5-4364-9682-6de2fcd6f02f\") "
Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.145210 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0119c7a7-a4a5-4364-9682-6de2fcd6f02f-utilities\") pod \"0119c7a7-a4a5-4364-9682-6de2fcd6f02f\" (UID: \"0119c7a7-a4a5-4364-9682-6de2fcd6f02f\") "
Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.145225 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/343921a4-b33b-485d-b5ac-84d3d87b6ed7-catalog-content\") pod \"343921a4-b33b-485d-b5ac-84d3d87b6ed7\" (UID: \"343921a4-b33b-485d-b5ac-84d3d87b6ed7\") "
Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.145257 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75d4b427-7364-457e-b599-b36ff9459935-utilities\") pod \"75d4b427-7364-457e-b599-b36ff9459935\" (UID: \"75d4b427-7364-457e-b599-b36ff9459935\") "
Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.145274 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75d4b427-7364-457e-b599-b36ff9459935-catalog-content\") pod \"75d4b427-7364-457e-b599-b36ff9459935\" (UID: \"75d4b427-7364-457e-b599-b36ff9459935\") "
Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.145292 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjztv\" (UniqueName: \"kubernetes.io/projected/9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16-kube-api-access-kjztv\") pod \"9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16\" (UID: \"9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16\") "
Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.145309 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.145326 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9p47k\" (UniqueName: \"kubernetes.io/projected/75d4b427-7364-457e-b599-b36ff9459935-kube-api-access-9p47k\") pod \"75d4b427-7364-457e-b599-b36ff9459935\" (UID: \"75d4b427-7364-457e-b599-b36ff9459935\") "
Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.145345 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.145405 4970 generic.go:334] "Generic (PLEG): container finished" podID="2becf945-4a01-49fd-a2c5-788632898a32" containerID="bddc03024091d02cf2042ffd6e310e23c74a2a60246007bdb7e577fbd456bfd8" exitCode=0
Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.145529 4970 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.145567 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.145595 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.145735 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xgfp5" event={"ID":"2becf945-4a01-49fd-a2c5-788632898a32","Type":"ContainerDied","Data":"bddc03024091d02cf2042ffd6e310e23c74a2a60246007bdb7e577fbd456bfd8"}
Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.145802 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-xgfp5" event={"ID":"2becf945-4a01-49fd-a2c5-788632898a32","Type":"ContainerDied","Data":"9650a394d3a425e2011c9b90eb40b29ee4d33f59ffc16b2d861b629343f43b7a"}
Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.145830 4970 scope.go:117] "RemoveContainer" containerID="bddc03024091d02cf2042ffd6e310e23c74a2a60246007bdb7e577fbd456bfd8"
Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.146235 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0119c7a7-a4a5-4364-9682-6de2fcd6f02f-utilities" (OuterVolumeSpecName: "utilities") pod "0119c7a7-a4a5-4364-9682-6de2fcd6f02f" (UID: "0119c7a7-a4a5-4364-9682-6de2fcd6f02f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.146293 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/343921a4-b33b-485d-b5ac-84d3d87b6ed7-utilities" (OuterVolumeSpecName: "utilities") pod "343921a4-b33b-485d-b5ac-84d3d87b6ed7" (UID: "343921a4-b33b-485d-b5ac-84d3d87b6ed7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.146680 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75d4b427-7364-457e-b599-b36ff9459935-utilities" (OuterVolumeSpecName: "utilities") pod "75d4b427-7364-457e-b599-b36ff9459935" (UID: "75d4b427-7364-457e-b599-b36ff9459935"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.146956 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16-utilities" (OuterVolumeSpecName: "utilities") pod "9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16" (UID: "9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.147134 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xgfp5"
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-xgfp5" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.150493 4970 generic.go:334] "Generic (PLEG): container finished" podID="0119c7a7-a4a5-4364-9682-6de2fcd6f02f" containerID="1ba5d8c6636492def5b64185adb47c74e659abb4604f5eca81abb57e636b8dc5" exitCode=0 Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.150575 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vwchd" event={"ID":"0119c7a7-a4a5-4364-9682-6de2fcd6f02f","Type":"ContainerDied","Data":"1ba5d8c6636492def5b64185adb47c74e659abb4604f5eca81abb57e636b8dc5"} Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.150609 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vwchd" event={"ID":"0119c7a7-a4a5-4364-9682-6de2fcd6f02f","Type":"ContainerDied","Data":"484e925ef2f22bb1f1f6617924dcc24d3294cb465c285f96a97915213b959392"} Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.150655 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vwchd" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.151366 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16-kube-api-access-kjztv" (OuterVolumeSpecName: "kube-api-access-kjztv") pod "9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16" (UID: "9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16"). InnerVolumeSpecName "kube-api-access-kjztv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.152048 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/343921a4-b33b-485d-b5ac-84d3d87b6ed7-kube-api-access-5vfdh" (OuterVolumeSpecName: "kube-api-access-5vfdh") pod "343921a4-b33b-485d-b5ac-84d3d87b6ed7" (UID: "343921a4-b33b-485d-b5ac-84d3d87b6ed7"). InnerVolumeSpecName "kube-api-access-5vfdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.153050 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0119c7a7-a4a5-4364-9682-6de2fcd6f02f-kube-api-access-m8cnm" (OuterVolumeSpecName: "kube-api-access-m8cnm") pod "0119c7a7-a4a5-4364-9682-6de2fcd6f02f" (UID: "0119c7a7-a4a5-4364-9682-6de2fcd6f02f"). InnerVolumeSpecName "kube-api-access-m8cnm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.153135 4970 generic.go:334] "Generic (PLEG): container finished" podID="75d4b427-7364-457e-b599-b36ff9459935" containerID="f4c8c7a131779e85d559b6e995d98ae3bebfe699e5515acf305cb048433396ee" exitCode=0 Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.153199 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qz6sg" event={"ID":"75d4b427-7364-457e-b599-b36ff9459935","Type":"ContainerDied","Data":"f4c8c7a131779e85d559b6e995d98ae3bebfe699e5515acf305cb048433396ee"} Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.153226 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qz6sg" event={"ID":"75d4b427-7364-457e-b599-b36ff9459935","Type":"ContainerDied","Data":"c2f165c449c0bfaa77cd754d9d8693d837962e98f21ffb05f64b79f596ef67ed"} Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.153235 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qz6sg" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.154802 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.155525 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.156428 4970 generic.go:334] "Generic (PLEG): container finished" podID="9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16" containerID="664beba183b13534c38baabc0df848fd584ed5bdaea25576a93a4053af745f8e" exitCode=0 Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.156549 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dlxn4" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.156771 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dlxn4" event={"ID":"9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16","Type":"ContainerDied","Data":"664beba183b13534c38baabc0df848fd584ed5bdaea25576a93a4053af745f8e"} Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.156800 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dlxn4" event={"ID":"9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16","Type":"ContainerDied","Data":"990167994ddaf4d99ff0c38796c7bc2d3649b341f928ba16d90a66b794069499"} Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.168682 4970 scope.go:117] "RemoveContainer" containerID="bddc03024091d02cf2042ffd6e310e23c74a2a60246007bdb7e577fbd456bfd8" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.169508 4970 generic.go:334] "Generic (PLEG): container finished" podID="343921a4-b33b-485d-b5ac-84d3d87b6ed7" containerID="5535af8ee5c11a1ead0d5d023ad54beb7588abe043aaf70f09e024fe0bfc2cb7" exitCode=0 Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.169607 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qcxnq" event={"ID":"343921a4-b33b-485d-b5ac-84d3d87b6ed7","Type":"ContainerDied","Data":"5535af8ee5c11a1ead0d5d023ad54beb7588abe043aaf70f09e024fe0bfc2cb7"} Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.169635 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qcxnq" event={"ID":"343921a4-b33b-485d-b5ac-84d3d87b6ed7","Type":"ContainerDied","Data":"154491ea8e785a8587354fc33cb1ae78c8b78c9cd15746a56bf8ecb7d60228d8"} Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.169735 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qcxnq" Dec 09 12:11:09 crc kubenswrapper[4970]: E1209 12:11:09.170341 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bddc03024091d02cf2042ffd6e310e23c74a2a60246007bdb7e577fbd456bfd8\": container with ID starting with bddc03024091d02cf2042ffd6e310e23c74a2a60246007bdb7e577fbd456bfd8 not found: ID does not exist" containerID="bddc03024091d02cf2042ffd6e310e23c74a2a60246007bdb7e577fbd456bfd8" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.170399 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bddc03024091d02cf2042ffd6e310e23c74a2a60246007bdb7e577fbd456bfd8"} err="failed to get container status \"bddc03024091d02cf2042ffd6e310e23c74a2a60246007bdb7e577fbd456bfd8\": rpc error: code = NotFound desc = could not find container \"bddc03024091d02cf2042ffd6e310e23c74a2a60246007bdb7e577fbd456bfd8\": container with ID starting with bddc03024091d02cf2042ffd6e310e23c74a2a60246007bdb7e577fbd456bfd8 not found: ID does not exist" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.170447 4970 scope.go:117] "RemoveContainer" containerID="1ba5d8c6636492def5b64185adb47c74e659abb4604f5eca81abb57e636b8dc5" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.173149 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.173190 4970 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="37aab8e7b5fa5fd799b2c36740359df83d8734908c329c2a3b9451241db3a098" exitCode=137 Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.173293 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.177319 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75d4b427-7364-457e-b599-b36ff9459935-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "75d4b427-7364-457e-b599-b36ff9459935" (UID: "75d4b427-7364-457e-b599-b36ff9459935"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.181602 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75d4b427-7364-457e-b599-b36ff9459935-kube-api-access-9p47k" (OuterVolumeSpecName: "kube-api-access-9p47k") pod "75d4b427-7364-457e-b599-b36ff9459935" (UID: "75d4b427-7364-457e-b599-b36ff9459935"). InnerVolumeSpecName "kube-api-access-9p47k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.186400 4970 scope.go:117] "RemoveContainer" containerID="b5ef0d455b9896b3e3b8d626fdc36b9d08de85c708c5ad9d41d79825ea834fef" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.202777 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/343921a4-b33b-485d-b5ac-84d3d87b6ed7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "343921a4-b33b-485d-b5ac-84d3d87b6ed7" (UID: "343921a4-b33b-485d-b5ac-84d3d87b6ed7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.206443 4970 scope.go:117] "RemoveContainer" containerID="0682b6a20f75fa295e780a825768342d96c8e162de745c4c0212625a7b72e78f" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.213454 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0119c7a7-a4a5-4364-9682-6de2fcd6f02f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0119c7a7-a4a5-4364-9682-6de2fcd6f02f" (UID: "0119c7a7-a4a5-4364-9682-6de2fcd6f02f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.218228 4970 scope.go:117] "RemoveContainer" containerID="1ba5d8c6636492def5b64185adb47c74e659abb4604f5eca81abb57e636b8dc5" Dec 09 12:11:09 crc kubenswrapper[4970]: E1209 12:11:09.218708 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ba5d8c6636492def5b64185adb47c74e659abb4604f5eca81abb57e636b8dc5\": container with ID starting with 1ba5d8c6636492def5b64185adb47c74e659abb4604f5eca81abb57e636b8dc5 not found: ID does not exist" containerID="1ba5d8c6636492def5b64185adb47c74e659abb4604f5eca81abb57e636b8dc5" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.218758 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ba5d8c6636492def5b64185adb47c74e659abb4604f5eca81abb57e636b8dc5"} err="failed to get container status \"1ba5d8c6636492def5b64185adb47c74e659abb4604f5eca81abb57e636b8dc5\": rpc error: code = NotFound desc = could not find container \"1ba5d8c6636492def5b64185adb47c74e659abb4604f5eca81abb57e636b8dc5\": container with ID starting with 1ba5d8c6636492def5b64185adb47c74e659abb4604f5eca81abb57e636b8dc5 not found: ID does not exist" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.218787 4970 scope.go:117] "RemoveContainer" containerID="b5ef0d455b9896b3e3b8d626fdc36b9d08de85c708c5ad9d41d79825ea834fef" Dec 09 12:11:09 crc kubenswrapper[4970]: E1209 12:11:09.219124 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5ef0d455b9896b3e3b8d626fdc36b9d08de85c708c5ad9d41d79825ea834fef\": container with ID starting with b5ef0d455b9896b3e3b8d626fdc36b9d08de85c708c5ad9d41d79825ea834fef not found: ID does not exist" containerID="b5ef0d455b9896b3e3b8d626fdc36b9d08de85c708c5ad9d41d79825ea834fef" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.219141 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5ef0d455b9896b3e3b8d626fdc36b9d08de85c708c5ad9d41d79825ea834fef"} err="failed to get container status \"b5ef0d455b9896b3e3b8d626fdc36b9d08de85c708c5ad9d41d79825ea834fef\": rpc error: code = NotFound desc = could not find container \"b5ef0d455b9896b3e3b8d626fdc36b9d08de85c708c5ad9d41d79825ea834fef\": container with ID starting with b5ef0d455b9896b3e3b8d626fdc36b9d08de85c708c5ad9d41d79825ea834fef not found: ID does not exist" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.219154 4970 scope.go:117] "RemoveContainer" containerID="0682b6a20f75fa295e780a825768342d96c8e162de745c4c0212625a7b72e78f" Dec 09 12:11:09 crc kubenswrapper[4970]: E1209 12:11:09.219404 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"0682b6a20f75fa295e780a825768342d96c8e162de745c4c0212625a7b72e78f\": container with ID starting with 0682b6a20f75fa295e780a825768342d96c8e162de745c4c0212625a7b72e78f not found: ID does not exist" containerID="0682b6a20f75fa295e780a825768342d96c8e162de745c4c0212625a7b72e78f" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.219432 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0682b6a20f75fa295e780a825768342d96c8e162de745c4c0212625a7b72e78f"} err="failed to get container status \"0682b6a20f75fa295e780a825768342d96c8e162de745c4c0212625a7b72e78f\": rpc error: code = NotFound desc = could not find container \"0682b6a20f75fa295e780a825768342d96c8e162de745c4c0212625a7b72e78f\": container with ID starting with 0682b6a20f75fa295e780a825768342d96c8e162de745c4c0212625a7b72e78f not found: ID does not exist" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.219454 4970 scope.go:117] "RemoveContainer" containerID="f4c8c7a131779e85d559b6e995d98ae3bebfe699e5515acf305cb048433396ee" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.231970 4970 scope.go:117] "RemoveContainer" containerID="6e382012c33729ab58b97e9fbfe2758a8e511713657c1c37f0eb6946198cfc87" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.243423 4970 scope.go:117] "RemoveContainer" containerID="03f2c648fc44069af27a8ca2317a2460b6a335b5c0b681828a5b838b9eaaeb59" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.245992 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csc7n\" (UniqueName: \"kubernetes.io/projected/2becf945-4a01-49fd-a2c5-788632898a32-kube-api-access-csc7n\") pod \"2becf945-4a01-49fd-a2c5-788632898a32\" (UID: \"2becf945-4a01-49fd-a2c5-788632898a32\") " Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.246067 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2becf945-4a01-49fd-a2c5-788632898a32-marketplace-operator-metrics\") pod \"2becf945-4a01-49fd-a2c5-788632898a32\" (UID: \"2becf945-4a01-49fd-a2c5-788632898a32\") " Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.246143 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2becf945-4a01-49fd-a2c5-788632898a32-marketplace-trusted-ca\") pod \"2becf945-4a01-49fd-a2c5-788632898a32\" (UID: \"2becf945-4a01-49fd-a2c5-788632898a32\") " Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.246547 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8cnm\" (UniqueName: \"kubernetes.io/projected/0119c7a7-a4a5-4364-9682-6de2fcd6f02f-kube-api-access-m8cnm\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.246569 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0119c7a7-a4a5-4364-9682-6de2fcd6f02f-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.246584 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/343921a4-b33b-485d-b5ac-84d3d87b6ed7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.246639 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/75d4b427-7364-457e-b599-b36ff9459935-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.246654 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75d4b427-7364-457e-b599-b36ff9459935-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.246666 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjztv\" (UniqueName: \"kubernetes.io/projected/9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16-kube-api-access-kjztv\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.246705 4970 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.246719 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9p47k\" (UniqueName: \"kubernetes.io/projected/75d4b427-7364-457e-b599-b36ff9459935-kube-api-access-9p47k\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.246731 4970 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.246744 4970 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.246754 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.246793 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0119c7a7-a4a5-4364-9682-6de2fcd6f02f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.246804 4970 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.246816 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vfdh\" (UniqueName: \"kubernetes.io/projected/343921a4-b33b-485d-b5ac-84d3d87b6ed7-kube-api-access-5vfdh\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.246828 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/343921a4-b33b-485d-b5ac-84d3d87b6ed7-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.246956 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2becf945-4a01-49fd-a2c5-788632898a32-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "2becf945-4a01-49fd-a2c5-788632898a32" (UID: "2becf945-4a01-49fd-a2c5-788632898a32"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.249863 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2becf945-4a01-49fd-a2c5-788632898a32-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "2becf945-4a01-49fd-a2c5-788632898a32" (UID: "2becf945-4a01-49fd-a2c5-788632898a32"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.250162 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2becf945-4a01-49fd-a2c5-788632898a32-kube-api-access-csc7n" (OuterVolumeSpecName: "kube-api-access-csc7n") pod "2becf945-4a01-49fd-a2c5-788632898a32" (UID: "2becf945-4a01-49fd-a2c5-788632898a32"). InnerVolumeSpecName "kube-api-access-csc7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.256521 4970 scope.go:117] "RemoveContainer" containerID="f4c8c7a131779e85d559b6e995d98ae3bebfe699e5515acf305cb048433396ee" Dec 09 12:11:09 crc kubenswrapper[4970]: E1209 12:11:09.256935 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4c8c7a131779e85d559b6e995d98ae3bebfe699e5515acf305cb048433396ee\": container with ID starting with f4c8c7a131779e85d559b6e995d98ae3bebfe699e5515acf305cb048433396ee not found: ID does not exist" containerID="f4c8c7a131779e85d559b6e995d98ae3bebfe699e5515acf305cb048433396ee" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.256964 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4c8c7a131779e85d559b6e995d98ae3bebfe699e5515acf305cb048433396ee"} err="failed to get container status \"f4c8c7a131779e85d559b6e995d98ae3bebfe699e5515acf305cb048433396ee\": rpc error: code = NotFound desc = could not find container \"f4c8c7a131779e85d559b6e995d98ae3bebfe699e5515acf305cb048433396ee\": container with ID starting with f4c8c7a131779e85d559b6e995d98ae3bebfe699e5515acf305cb048433396ee not found: ID does not exist" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.256987 4970 scope.go:117] "RemoveContainer" containerID="6e382012c33729ab58b97e9fbfe2758a8e511713657c1c37f0eb6946198cfc87" Dec 09 12:11:09 crc kubenswrapper[4970]: E1209 12:11:09.257305 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e382012c33729ab58b97e9fbfe2758a8e511713657c1c37f0eb6946198cfc87\": container with ID starting with 6e382012c33729ab58b97e9fbfe2758a8e511713657c1c37f0eb6946198cfc87 not found: ID does not exist" containerID="6e382012c33729ab58b97e9fbfe2758a8e511713657c1c37f0eb6946198cfc87" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.257327 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e382012c33729ab58b97e9fbfe2758a8e511713657c1c37f0eb6946198cfc87"} err="failed to get container status \"6e382012c33729ab58b97e9fbfe2758a8e511713657c1c37f0eb6946198cfc87\": rpc error: code = NotFound desc = could not find container \"6e382012c33729ab58b97e9fbfe2758a8e511713657c1c37f0eb6946198cfc87\": container with ID starting with 6e382012c33729ab58b97e9fbfe2758a8e511713657c1c37f0eb6946198cfc87 not found: ID does not exist" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.257343 4970 scope.go:117] 
"RemoveContainer" containerID="03f2c648fc44069af27a8ca2317a2460b6a335b5c0b681828a5b838b9eaaeb59" Dec 09 12:11:09 crc kubenswrapper[4970]: E1209 12:11:09.257542 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03f2c648fc44069af27a8ca2317a2460b6a335b5c0b681828a5b838b9eaaeb59\": container with ID starting with 03f2c648fc44069af27a8ca2317a2460b6a335b5c0b681828a5b838b9eaaeb59 not found: ID does not exist" containerID="03f2c648fc44069af27a8ca2317a2460b6a335b5c0b681828a5b838b9eaaeb59" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.257581 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03f2c648fc44069af27a8ca2317a2460b6a335b5c0b681828a5b838b9eaaeb59"} err="failed to get container status \"03f2c648fc44069af27a8ca2317a2460b6a335b5c0b681828a5b838b9eaaeb59\": rpc error: code = NotFound desc = could not find container \"03f2c648fc44069af27a8ca2317a2460b6a335b5c0b681828a5b838b9eaaeb59\": container with ID starting with 03f2c648fc44069af27a8ca2317a2460b6a335b5c0b681828a5b838b9eaaeb59 not found: ID does not exist" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.257594 4970 scope.go:117] "RemoveContainer" containerID="664beba183b13534c38baabc0df848fd584ed5bdaea25576a93a4053af745f8e" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.268797 4970 scope.go:117] "RemoveContainer" containerID="6dc4442819e491e911ad7f0aa773d3136351b5562b660cc0296730d218f27ac0" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.272130 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16" (UID: "9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.281853 4970 scope.go:117] "RemoveContainer" containerID="e444cd8ab79c9d4995fa76bf468e7ea476d054d19d632eb825430063d9b7bac5" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.297309 4970 scope.go:117] "RemoveContainer" containerID="664beba183b13534c38baabc0df848fd584ed5bdaea25576a93a4053af745f8e" Dec 09 12:11:09 crc kubenswrapper[4970]: E1209 12:11:09.297881 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"664beba183b13534c38baabc0df848fd584ed5bdaea25576a93a4053af745f8e\": container with ID starting with 664beba183b13534c38baabc0df848fd584ed5bdaea25576a93a4053af745f8e not found: ID does not exist" containerID="664beba183b13534c38baabc0df848fd584ed5bdaea25576a93a4053af745f8e" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.297915 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"664beba183b13534c38baabc0df848fd584ed5bdaea25576a93a4053af745f8e"} err="failed to get container status \"664beba183b13534c38baabc0df848fd584ed5bdaea25576a93a4053af745f8e\": rpc error: code = NotFound desc = could not find container \"664beba183b13534c38baabc0df848fd584ed5bdaea25576a93a4053af745f8e\": container with ID starting with 664beba183b13534c38baabc0df848fd584ed5bdaea25576a93a4053af745f8e not found: ID does not exist" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.297940 4970 scope.go:117] "RemoveContainer" containerID="6dc4442819e491e911ad7f0aa773d3136351b5562b660cc0296730d218f27ac0" Dec 09 12:11:09 crc kubenswrapper[4970]: E1209 12:11:09.298441 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6dc4442819e491e911ad7f0aa773d3136351b5562b660cc0296730d218f27ac0\": container with ID starting with 6dc4442819e491e911ad7f0aa773d3136351b5562b660cc0296730d218f27ac0 not found: ID does not exist" containerID="6dc4442819e491e911ad7f0aa773d3136351b5562b660cc0296730d218f27ac0" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.298750 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dc4442819e491e911ad7f0aa773d3136351b5562b660cc0296730d218f27ac0"} err="failed to get container status \"6dc4442819e491e911ad7f0aa773d3136351b5562b660cc0296730d218f27ac0\": rpc error: code = NotFound desc = could not find container \"6dc4442819e491e911ad7f0aa773d3136351b5562b660cc0296730d218f27ac0\": container with ID starting with 6dc4442819e491e911ad7f0aa773d3136351b5562b660cc0296730d218f27ac0 not found: ID does not exist" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.298904 4970 scope.go:117] "RemoveContainer" containerID="e444cd8ab79c9d4995fa76bf468e7ea476d054d19d632eb825430063d9b7bac5" Dec 09 12:11:09 crc kubenswrapper[4970]: E1209 12:11:09.299422 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e444cd8ab79c9d4995fa76bf468e7ea476d054d19d632eb825430063d9b7bac5\": container with ID starting with e444cd8ab79c9d4995fa76bf468e7ea476d054d19d632eb825430063d9b7bac5 not found: ID does not exist" containerID="e444cd8ab79c9d4995fa76bf468e7ea476d054d19d632eb825430063d9b7bac5" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.299515 4970 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e444cd8ab79c9d4995fa76bf468e7ea476d054d19d632eb825430063d9b7bac5"} err="failed to get container status \"e444cd8ab79c9d4995fa76bf468e7ea476d054d19d632eb825430063d9b7bac5\": rpc error: code = NotFound desc = could not find container \"e444cd8ab79c9d4995fa76bf468e7ea476d054d19d632eb825430063d9b7bac5\": container with ID starting with e444cd8ab79c9d4995fa76bf468e7ea476d054d19d632eb825430063d9b7bac5 not found: ID does not exist" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.299589 4970 scope.go:117] "RemoveContainer" containerID="5535af8ee5c11a1ead0d5d023ad54beb7588abe043aaf70f09e024fe0bfc2cb7" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.312342 4970 scope.go:117] "RemoveContainer" containerID="2c88af4619f77c8be558096f01cf0b98384bda9ba44e21615994ce890b247a71" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.326694 4970 scope.go:117] "RemoveContainer" containerID="7bcc05d1948af17c06f514216866a769889f6c65c887c632daeb8c3dc6cf2769" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.346569 4970 scope.go:117] "RemoveContainer" containerID="5535af8ee5c11a1ead0d5d023ad54beb7588abe043aaf70f09e024fe0bfc2cb7" Dec 09 12:11:09 crc kubenswrapper[4970]: E1209 12:11:09.346966 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5535af8ee5c11a1ead0d5d023ad54beb7588abe043aaf70f09e024fe0bfc2cb7\": container with ID starting with 5535af8ee5c11a1ead0d5d023ad54beb7588abe043aaf70f09e024fe0bfc2cb7 not found: ID does not exist" containerID="5535af8ee5c11a1ead0d5d023ad54beb7588abe043aaf70f09e024fe0bfc2cb7" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.346996 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5535af8ee5c11a1ead0d5d023ad54beb7588abe043aaf70f09e024fe0bfc2cb7"} err="failed to get container status \"5535af8ee5c11a1ead0d5d023ad54beb7588abe043aaf70f09e024fe0bfc2cb7\": rpc error: code = NotFound desc = could not find container \"5535af8ee5c11a1ead0d5d023ad54beb7588abe043aaf70f09e024fe0bfc2cb7\": container with ID starting with 5535af8ee5c11a1ead0d5d023ad54beb7588abe043aaf70f09e024fe0bfc2cb7 not found: ID does not exist" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.347021 4970 scope.go:117] "RemoveContainer" containerID="2c88af4619f77c8be558096f01cf0b98384bda9ba44e21615994ce890b247a71" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.347657 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csc7n\" (UniqueName: \"kubernetes.io/projected/2becf945-4a01-49fd-a2c5-788632898a32-kube-api-access-csc7n\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.347698 4970 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2becf945-4a01-49fd-a2c5-788632898a32-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.347717 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.347735 4970 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2becf945-4a01-49fd-a2c5-788632898a32-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 
12:11:09 crc kubenswrapper[4970]: E1209 12:11:09.347746 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c88af4619f77c8be558096f01cf0b98384bda9ba44e21615994ce890b247a71\": container with ID starting with 2c88af4619f77c8be558096f01cf0b98384bda9ba44e21615994ce890b247a71 not found: ID does not exist" containerID="2c88af4619f77c8be558096f01cf0b98384bda9ba44e21615994ce890b247a71" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.347858 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c88af4619f77c8be558096f01cf0b98384bda9ba44e21615994ce890b247a71"} err="failed to get container status \"2c88af4619f77c8be558096f01cf0b98384bda9ba44e21615994ce890b247a71\": rpc error: code = NotFound desc = could not find container \"2c88af4619f77c8be558096f01cf0b98384bda9ba44e21615994ce890b247a71\": container with ID starting with 2c88af4619f77c8be558096f01cf0b98384bda9ba44e21615994ce890b247a71 not found: ID does not exist" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.347943 4970 scope.go:117] "RemoveContainer" containerID="7bcc05d1948af17c06f514216866a769889f6c65c887c632daeb8c3dc6cf2769" Dec 09 12:11:09 crc kubenswrapper[4970]: E1209 12:11:09.348585 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bcc05d1948af17c06f514216866a769889f6c65c887c632daeb8c3dc6cf2769\": container with ID starting with 7bcc05d1948af17c06f514216866a769889f6c65c887c632daeb8c3dc6cf2769 not found: ID does not exist" containerID="7bcc05d1948af17c06f514216866a769889f6c65c887c632daeb8c3dc6cf2769" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.348613 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bcc05d1948af17c06f514216866a769889f6c65c887c632daeb8c3dc6cf2769"} err="failed to get container status \"7bcc05d1948af17c06f514216866a769889f6c65c887c632daeb8c3dc6cf2769\": rpc error: code = NotFound desc = could not find container \"7bcc05d1948af17c06f514216866a769889f6c65c887c632daeb8c3dc6cf2769\": container with ID starting with 7bcc05d1948af17c06f514216866a769889f6c65c887c632daeb8c3dc6cf2769 not found: ID does not exist" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.348634 4970 scope.go:117] "RemoveContainer" containerID="37aab8e7b5fa5fd799b2c36740359df83d8734908c329c2a3b9451241db3a098" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.367736 4970 scope.go:117] "RemoveContainer" containerID="37aab8e7b5fa5fd799b2c36740359df83d8734908c329c2a3b9451241db3a098" Dec 09 12:11:09 crc kubenswrapper[4970]: E1209 12:11:09.368340 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37aab8e7b5fa5fd799b2c36740359df83d8734908c329c2a3b9451241db3a098\": container with ID starting with 37aab8e7b5fa5fd799b2c36740359df83d8734908c329c2a3b9451241db3a098 not found: ID does not exist" containerID="37aab8e7b5fa5fd799b2c36740359df83d8734908c329c2a3b9451241db3a098" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.368405 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37aab8e7b5fa5fd799b2c36740359df83d8734908c329c2a3b9451241db3a098"} err="failed to get container status \"37aab8e7b5fa5fd799b2c36740359df83d8734908c329c2a3b9451241db3a098\": rpc error: code = NotFound desc = could not find container 
\"37aab8e7b5fa5fd799b2c36740359df83d8734908c329c2a3b9451241db3a098\": container with ID starting with 37aab8e7b5fa5fd799b2c36740359df83d8734908c329c2a3b9451241db3a098 not found: ID does not exist" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.477365 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xgfp5"] Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.482762 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-xgfp5"] Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.491823 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qz6sg"] Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.496216 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qz6sg"] Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.500709 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vwchd"] Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.503971 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vwchd"] Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.513357 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dlxn4"] Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.518036 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dlxn4"] Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.522798 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qcxnq"] Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.525491 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qcxnq"] Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.819692 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0119c7a7-a4a5-4364-9682-6de2fcd6f02f" path="/var/lib/kubelet/pods/0119c7a7-a4a5-4364-9682-6de2fcd6f02f/volumes" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.820397 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2becf945-4a01-49fd-a2c5-788632898a32" path="/var/lib/kubelet/pods/2becf945-4a01-49fd-a2c5-788632898a32/volumes" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.820859 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="343921a4-b33b-485d-b5ac-84d3d87b6ed7" path="/var/lib/kubelet/pods/343921a4-b33b-485d-b5ac-84d3d87b6ed7/volumes" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.821900 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75d4b427-7364-457e-b599-b36ff9459935" path="/var/lib/kubelet/pods/75d4b427-7364-457e-b599-b36ff9459935/volumes" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.822508 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16" path="/var/lib/kubelet/pods/9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16/volumes" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.823408 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.823743 4970 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.832824 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.832859 4970 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="16deb13e-210f-494b-9ec2-85949161ce86" Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.835537 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 09 12:11:09 crc kubenswrapper[4970]: I1209 12:11:09.835562 4970 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="16deb13e-210f-494b-9ec2-85949161ce86" Dec 09 12:11:16 crc kubenswrapper[4970]: I1209 12:11:16.227291 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 09 12:11:31 crc kubenswrapper[4970]: I1209 12:11:31.493740 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 09 12:11:32 crc kubenswrapper[4970]: I1209 12:11:32.297471 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Dec 09 12:11:32 crc kubenswrapper[4970]: I1209 12:11:32.299983 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 09 12:11:32 crc kubenswrapper[4970]: I1209 12:11:32.300085 4970 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="750c0fda55774e1cde9d20289ee127b99ed657cfac333357fe74df0b5072ffb2" exitCode=137 Dec 09 12:11:32 crc kubenswrapper[4970]: I1209 12:11:32.300137 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"750c0fda55774e1cde9d20289ee127b99ed657cfac333357fe74df0b5072ffb2"} Dec 09 12:11:32 crc kubenswrapper[4970]: I1209 12:11:32.300304 4970 scope.go:117] "RemoveContainer" containerID="3c49e3e927029a065c73639bc97b03eded3857a4cf4a97a17b0a383825b80cc7" Dec 09 12:11:33 crc kubenswrapper[4970]: I1209 12:11:33.305282 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Dec 09 12:11:33 crc kubenswrapper[4970]: I1209 12:11:33.306916 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4da83fb86ea83705b3779b35cbf18d80f13cdb98c874526b97de984ae7e6b934"} Dec 09 12:11:41 crc kubenswrapper[4970]: I1209 12:11:41.822746 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 12:11:41 crc kubenswrapper[4970]: I1209 12:11:41.823466 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 12:11:41 crc kubenswrapper[4970]: I1209 12:11:41.823896 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 12:11:42 crc kubenswrapper[4970]: I1209 12:11:42.094169 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 12:11:42 crc kubenswrapper[4970]: I1209 12:11:42.125945 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 09 12:11:48 crc kubenswrapper[4970]: I1209 12:11:48.845391 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nhq2r"] Dec 09 12:11:48 crc kubenswrapper[4970]: E1209 12:11:48.845926 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0119c7a7-a4a5-4364-9682-6de2fcd6f02f" containerName="registry-server" Dec 09 12:11:48 crc kubenswrapper[4970]: I1209 12:11:48.845941 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="0119c7a7-a4a5-4364-9682-6de2fcd6f02f" containerName="registry-server" Dec 09 12:11:48 crc kubenswrapper[4970]: E1209 12:11:48.845953 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75d4b427-7364-457e-b599-b36ff9459935" containerName="registry-server" Dec 09 12:11:48 crc kubenswrapper[4970]: I1209 12:11:48.845962 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="75d4b427-7364-457e-b599-b36ff9459935" containerName="registry-server" Dec 09 12:11:48 crc kubenswrapper[4970]: E1209 12:11:48.845971 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 09 12:11:48 crc kubenswrapper[4970]: I1209 12:11:48.845981 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 09 12:11:48 crc kubenswrapper[4970]: E1209 12:11:48.845992 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0119c7a7-a4a5-4364-9682-6de2fcd6f02f" containerName="extract-utilities" Dec 09 12:11:48 crc kubenswrapper[4970]: I1209 12:11:48.845999 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="0119c7a7-a4a5-4364-9682-6de2fcd6f02f" containerName="extract-utilities" Dec 09 12:11:48 crc kubenswrapper[4970]: E1209 12:11:48.846013 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2becf945-4a01-49fd-a2c5-788632898a32" containerName="marketplace-operator" Dec 09 12:11:48 crc kubenswrapper[4970]: I1209 12:11:48.846020 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="2becf945-4a01-49fd-a2c5-788632898a32" containerName="marketplace-operator" Dec 09 12:11:48 crc kubenswrapper[4970]: E1209 12:11:48.846033 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="343921a4-b33b-485d-b5ac-84d3d87b6ed7" containerName="registry-server" Dec 09 12:11:48 crc kubenswrapper[4970]: I1209 12:11:48.846040 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="343921a4-b33b-485d-b5ac-84d3d87b6ed7" containerName="registry-server" Dec 09 12:11:48 crc kubenswrapper[4970]: E1209 12:11:48.846050 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0119c7a7-a4a5-4364-9682-6de2fcd6f02f" containerName="extract-content" Dec 09 12:11:48 crc kubenswrapper[4970]: I1209 12:11:48.846057 4970 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0119c7a7-a4a5-4364-9682-6de2fcd6f02f" containerName="extract-content" Dec 09 12:11:48 crc kubenswrapper[4970]: E1209 12:11:48.846069 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f94c63f8-ba2b-46a1-b6c8-a018ff78e407" containerName="installer" Dec 09 12:11:48 crc kubenswrapper[4970]: I1209 12:11:48.846078 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="f94c63f8-ba2b-46a1-b6c8-a018ff78e407" containerName="installer" Dec 09 12:11:48 crc kubenswrapper[4970]: E1209 12:11:48.846087 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="343921a4-b33b-485d-b5ac-84d3d87b6ed7" containerName="extract-utilities" Dec 09 12:11:48 crc kubenswrapper[4970]: I1209 12:11:48.846096 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="343921a4-b33b-485d-b5ac-84d3d87b6ed7" containerName="extract-utilities" Dec 09 12:11:48 crc kubenswrapper[4970]: E1209 12:11:48.846108 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16" containerName="extract-content" Dec 09 12:11:48 crc kubenswrapper[4970]: I1209 12:11:48.846115 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16" containerName="extract-content" Dec 09 12:11:48 crc kubenswrapper[4970]: E1209 12:11:48.846125 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16" containerName="registry-server" Dec 09 12:11:48 crc kubenswrapper[4970]: I1209 12:11:48.846133 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16" containerName="registry-server" Dec 09 12:11:48 crc kubenswrapper[4970]: E1209 12:11:48.846145 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75d4b427-7364-457e-b599-b36ff9459935" containerName="extract-utilities" Dec 09 12:11:48 crc kubenswrapper[4970]: I1209 12:11:48.846152 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="75d4b427-7364-457e-b599-b36ff9459935" containerName="extract-utilities" Dec 09 12:11:48 crc kubenswrapper[4970]: E1209 12:11:48.846163 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75d4b427-7364-457e-b599-b36ff9459935" containerName="extract-content" Dec 09 12:11:48 crc kubenswrapper[4970]: I1209 12:11:48.846171 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="75d4b427-7364-457e-b599-b36ff9459935" containerName="extract-content" Dec 09 12:11:48 crc kubenswrapper[4970]: E1209 12:11:48.846182 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16" containerName="extract-utilities" Dec 09 12:11:48 crc kubenswrapper[4970]: I1209 12:11:48.846189 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16" containerName="extract-utilities" Dec 09 12:11:48 crc kubenswrapper[4970]: E1209 12:11:48.846202 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="343921a4-b33b-485d-b5ac-84d3d87b6ed7" containerName="extract-content" Dec 09 12:11:48 crc kubenswrapper[4970]: I1209 12:11:48.846210 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="343921a4-b33b-485d-b5ac-84d3d87b6ed7" containerName="extract-content" Dec 09 12:11:48 crc kubenswrapper[4970]: I1209 12:11:48.846329 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="f94c63f8-ba2b-46a1-b6c8-a018ff78e407" containerName="installer" Dec 09 12:11:48 crc kubenswrapper[4970]: I1209 12:11:48.846346 4970 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="2becf945-4a01-49fd-a2c5-788632898a32" containerName="marketplace-operator" Dec 09 12:11:48 crc kubenswrapper[4970]: I1209 12:11:48.846358 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="0119c7a7-a4a5-4364-9682-6de2fcd6f02f" containerName="registry-server" Dec 09 12:11:48 crc kubenswrapper[4970]: I1209 12:11:48.846368 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="75d4b427-7364-457e-b599-b36ff9459935" containerName="registry-server" Dec 09 12:11:48 crc kubenswrapper[4970]: I1209 12:11:48.846378 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 09 12:11:48 crc kubenswrapper[4970]: I1209 12:11:48.846388 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="343921a4-b33b-485d-b5ac-84d3d87b6ed7" containerName="registry-server" Dec 09 12:11:48 crc kubenswrapper[4970]: I1209 12:11:48.846400 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eb07bf9-8f0a-48fe-b71b-dba75fd4fe16" containerName="registry-server" Dec 09 12:11:48 crc kubenswrapper[4970]: I1209 12:11:48.846804 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nhq2r" Dec 09 12:11:48 crc kubenswrapper[4970]: I1209 12:11:48.850135 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 09 12:11:48 crc kubenswrapper[4970]: I1209 12:11:48.850329 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 09 12:11:48 crc kubenswrapper[4970]: I1209 12:11:48.862207 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 09 12:11:48 crc kubenswrapper[4970]: I1209 12:11:48.862569 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 09 12:11:48 crc kubenswrapper[4970]: I1209 12:11:48.895450 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 09 12:11:48 crc kubenswrapper[4970]: I1209 12:11:48.907536 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nhq2r"] Dec 09 12:11:48 crc kubenswrapper[4970]: I1209 12:11:48.966034 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-8q6pf"] Dec 09 12:11:48 crc kubenswrapper[4970]: I1209 12:11:48.966802 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-8q6pf" Dec 09 12:11:48 crc kubenswrapper[4970]: I1209 12:11:48.990695 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Dec 09 12:11:48 crc kubenswrapper[4970]: I1209 12:11:48.992293 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Dec 09 12:11:48 crc kubenswrapper[4970]: I1209 12:11:48.992499 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Dec 09 12:11:48 crc kubenswrapper[4970]: I1209 12:11:48.992617 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Dec 09 12:11:48 crc kubenswrapper[4970]: I1209 12:11:48.995354 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Dec 09 12:11:48 crc kubenswrapper[4970]: I1209 12:11:48.998500 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-8q6pf"] Dec 09 12:11:48 crc kubenswrapper[4970]: I1209 12:11:48.999353 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/026bb6df-f015-4abc-92e2-81d9452dd101-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nhq2r\" (UID: \"026bb6df-f015-4abc-92e2-81d9452dd101\") " pod="openshift-marketplace/marketplace-operator-79b997595-nhq2r" Dec 09 12:11:48 crc kubenswrapper[4970]: I1209 12:11:48.999411 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/026bb6df-f015-4abc-92e2-81d9452dd101-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nhq2r\" (UID: \"026bb6df-f015-4abc-92e2-81d9452dd101\") " pod="openshift-marketplace/marketplace-operator-79b997595-nhq2r" Dec 09 12:11:48 crc kubenswrapper[4970]: I1209 12:11:48.999550 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp6l5\" (UniqueName: \"kubernetes.io/projected/026bb6df-f015-4abc-92e2-81d9452dd101-kube-api-access-kp6l5\") pod \"marketplace-operator-79b997595-nhq2r\" (UID: \"026bb6df-f015-4abc-92e2-81d9452dd101\") " pod="openshift-marketplace/marketplace-operator-79b997595-nhq2r" Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.011304 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4bx5d"] Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.011588 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-4bx5d" podUID="1dced0a5-18f4-4cf6-b497-1d8dad926744" containerName="controller-manager" containerID="cri-o://0ff82519db8ea4f9009f1d602f2388d5f5bfbb4357608803f5c661fde8dde93b" gracePeriod=30 Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.028317 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgssk"] Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.028623 4970 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgssk" podUID="95531044-b9d4-4231-ac7c-be2850f2cbfd" containerName="route-controller-manager" containerID="cri-o://b6532907579e2bad2fa0e66cec8c4a64aeca2c0a46c571583e23be3bbe4c0138" gracePeriod=30 Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.100514 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ec8d8438-915d-4023-aa92-09659b107ce9-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-8q6pf\" (UID: \"ec8d8438-915d-4023-aa92-09659b107ce9\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-8q6pf" Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.100572 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kp6l5\" (UniqueName: \"kubernetes.io/projected/026bb6df-f015-4abc-92e2-81d9452dd101-kube-api-access-kp6l5\") pod \"marketplace-operator-79b997595-nhq2r\" (UID: \"026bb6df-f015-4abc-92e2-81d9452dd101\") " pod="openshift-marketplace/marketplace-operator-79b997595-nhq2r" Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.100610 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/026bb6df-f015-4abc-92e2-81d9452dd101-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nhq2r\" (UID: \"026bb6df-f015-4abc-92e2-81d9452dd101\") " pod="openshift-marketplace/marketplace-operator-79b997595-nhq2r" Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.100652 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/026bb6df-f015-4abc-92e2-81d9452dd101-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nhq2r\" (UID: \"026bb6df-f015-4abc-92e2-81d9452dd101\") " pod="openshift-marketplace/marketplace-operator-79b997595-nhq2r" Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.100694 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ec8d8438-915d-4023-aa92-09659b107ce9-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-8q6pf\" (UID: \"ec8d8438-915d-4023-aa92-09659b107ce9\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-8q6pf" Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.100718 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2zxs\" (UniqueName: \"kubernetes.io/projected/ec8d8438-915d-4023-aa92-09659b107ce9-kube-api-access-d2zxs\") pod \"cluster-monitoring-operator-6d5b84845-8q6pf\" (UID: \"ec8d8438-915d-4023-aa92-09659b107ce9\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-8q6pf" Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.102317 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/026bb6df-f015-4abc-92e2-81d9452dd101-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nhq2r\" (UID: \"026bb6df-f015-4abc-92e2-81d9452dd101\") " pod="openshift-marketplace/marketplace-operator-79b997595-nhq2r" Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.107590 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/026bb6df-f015-4abc-92e2-81d9452dd101-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nhq2r\" (UID: \"026bb6df-f015-4abc-92e2-81d9452dd101\") " pod="openshift-marketplace/marketplace-operator-79b997595-nhq2r" Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.125390 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kp6l5\" (UniqueName: \"kubernetes.io/projected/026bb6df-f015-4abc-92e2-81d9452dd101-kube-api-access-kp6l5\") pod \"marketplace-operator-79b997595-nhq2r\" (UID: \"026bb6df-f015-4abc-92e2-81d9452dd101\") " pod="openshift-marketplace/marketplace-operator-79b997595-nhq2r" Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.165533 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nhq2r" Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.201847 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ec8d8438-915d-4023-aa92-09659b107ce9-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-8q6pf\" (UID: \"ec8d8438-915d-4023-aa92-09659b107ce9\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-8q6pf" Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.201910 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2zxs\" (UniqueName: \"kubernetes.io/projected/ec8d8438-915d-4023-aa92-09659b107ce9-kube-api-access-d2zxs\") pod \"cluster-monitoring-operator-6d5b84845-8q6pf\" (UID: \"ec8d8438-915d-4023-aa92-09659b107ce9\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-8q6pf" Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.201979 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ec8d8438-915d-4023-aa92-09659b107ce9-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-8q6pf\" (UID: \"ec8d8438-915d-4023-aa92-09659b107ce9\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-8q6pf" Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.203590 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/ec8d8438-915d-4023-aa92-09659b107ce9-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-8q6pf\" (UID: \"ec8d8438-915d-4023-aa92-09659b107ce9\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-8q6pf" Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.207851 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/ec8d8438-915d-4023-aa92-09659b107ce9-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-8q6pf\" (UID: \"ec8d8438-915d-4023-aa92-09659b107ce9\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-8q6pf" Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.224135 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2zxs\" (UniqueName: \"kubernetes.io/projected/ec8d8438-915d-4023-aa92-09659b107ce9-kube-api-access-d2zxs\") pod \"cluster-monitoring-operator-6d5b84845-8q6pf\" (UID: \"ec8d8438-915d-4023-aa92-09659b107ce9\") " 
pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-8q6pf" Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.296757 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-8q6pf" Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.402188 4970 generic.go:334] "Generic (PLEG): container finished" podID="95531044-b9d4-4231-ac7c-be2850f2cbfd" containerID="b6532907579e2bad2fa0e66cec8c4a64aeca2c0a46c571583e23be3bbe4c0138" exitCode=0 Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.403458 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgssk" event={"ID":"95531044-b9d4-4231-ac7c-be2850f2cbfd","Type":"ContainerDied","Data":"b6532907579e2bad2fa0e66cec8c4a64aeca2c0a46c571583e23be3bbe4c0138"} Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.414320 4970 generic.go:334] "Generic (PLEG): container finished" podID="1dced0a5-18f4-4cf6-b497-1d8dad926744" containerID="0ff82519db8ea4f9009f1d602f2388d5f5bfbb4357608803f5c661fde8dde93b" exitCode=0 Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.414364 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4bx5d" event={"ID":"1dced0a5-18f4-4cf6-b497-1d8dad926744","Type":"ContainerDied","Data":"0ff82519db8ea4f9009f1d602f2388d5f5bfbb4357608803f5c661fde8dde93b"} Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.414568 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgssk" Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.425189 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4bx5d" Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.429464 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nhq2r"] Dec 09 12:11:49 crc kubenswrapper[4970]: W1209 12:11:49.456145 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod026bb6df_f015_4abc_92e2_81d9452dd101.slice/crio-95df065382342737887508f567270522eafb15b6e3eca8d01398305b10cb4370 WatchSource:0}: Error finding container 95df065382342737887508f567270522eafb15b6e3eca8d01398305b10cb4370: Status 404 returned error can't find the container with id 95df065382342737887508f567270522eafb15b6e3eca8d01398305b10cb4370 Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.507951 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95531044-b9d4-4231-ac7c-be2850f2cbfd-config\") pod \"95531044-b9d4-4231-ac7c-be2850f2cbfd\" (UID: \"95531044-b9d4-4231-ac7c-be2850f2cbfd\") " Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.508199 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dced0a5-18f4-4cf6-b497-1d8dad926744-serving-cert\") pod \"1dced0a5-18f4-4cf6-b497-1d8dad926744\" (UID: \"1dced0a5-18f4-4cf6-b497-1d8dad926744\") " Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.508230 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8wbb\" (UniqueName: \"kubernetes.io/projected/95531044-b9d4-4231-ac7c-be2850f2cbfd-kube-api-access-m8wbb\") pod \"95531044-b9d4-4231-ac7c-be2850f2cbfd\" (UID: \"95531044-b9d4-4231-ac7c-be2850f2cbfd\") " Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.508267 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1dced0a5-18f4-4cf6-b497-1d8dad926744-client-ca\") pod \"1dced0a5-18f4-4cf6-b497-1d8dad926744\" (UID: \"1dced0a5-18f4-4cf6-b497-1d8dad926744\") " Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.508288 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dced0a5-18f4-4cf6-b497-1d8dad926744-config\") pod \"1dced0a5-18f4-4cf6-b497-1d8dad926744\" (UID: \"1dced0a5-18f4-4cf6-b497-1d8dad926744\") " Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.508314 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95531044-b9d4-4231-ac7c-be2850f2cbfd-serving-cert\") pod \"95531044-b9d4-4231-ac7c-be2850f2cbfd\" (UID: \"95531044-b9d4-4231-ac7c-be2850f2cbfd\") " Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.508349 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95531044-b9d4-4231-ac7c-be2850f2cbfd-client-ca\") pod \"95531044-b9d4-4231-ac7c-be2850f2cbfd\" (UID: \"95531044-b9d4-4231-ac7c-be2850f2cbfd\") " Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.508409 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1dced0a5-18f4-4cf6-b497-1d8dad926744-proxy-ca-bundles\") pod \"1dced0a5-18f4-4cf6-b497-1d8dad926744\" 
(UID: \"1dced0a5-18f4-4cf6-b497-1d8dad926744\") " Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.508432 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99jm7\" (UniqueName: \"kubernetes.io/projected/1dced0a5-18f4-4cf6-b497-1d8dad926744-kube-api-access-99jm7\") pod \"1dced0a5-18f4-4cf6-b497-1d8dad926744\" (UID: \"1dced0a5-18f4-4cf6-b497-1d8dad926744\") " Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.509337 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95531044-b9d4-4231-ac7c-be2850f2cbfd-client-ca" (OuterVolumeSpecName: "client-ca") pod "95531044-b9d4-4231-ac7c-be2850f2cbfd" (UID: "95531044-b9d4-4231-ac7c-be2850f2cbfd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.509419 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dced0a5-18f4-4cf6-b497-1d8dad926744-config" (OuterVolumeSpecName: "config") pod "1dced0a5-18f4-4cf6-b497-1d8dad926744" (UID: "1dced0a5-18f4-4cf6-b497-1d8dad926744"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.509929 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dced0a5-18f4-4cf6-b497-1d8dad926744-client-ca" (OuterVolumeSpecName: "client-ca") pod "1dced0a5-18f4-4cf6-b497-1d8dad926744" (UID: "1dced0a5-18f4-4cf6-b497-1d8dad926744"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.510237 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95531044-b9d4-4231-ac7c-be2850f2cbfd-config" (OuterVolumeSpecName: "config") pod "95531044-b9d4-4231-ac7c-be2850f2cbfd" (UID: "95531044-b9d4-4231-ac7c-be2850f2cbfd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.510316 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dced0a5-18f4-4cf6-b497-1d8dad926744-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1dced0a5-18f4-4cf6-b497-1d8dad926744" (UID: "1dced0a5-18f4-4cf6-b497-1d8dad926744"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.514910 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dced0a5-18f4-4cf6-b497-1d8dad926744-kube-api-access-99jm7" (OuterVolumeSpecName: "kube-api-access-99jm7") pod "1dced0a5-18f4-4cf6-b497-1d8dad926744" (UID: "1dced0a5-18f4-4cf6-b497-1d8dad926744"). InnerVolumeSpecName "kube-api-access-99jm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.521345 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95531044-b9d4-4231-ac7c-be2850f2cbfd-kube-api-access-m8wbb" (OuterVolumeSpecName: "kube-api-access-m8wbb") pod "95531044-b9d4-4231-ac7c-be2850f2cbfd" (UID: "95531044-b9d4-4231-ac7c-be2850f2cbfd"). InnerVolumeSpecName "kube-api-access-m8wbb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.523477 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dced0a5-18f4-4cf6-b497-1d8dad926744-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1dced0a5-18f4-4cf6-b497-1d8dad926744" (UID: "1dced0a5-18f4-4cf6-b497-1d8dad926744"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.524536 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95531044-b9d4-4231-ac7c-be2850f2cbfd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "95531044-b9d4-4231-ac7c-be2850f2cbfd" (UID: "95531044-b9d4-4231-ac7c-be2850f2cbfd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.546165 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-8q6pf"] Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.610069 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95531044-b9d4-4231-ac7c-be2850f2cbfd-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.610108 4970 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1dced0a5-18f4-4cf6-b497-1d8dad926744-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.610123 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8wbb\" (UniqueName: \"kubernetes.io/projected/95531044-b9d4-4231-ac7c-be2850f2cbfd-kube-api-access-m8wbb\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.610136 4970 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1dced0a5-18f4-4cf6-b497-1d8dad926744-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.610147 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dced0a5-18f4-4cf6-b497-1d8dad926744-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.610156 4970 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95531044-b9d4-4231-ac7c-be2850f2cbfd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.610164 4970 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95531044-b9d4-4231-ac7c-be2850f2cbfd-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.610171 4970 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1dced0a5-18f4-4cf6-b497-1d8dad926744-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:49 crc kubenswrapper[4970]: I1209 12:11:49.610179 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99jm7\" (UniqueName: \"kubernetes.io/projected/1dced0a5-18f4-4cf6-b497-1d8dad926744-kube-api-access-99jm7\") on node \"crc\" DevicePath \"\"" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.177528 4970 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-598bfcc8c5-2pjj6"] Dec 09 12:11:50 crc kubenswrapper[4970]: E1209 12:11:50.177783 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dced0a5-18f4-4cf6-b497-1d8dad926744" containerName="controller-manager" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.177795 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dced0a5-18f4-4cf6-b497-1d8dad926744" containerName="controller-manager" Dec 09 12:11:50 crc kubenswrapper[4970]: E1209 12:11:50.177807 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95531044-b9d4-4231-ac7c-be2850f2cbfd" containerName="route-controller-manager" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.177813 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="95531044-b9d4-4231-ac7c-be2850f2cbfd" containerName="route-controller-manager" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.177901 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dced0a5-18f4-4cf6-b497-1d8dad926744" containerName="controller-manager" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.177912 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="95531044-b9d4-4231-ac7c-be2850f2cbfd" containerName="route-controller-manager" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.178368 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-598bfcc8c5-2pjj6" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.182000 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6b565b4bcd-p6jfs"] Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.182758 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6b565b4bcd-p6jfs" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.185929 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-598bfcc8c5-2pjj6"] Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.197692 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b565b4bcd-p6jfs"] Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.217811 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q2hf\" (UniqueName: \"kubernetes.io/projected/16eea74f-412f-498f-aed1-56d35233539d-kube-api-access-5q2hf\") pod \"route-controller-manager-598bfcc8c5-2pjj6\" (UID: \"16eea74f-412f-498f-aed1-56d35233539d\") " pod="openshift-route-controller-manager/route-controller-manager-598bfcc8c5-2pjj6" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.217861 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16eea74f-412f-498f-aed1-56d35233539d-serving-cert\") pod \"route-controller-manager-598bfcc8c5-2pjj6\" (UID: \"16eea74f-412f-498f-aed1-56d35233539d\") " pod="openshift-route-controller-manager/route-controller-manager-598bfcc8c5-2pjj6" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.217889 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a11a45bf-b936-463c-ad9c-71bcac4e4532-proxy-ca-bundles\") pod \"controller-manager-6b565b4bcd-p6jfs\" (UID: \"a11a45bf-b936-463c-ad9c-71bcac4e4532\") " pod="openshift-controller-manager/controller-manager-6b565b4bcd-p6jfs" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.217907 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a11a45bf-b936-463c-ad9c-71bcac4e4532-client-ca\") pod \"controller-manager-6b565b4bcd-p6jfs\" (UID: \"a11a45bf-b936-463c-ad9c-71bcac4e4532\") " pod="openshift-controller-manager/controller-manager-6b565b4bcd-p6jfs" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.218091 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsf44\" (UniqueName: \"kubernetes.io/projected/a11a45bf-b936-463c-ad9c-71bcac4e4532-kube-api-access-zsf44\") pod \"controller-manager-6b565b4bcd-p6jfs\" (UID: \"a11a45bf-b936-463c-ad9c-71bcac4e4532\") " pod="openshift-controller-manager/controller-manager-6b565b4bcd-p6jfs" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.218172 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a11a45bf-b936-463c-ad9c-71bcac4e4532-serving-cert\") pod \"controller-manager-6b565b4bcd-p6jfs\" (UID: \"a11a45bf-b936-463c-ad9c-71bcac4e4532\") " pod="openshift-controller-manager/controller-manager-6b565b4bcd-p6jfs" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.218239 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a11a45bf-b936-463c-ad9c-71bcac4e4532-config\") pod \"controller-manager-6b565b4bcd-p6jfs\" (UID: \"a11a45bf-b936-463c-ad9c-71bcac4e4532\") " 
pod="openshift-controller-manager/controller-manager-6b565b4bcd-p6jfs" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.218300 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16eea74f-412f-498f-aed1-56d35233539d-config\") pod \"route-controller-manager-598bfcc8c5-2pjj6\" (UID: \"16eea74f-412f-498f-aed1-56d35233539d\") " pod="openshift-route-controller-manager/route-controller-manager-598bfcc8c5-2pjj6" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.218325 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16eea74f-412f-498f-aed1-56d35233539d-client-ca\") pod \"route-controller-manager-598bfcc8c5-2pjj6\" (UID: \"16eea74f-412f-498f-aed1-56d35233539d\") " pod="openshift-route-controller-manager/route-controller-manager-598bfcc8c5-2pjj6" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.319839 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsf44\" (UniqueName: \"kubernetes.io/projected/a11a45bf-b936-463c-ad9c-71bcac4e4532-kube-api-access-zsf44\") pod \"controller-manager-6b565b4bcd-p6jfs\" (UID: \"a11a45bf-b936-463c-ad9c-71bcac4e4532\") " pod="openshift-controller-manager/controller-manager-6b565b4bcd-p6jfs" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.319918 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a11a45bf-b936-463c-ad9c-71bcac4e4532-serving-cert\") pod \"controller-manager-6b565b4bcd-p6jfs\" (UID: \"a11a45bf-b936-463c-ad9c-71bcac4e4532\") " pod="openshift-controller-manager/controller-manager-6b565b4bcd-p6jfs" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.319961 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a11a45bf-b936-463c-ad9c-71bcac4e4532-config\") pod \"controller-manager-6b565b4bcd-p6jfs\" (UID: \"a11a45bf-b936-463c-ad9c-71bcac4e4532\") " pod="openshift-controller-manager/controller-manager-6b565b4bcd-p6jfs" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.319988 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16eea74f-412f-498f-aed1-56d35233539d-config\") pod \"route-controller-manager-598bfcc8c5-2pjj6\" (UID: \"16eea74f-412f-498f-aed1-56d35233539d\") " pod="openshift-route-controller-manager/route-controller-manager-598bfcc8c5-2pjj6" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.320009 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16eea74f-412f-498f-aed1-56d35233539d-client-ca\") pod \"route-controller-manager-598bfcc8c5-2pjj6\" (UID: \"16eea74f-412f-498f-aed1-56d35233539d\") " pod="openshift-route-controller-manager/route-controller-manager-598bfcc8c5-2pjj6" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.320042 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q2hf\" (UniqueName: \"kubernetes.io/projected/16eea74f-412f-498f-aed1-56d35233539d-kube-api-access-5q2hf\") pod \"route-controller-manager-598bfcc8c5-2pjj6\" (UID: \"16eea74f-412f-498f-aed1-56d35233539d\") " pod="openshift-route-controller-manager/route-controller-manager-598bfcc8c5-2pjj6" Dec 09 12:11:50 crc 
kubenswrapper[4970]: I1209 12:11:50.320070 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16eea74f-412f-498f-aed1-56d35233539d-serving-cert\") pod \"route-controller-manager-598bfcc8c5-2pjj6\" (UID: \"16eea74f-412f-498f-aed1-56d35233539d\") " pod="openshift-route-controller-manager/route-controller-manager-598bfcc8c5-2pjj6" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.320104 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a11a45bf-b936-463c-ad9c-71bcac4e4532-proxy-ca-bundles\") pod \"controller-manager-6b565b4bcd-p6jfs\" (UID: \"a11a45bf-b936-463c-ad9c-71bcac4e4532\") " pod="openshift-controller-manager/controller-manager-6b565b4bcd-p6jfs" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.320126 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a11a45bf-b936-463c-ad9c-71bcac4e4532-client-ca\") pod \"controller-manager-6b565b4bcd-p6jfs\" (UID: \"a11a45bf-b936-463c-ad9c-71bcac4e4532\") " pod="openshift-controller-manager/controller-manager-6b565b4bcd-p6jfs" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.321278 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16eea74f-412f-498f-aed1-56d35233539d-client-ca\") pod \"route-controller-manager-598bfcc8c5-2pjj6\" (UID: \"16eea74f-412f-498f-aed1-56d35233539d\") " pod="openshift-route-controller-manager/route-controller-manager-598bfcc8c5-2pjj6" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.321301 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a11a45bf-b936-463c-ad9c-71bcac4e4532-client-ca\") pod \"controller-manager-6b565b4bcd-p6jfs\" (UID: \"a11a45bf-b936-463c-ad9c-71bcac4e4532\") " pod="openshift-controller-manager/controller-manager-6b565b4bcd-p6jfs" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.322854 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a11a45bf-b936-463c-ad9c-71bcac4e4532-proxy-ca-bundles\") pod \"controller-manager-6b565b4bcd-p6jfs\" (UID: \"a11a45bf-b936-463c-ad9c-71bcac4e4532\") " pod="openshift-controller-manager/controller-manager-6b565b4bcd-p6jfs" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.322910 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a11a45bf-b936-463c-ad9c-71bcac4e4532-config\") pod \"controller-manager-6b565b4bcd-p6jfs\" (UID: \"a11a45bf-b936-463c-ad9c-71bcac4e4532\") " pod="openshift-controller-manager/controller-manager-6b565b4bcd-p6jfs" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.325207 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16eea74f-412f-498f-aed1-56d35233539d-config\") pod \"route-controller-manager-598bfcc8c5-2pjj6\" (UID: \"16eea74f-412f-498f-aed1-56d35233539d\") " pod="openshift-route-controller-manager/route-controller-manager-598bfcc8c5-2pjj6" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.325297 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16eea74f-412f-498f-aed1-56d35233539d-serving-cert\") pod 
\"route-controller-manager-598bfcc8c5-2pjj6\" (UID: \"16eea74f-412f-498f-aed1-56d35233539d\") " pod="openshift-route-controller-manager/route-controller-manager-598bfcc8c5-2pjj6" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.325326 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a11a45bf-b936-463c-ad9c-71bcac4e4532-serving-cert\") pod \"controller-manager-6b565b4bcd-p6jfs\" (UID: \"a11a45bf-b936-463c-ad9c-71bcac4e4532\") " pod="openshift-controller-manager/controller-manager-6b565b4bcd-p6jfs" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.336876 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsf44\" (UniqueName: \"kubernetes.io/projected/a11a45bf-b936-463c-ad9c-71bcac4e4532-kube-api-access-zsf44\") pod \"controller-manager-6b565b4bcd-p6jfs\" (UID: \"a11a45bf-b936-463c-ad9c-71bcac4e4532\") " pod="openshift-controller-manager/controller-manager-6b565b4bcd-p6jfs" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.338154 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q2hf\" (UniqueName: \"kubernetes.io/projected/16eea74f-412f-498f-aed1-56d35233539d-kube-api-access-5q2hf\") pod \"route-controller-manager-598bfcc8c5-2pjj6\" (UID: \"16eea74f-412f-498f-aed1-56d35233539d\") " pod="openshift-route-controller-manager/route-controller-manager-598bfcc8c5-2pjj6" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.420932 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-4bx5d" event={"ID":"1dced0a5-18f4-4cf6-b497-1d8dad926744","Type":"ContainerDied","Data":"51df96a2af44185c52e8197f3af37d52bf62ee4df4dfcaec035f6235c26bee17"} Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.420981 4970 scope.go:117] "RemoveContainer" containerID="0ff82519db8ea4f9009f1d602f2388d5f5bfbb4357608803f5c661fde8dde93b" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.421296 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-4bx5d" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.423127 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgssk" event={"ID":"95531044-b9d4-4231-ac7c-be2850f2cbfd","Type":"ContainerDied","Data":"0e1506fdc57bec566ed47f3124d42ad997fc0adc331d43b82719ce89474cfb5b"} Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.423285 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgssk" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.424792 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-8q6pf" event={"ID":"ec8d8438-915d-4023-aa92-09659b107ce9","Type":"ContainerStarted","Data":"87ba8d34ff532ad825d4f531a7151977d974856083dbfc289eaaf550346b12b3"} Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.425993 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nhq2r" event={"ID":"026bb6df-f015-4abc-92e2-81d9452dd101","Type":"ContainerStarted","Data":"3f4992dfc974f2656d1887fcd87eac6288d3b5a4eeec277e088e74f9dc385a63"} Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.426153 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nhq2r" event={"ID":"026bb6df-f015-4abc-92e2-81d9452dd101","Type":"ContainerStarted","Data":"95df065382342737887508f567270522eafb15b6e3eca8d01398305b10cb4370"} Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.426290 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-nhq2r" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.428643 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-nhq2r" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.436165 4970 scope.go:117] "RemoveContainer" containerID="b6532907579e2bad2fa0e66cec8c4a64aeca2c0a46c571583e23be3bbe4c0138" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.442943 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgssk"] Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.446909 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgssk"] Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.450674 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4bx5d"] Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.453856 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-4bx5d"] Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.459349 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-nhq2r" podStartSLOduration=2.459331852 podStartE2EDuration="2.459331852s" podCreationTimestamp="2025-12-09 12:11:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:11:50.458157008 +0000 UTC m=+323.018638059" watchObservedRunningTime="2025-12-09 12:11:50.459331852 +0000 UTC m=+323.019812903" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.505341 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-598bfcc8c5-2pjj6" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.514430 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6b565b4bcd-p6jfs" Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.712594 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-598bfcc8c5-2pjj6"] Dec 09 12:11:50 crc kubenswrapper[4970]: I1209 12:11:50.749958 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6b565b4bcd-p6jfs"] Dec 09 12:11:50 crc kubenswrapper[4970]: W1209 12:11:50.757737 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda11a45bf_b936_463c_ad9c_71bcac4e4532.slice/crio-89c2812841bfa2f298ec1bb4f8597eca0c57bbe162564a82cb2f019fbba944bf WatchSource:0}: Error finding container 89c2812841bfa2f298ec1bb4f8597eca0c57bbe162564a82cb2f019fbba944bf: Status 404 returned error can't find the container with id 89c2812841bfa2f298ec1bb4f8597eca0c57bbe162564a82cb2f019fbba944bf Dec 09 12:11:51 crc kubenswrapper[4970]: I1209 12:11:51.436333 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-598bfcc8c5-2pjj6" event={"ID":"16eea74f-412f-498f-aed1-56d35233539d","Type":"ContainerStarted","Data":"bb7de6f9222360ea456b903694fa6e8af2b7f1bf6984835783998b3d1b6fdd1c"} Dec 09 12:11:51 crc kubenswrapper[4970]: I1209 12:11:51.438372 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-598bfcc8c5-2pjj6" event={"ID":"16eea74f-412f-498f-aed1-56d35233539d","Type":"ContainerStarted","Data":"dd4daa1fc9099620c72b665f70daf3a7fc73b7b654cebd6674d8a243b91adc2c"} Dec 09 12:11:51 crc kubenswrapper[4970]: I1209 12:11:51.438494 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-598bfcc8c5-2pjj6" Dec 09 12:11:51 crc kubenswrapper[4970]: I1209 12:11:51.438842 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b565b4bcd-p6jfs" event={"ID":"a11a45bf-b936-463c-ad9c-71bcac4e4532","Type":"ContainerStarted","Data":"bfac030b4128f31330f1dec4c67fd068b2afd0129d8dfde88cedb6142c065f00"} Dec 09 12:11:51 crc kubenswrapper[4970]: I1209 12:11:51.438959 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b565b4bcd-p6jfs" event={"ID":"a11a45bf-b936-463c-ad9c-71bcac4e4532","Type":"ContainerStarted","Data":"89c2812841bfa2f298ec1bb4f8597eca0c57bbe162564a82cb2f019fbba944bf"} Dec 09 12:11:51 crc kubenswrapper[4970]: I1209 12:11:51.439373 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6b565b4bcd-p6jfs" Dec 09 12:11:51 crc kubenswrapper[4970]: I1209 12:11:51.443142 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6b565b4bcd-p6jfs" Dec 09 12:11:51 crc kubenswrapper[4970]: I1209 12:11:51.448005 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-598bfcc8c5-2pjj6" Dec 09 12:11:51 crc kubenswrapper[4970]: I1209 12:11:51.540709 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-598bfcc8c5-2pjj6" podStartSLOduration=2.540694707 podStartE2EDuration="2.540694707s" 
podCreationTimestamp="2025-12-09 12:11:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:11:51.499383079 +0000 UTC m=+324.059864140" watchObservedRunningTime="2025-12-09 12:11:51.540694707 +0000 UTC m=+324.101175758" Dec 09 12:11:51 crc kubenswrapper[4970]: I1209 12:11:51.565678 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6b565b4bcd-p6jfs" podStartSLOduration=2.565656901 podStartE2EDuration="2.565656901s" podCreationTimestamp="2025-12-09 12:11:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:11:51.563564411 +0000 UTC m=+324.124045472" watchObservedRunningTime="2025-12-09 12:11:51.565656901 +0000 UTC m=+324.126137952" Dec 09 12:11:51 crc kubenswrapper[4970]: I1209 12:11:51.821157 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dced0a5-18f4-4cf6-b497-1d8dad926744" path="/var/lib/kubelet/pods/1dced0a5-18f4-4cf6-b497-1d8dad926744/volumes" Dec 09 12:11:51 crc kubenswrapper[4970]: I1209 12:11:51.822617 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95531044-b9d4-4231-ac7c-be2850f2cbfd" path="/var/lib/kubelet/pods/95531044-b9d4-4231-ac7c-be2850f2cbfd/volumes" Dec 09 12:11:53 crc kubenswrapper[4970]: I1209 12:11:53.452088 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-8q6pf" event={"ID":"ec8d8438-915d-4023-aa92-09659b107ce9","Type":"ContainerStarted","Data":"59e66ac2fd0ac136ae12e778317cbf8236abca2f1bd4960eb3563ce462f20ded"} Dec 09 12:11:53 crc kubenswrapper[4970]: I1209 12:11:53.464370 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-8q6pf" podStartSLOduration=1.9389721180000001 podStartE2EDuration="5.464351355s" podCreationTimestamp="2025-12-09 12:11:48 +0000 UTC" firstStartedPulling="2025-12-09 12:11:49.562348594 +0000 UTC m=+322.122829645" lastFinishedPulling="2025-12-09 12:11:53.087727831 +0000 UTC m=+325.648208882" observedRunningTime="2025-12-09 12:11:53.464035336 +0000 UTC m=+326.024516407" watchObservedRunningTime="2025-12-09 12:11:53.464351355 +0000 UTC m=+326.024832406" Dec 09 12:11:53 crc kubenswrapper[4970]: I1209 12:11:53.648911 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zcmnk"] Dec 09 12:11:53 crc kubenswrapper[4970]: I1209 12:11:53.649682 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zcmnk" Dec 09 12:11:53 crc kubenswrapper[4970]: I1209 12:11:53.652033 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Dec 09 12:11:53 crc kubenswrapper[4970]: I1209 12:11:53.652394 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-knb6g" Dec 09 12:11:53 crc kubenswrapper[4970]: I1209 12:11:53.666350 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zcmnk"] Dec 09 12:11:53 crc kubenswrapper[4970]: I1209 12:11:53.768155 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/cd18c470-0773-47ce-8042-42f4493883da-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-zcmnk\" (UID: \"cd18c470-0773-47ce-8042-42f4493883da\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zcmnk" Dec 09 12:11:53 crc kubenswrapper[4970]: I1209 12:11:53.870473 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/cd18c470-0773-47ce-8042-42f4493883da-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-zcmnk\" (UID: \"cd18c470-0773-47ce-8042-42f4493883da\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zcmnk" Dec 09 12:11:53 crc kubenswrapper[4970]: I1209 12:11:53.877155 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/cd18c470-0773-47ce-8042-42f4493883da-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-zcmnk\" (UID: \"cd18c470-0773-47ce-8042-42f4493883da\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zcmnk" Dec 09 12:11:53 crc kubenswrapper[4970]: I1209 12:11:53.965674 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zcmnk" Dec 09 12:11:54 crc kubenswrapper[4970]: I1209 12:11:54.349855 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zcmnk"] Dec 09 12:11:54 crc kubenswrapper[4970]: W1209 12:11:54.357043 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd18c470_0773_47ce_8042_42f4493883da.slice/crio-e20e7cc95b86e660c650497df1817cd6d018d03463e09a5bd756cd868423f8ca WatchSource:0}: Error finding container e20e7cc95b86e660c650497df1817cd6d018d03463e09a5bd756cd868423f8ca: Status 404 returned error can't find the container with id e20e7cc95b86e660c650497df1817cd6d018d03463e09a5bd756cd868423f8ca Dec 09 12:11:54 crc kubenswrapper[4970]: I1209 12:11:54.460188 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zcmnk" event={"ID":"cd18c470-0773-47ce-8042-42f4493883da","Type":"ContainerStarted","Data":"e20e7cc95b86e660c650497df1817cd6d018d03463e09a5bd756cd868423f8ca"} Dec 09 12:11:56 crc kubenswrapper[4970]: I1209 12:11:56.474076 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zcmnk" event={"ID":"cd18c470-0773-47ce-8042-42f4493883da","Type":"ContainerStarted","Data":"73a4f82835b334a526c127bd3b06ecc1e360026ce308b24f1458e36f5c21408e"} Dec 09 12:11:56 crc kubenswrapper[4970]: I1209 12:11:56.474456 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zcmnk" Dec 09 12:11:56 crc kubenswrapper[4970]: I1209 12:11:56.480026 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zcmnk" Dec 09 12:11:56 crc kubenswrapper[4970]: I1209 12:11:56.487867 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-zcmnk" podStartSLOduration=1.882840895 podStartE2EDuration="3.487854612s" podCreationTimestamp="2025-12-09 12:11:53 +0000 UTC" firstStartedPulling="2025-12-09 12:11:54.358935943 +0000 UTC m=+326.919417014" lastFinishedPulling="2025-12-09 12:11:55.96394968 +0000 UTC m=+328.524430731" observedRunningTime="2025-12-09 12:11:56.485781674 +0000 UTC m=+329.046262735" watchObservedRunningTime="2025-12-09 12:11:56.487854612 +0000 UTC m=+329.048335663" Dec 09 12:11:56 crc kubenswrapper[4970]: I1209 12:11:56.700044 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-4nfjj"] Dec 09 12:11:56 crc kubenswrapper[4970]: I1209 12:11:56.700843 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-4nfjj" Dec 09 12:11:56 crc kubenswrapper[4970]: I1209 12:11:56.702361 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-9kszp" Dec 09 12:11:56 crc kubenswrapper[4970]: I1209 12:11:56.702572 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Dec 09 12:11:56 crc kubenswrapper[4970]: I1209 12:11:56.702962 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Dec 09 12:11:56 crc kubenswrapper[4970]: I1209 12:11:56.703621 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Dec 09 12:11:56 crc kubenswrapper[4970]: I1209 12:11:56.725649 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-4nfjj"] Dec 09 12:11:56 crc kubenswrapper[4970]: I1209 12:11:56.809698 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6c64eed4-c792-4209-a3d1-f670b308d2c4-metrics-client-ca\") pod \"prometheus-operator-db54df47d-4nfjj\" (UID: \"6c64eed4-c792-4209-a3d1-f670b308d2c4\") " pod="openshift-monitoring/prometheus-operator-db54df47d-4nfjj" Dec 09 12:11:56 crc kubenswrapper[4970]: I1209 12:11:56.809780 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6c64eed4-c792-4209-a3d1-f670b308d2c4-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-4nfjj\" (UID: \"6c64eed4-c792-4209-a3d1-f670b308d2c4\") " pod="openshift-monitoring/prometheus-operator-db54df47d-4nfjj" Dec 09 12:11:56 crc kubenswrapper[4970]: I1209 12:11:56.809811 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np54h\" (UniqueName: \"kubernetes.io/projected/6c64eed4-c792-4209-a3d1-f670b308d2c4-kube-api-access-np54h\") pod \"prometheus-operator-db54df47d-4nfjj\" (UID: \"6c64eed4-c792-4209-a3d1-f670b308d2c4\") " pod="openshift-monitoring/prometheus-operator-db54df47d-4nfjj" Dec 09 12:11:56 crc kubenswrapper[4970]: I1209 12:11:56.810005 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/6c64eed4-c792-4209-a3d1-f670b308d2c4-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-4nfjj\" (UID: \"6c64eed4-c792-4209-a3d1-f670b308d2c4\") " pod="openshift-monitoring/prometheus-operator-db54df47d-4nfjj" Dec 09 12:11:56 crc kubenswrapper[4970]: I1209 12:11:56.911091 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6c64eed4-c792-4209-a3d1-f670b308d2c4-metrics-client-ca\") pod \"prometheus-operator-db54df47d-4nfjj\" (UID: \"6c64eed4-c792-4209-a3d1-f670b308d2c4\") " pod="openshift-monitoring/prometheus-operator-db54df47d-4nfjj" Dec 09 12:11:56 crc kubenswrapper[4970]: I1209 12:11:56.911636 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6c64eed4-c792-4209-a3d1-f670b308d2c4-metrics-client-ca\") pod \"prometheus-operator-db54df47d-4nfjj\" 
(UID: \"6c64eed4-c792-4209-a3d1-f670b308d2c4\") " pod="openshift-monitoring/prometheus-operator-db54df47d-4nfjj" Dec 09 12:11:56 crc kubenswrapper[4970]: I1209 12:11:56.911469 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6c64eed4-c792-4209-a3d1-f670b308d2c4-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-4nfjj\" (UID: \"6c64eed4-c792-4209-a3d1-f670b308d2c4\") " pod="openshift-monitoring/prometheus-operator-db54df47d-4nfjj" Dec 09 12:11:56 crc kubenswrapper[4970]: I1209 12:11:56.912203 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np54h\" (UniqueName: \"kubernetes.io/projected/6c64eed4-c792-4209-a3d1-f670b308d2c4-kube-api-access-np54h\") pod \"prometheus-operator-db54df47d-4nfjj\" (UID: \"6c64eed4-c792-4209-a3d1-f670b308d2c4\") " pod="openshift-monitoring/prometheus-operator-db54df47d-4nfjj" Dec 09 12:11:56 crc kubenswrapper[4970]: I1209 12:11:56.912526 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/6c64eed4-c792-4209-a3d1-f670b308d2c4-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-4nfjj\" (UID: \"6c64eed4-c792-4209-a3d1-f670b308d2c4\") " pod="openshift-monitoring/prometheus-operator-db54df47d-4nfjj" Dec 09 12:11:56 crc kubenswrapper[4970]: I1209 12:11:56.931273 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6c64eed4-c792-4209-a3d1-f670b308d2c4-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-4nfjj\" (UID: \"6c64eed4-c792-4209-a3d1-f670b308d2c4\") " pod="openshift-monitoring/prometheus-operator-db54df47d-4nfjj" Dec 09 12:11:56 crc kubenswrapper[4970]: I1209 12:11:56.931420 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/6c64eed4-c792-4209-a3d1-f670b308d2c4-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-4nfjj\" (UID: \"6c64eed4-c792-4209-a3d1-f670b308d2c4\") " pod="openshift-monitoring/prometheus-operator-db54df47d-4nfjj" Dec 09 12:11:56 crc kubenswrapper[4970]: I1209 12:11:56.935939 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np54h\" (UniqueName: \"kubernetes.io/projected/6c64eed4-c792-4209-a3d1-f670b308d2c4-kube-api-access-np54h\") pod \"prometheus-operator-db54df47d-4nfjj\" (UID: \"6c64eed4-c792-4209-a3d1-f670b308d2c4\") " pod="openshift-monitoring/prometheus-operator-db54df47d-4nfjj" Dec 09 12:11:57 crc kubenswrapper[4970]: I1209 12:11:57.015611 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-4nfjj" Dec 09 12:11:57 crc kubenswrapper[4970]: I1209 12:11:57.476530 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-4nfjj"] Dec 09 12:11:58 crc kubenswrapper[4970]: I1209 12:11:58.486412 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-4nfjj" event={"ID":"6c64eed4-c792-4209-a3d1-f670b308d2c4","Type":"ContainerStarted","Data":"0515d92c111524d033dc28e5b09c5a4ce5ee381151895cb27f13e63fc55e4fa7"} Dec 09 12:11:59 crc kubenswrapper[4970]: I1209 12:11:59.496538 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-4nfjj" event={"ID":"6c64eed4-c792-4209-a3d1-f670b308d2c4","Type":"ContainerStarted","Data":"252aa44756160e2c20897b9687e1fbb166c45c233eda86a3cfdaa19c5a26d76e"} Dec 09 12:11:59 crc kubenswrapper[4970]: I1209 12:11:59.496973 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-4nfjj" event={"ID":"6c64eed4-c792-4209-a3d1-f670b308d2c4","Type":"ContainerStarted","Data":"d2a37a086cd08380d1aca289965f3b4b3534480a05a113c4fc984369e2bd4c48"} Dec 09 12:11:59 crc kubenswrapper[4970]: I1209 12:11:59.517679 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-4nfjj" podStartSLOduration=1.9200570510000001 podStartE2EDuration="3.517663002s" podCreationTimestamp="2025-12-09 12:11:56 +0000 UTC" firstStartedPulling="2025-12-09 12:11:57.487735054 +0000 UTC m=+330.048216105" lastFinishedPulling="2025-12-09 12:11:59.085341005 +0000 UTC m=+331.645822056" observedRunningTime="2025-12-09 12:11:59.517064005 +0000 UTC m=+332.077545056" watchObservedRunningTime="2025-12-09 12:11:59.517663002 +0000 UTC m=+332.078144053" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.043562 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-27l8b"] Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.045189 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-27l8b" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.048500 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.048533 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.053255 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-7l6zt"] Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.055051 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7l6zt" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.062098 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-nz2ld" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.062492 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.062733 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.063359 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.063795 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-jdp5q" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.075337 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-27l8b"] Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.080999 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-84k2b"] Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.082852 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-84k2b" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.084932 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.084995 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.085790 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-7l6zt"] Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.086654 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-9bfln" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.171560 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/63ea5241-744d-4bce-b31b-3b31dd8ce422-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-27l8b\" (UID: \"63ea5241-744d-4bce-b31b-3b31dd8ce422\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-27l8b" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.171638 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/d8e74bf0-6863-493e-a79a-88b0aacdafbb-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-7l6zt\" (UID: \"d8e74bf0-6863-493e-a79a-88b0aacdafbb\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7l6zt" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.171679 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: 
\"kubernetes.io/configmap/d8e74bf0-6863-493e-a79a-88b0aacdafbb-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-7l6zt\" (UID: \"d8e74bf0-6863-493e-a79a-88b0aacdafbb\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7l6zt" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.171707 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llbgl\" (UniqueName: \"kubernetes.io/projected/63ea5241-744d-4bce-b31b-3b31dd8ce422-kube-api-access-llbgl\") pod \"openshift-state-metrics-566fddb674-27l8b\" (UID: \"63ea5241-744d-4bce-b31b-3b31dd8ce422\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-27l8b" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.171735 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d8e74bf0-6863-493e-a79a-88b0aacdafbb-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-7l6zt\" (UID: \"d8e74bf0-6863-493e-a79a-88b0aacdafbb\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7l6zt" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.171762 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/63ea5241-744d-4bce-b31b-3b31dd8ce422-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-27l8b\" (UID: \"63ea5241-744d-4bce-b31b-3b31dd8ce422\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-27l8b" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.171792 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbb78\" (UniqueName: \"kubernetes.io/projected/d8e74bf0-6863-493e-a79a-88b0aacdafbb-kube-api-access-kbb78\") pod \"kube-state-metrics-777cb5bd5d-7l6zt\" (UID: \"d8e74bf0-6863-493e-a79a-88b0aacdafbb\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7l6zt" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.171814 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/d8e74bf0-6863-493e-a79a-88b0aacdafbb-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-7l6zt\" (UID: \"d8e74bf0-6863-493e-a79a-88b0aacdafbb\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7l6zt" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.171851 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/63ea5241-744d-4bce-b31b-3b31dd8ce422-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-27l8b\" (UID: \"63ea5241-744d-4bce-b31b-3b31dd8ce422\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-27l8b" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.171881 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d8e74bf0-6863-493e-a79a-88b0aacdafbb-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-7l6zt\" (UID: \"d8e74bf0-6863-493e-a79a-88b0aacdafbb\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7l6zt" Dec 09 12:12:01 crc 
kubenswrapper[4970]: I1209 12:12:01.273212 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/d8e74bf0-6863-493e-a79a-88b0aacdafbb-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-7l6zt\" (UID: \"d8e74bf0-6863-493e-a79a-88b0aacdafbb\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7l6zt" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.273298 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/d8e74bf0-6863-493e-a79a-88b0aacdafbb-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-7l6zt\" (UID: \"d8e74bf0-6863-493e-a79a-88b0aacdafbb\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7l6zt" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.273327 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llbgl\" (UniqueName: \"kubernetes.io/projected/63ea5241-744d-4bce-b31b-3b31dd8ce422-kube-api-access-llbgl\") pod \"openshift-state-metrics-566fddb674-27l8b\" (UID: \"63ea5241-744d-4bce-b31b-3b31dd8ce422\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-27l8b" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.274567 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a-node-exporter-tls\") pod \"node-exporter-84k2b\" (UID: \"e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a\") " pod="openshift-monitoring/node-exporter-84k2b" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.274604 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d8e74bf0-6863-493e-a79a-88b0aacdafbb-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-7l6zt\" (UID: \"d8e74bf0-6863-493e-a79a-88b0aacdafbb\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7l6zt" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.274634 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a-metrics-client-ca\") pod \"node-exporter-84k2b\" (UID: \"e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a\") " pod="openshift-monitoring/node-exporter-84k2b" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.274658 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/63ea5241-744d-4bce-b31b-3b31dd8ce422-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-27l8b\" (UID: \"63ea5241-744d-4bce-b31b-3b31dd8ce422\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-27l8b" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.274680 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a-node-exporter-textfile\") pod \"node-exporter-84k2b\" (UID: \"e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a\") " pod="openshift-monitoring/node-exporter-84k2b" Dec 09 12:12:01 crc kubenswrapper[4970]: 
I1209 12:12:01.274707 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbb78\" (UniqueName: \"kubernetes.io/projected/d8e74bf0-6863-493e-a79a-88b0aacdafbb-kube-api-access-kbb78\") pod \"kube-state-metrics-777cb5bd5d-7l6zt\" (UID: \"d8e74bf0-6863-493e-a79a-88b0aacdafbb\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7l6zt" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.274732 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a-node-exporter-wtmp\") pod \"node-exporter-84k2b\" (UID: \"e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a\") " pod="openshift-monitoring/node-exporter-84k2b" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.274755 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/d8e74bf0-6863-493e-a79a-88b0aacdafbb-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-7l6zt\" (UID: \"d8e74bf0-6863-493e-a79a-88b0aacdafbb\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7l6zt" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.274794 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xxnf\" (UniqueName: \"kubernetes.io/projected/e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a-kube-api-access-5xxnf\") pod \"node-exporter-84k2b\" (UID: \"e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a\") " pod="openshift-monitoring/node-exporter-84k2b" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.274819 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/63ea5241-744d-4bce-b31b-3b31dd8ce422-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-27l8b\" (UID: \"63ea5241-744d-4bce-b31b-3b31dd8ce422\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-27l8b" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.274839 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a-sys\") pod \"node-exporter-84k2b\" (UID: \"e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a\") " pod="openshift-monitoring/node-exporter-84k2b" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.274864 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a-root\") pod \"node-exporter-84k2b\" (UID: \"e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a\") " pod="openshift-monitoring/node-exporter-84k2b" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.274894 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d8e74bf0-6863-493e-a79a-88b0aacdafbb-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-7l6zt\" (UID: \"d8e74bf0-6863-493e-a79a-88b0aacdafbb\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7l6zt" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.274917 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-84k2b\" (UID: \"e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a\") " pod="openshift-monitoring/node-exporter-84k2b" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.275137 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/63ea5241-744d-4bce-b31b-3b31dd8ce422-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-27l8b\" (UID: \"63ea5241-744d-4bce-b31b-3b31dd8ce422\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-27l8b" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.275855 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/63ea5241-744d-4bce-b31b-3b31dd8ce422-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-27l8b\" (UID: \"63ea5241-744d-4bce-b31b-3b31dd8ce422\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-27l8b" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.274517 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/d8e74bf0-6863-493e-a79a-88b0aacdafbb-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-7l6zt\" (UID: \"d8e74bf0-6863-493e-a79a-88b0aacdafbb\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7l6zt" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.276524 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/d8e74bf0-6863-493e-a79a-88b0aacdafbb-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-7l6zt\" (UID: \"d8e74bf0-6863-493e-a79a-88b0aacdafbb\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7l6zt" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.277418 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d8e74bf0-6863-493e-a79a-88b0aacdafbb-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-7l6zt\" (UID: \"d8e74bf0-6863-493e-a79a-88b0aacdafbb\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7l6zt" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.280965 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/d8e74bf0-6863-493e-a79a-88b0aacdafbb-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-7l6zt\" (UID: \"d8e74bf0-6863-493e-a79a-88b0aacdafbb\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7l6zt" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.281033 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/63ea5241-744d-4bce-b31b-3b31dd8ce422-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-27l8b\" (UID: \"63ea5241-744d-4bce-b31b-3b31dd8ce422\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-27l8b" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.282078 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/63ea5241-744d-4bce-b31b-3b31dd8ce422-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-27l8b\" (UID: \"63ea5241-744d-4bce-b31b-3b31dd8ce422\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-27l8b" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.287449 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d8e74bf0-6863-493e-a79a-88b0aacdafbb-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-7l6zt\" (UID: \"d8e74bf0-6863-493e-a79a-88b0aacdafbb\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7l6zt" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.302077 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbb78\" (UniqueName: \"kubernetes.io/projected/d8e74bf0-6863-493e-a79a-88b0aacdafbb-kube-api-access-kbb78\") pod \"kube-state-metrics-777cb5bd5d-7l6zt\" (UID: \"d8e74bf0-6863-493e-a79a-88b0aacdafbb\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7l6zt" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.305724 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llbgl\" (UniqueName: \"kubernetes.io/projected/63ea5241-744d-4bce-b31b-3b31dd8ce422-kube-api-access-llbgl\") pod \"openshift-state-metrics-566fddb674-27l8b\" (UID: \"63ea5241-744d-4bce-b31b-3b31dd8ce422\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-27l8b" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.361559 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-27l8b" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.376178 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a-metrics-client-ca\") pod \"node-exporter-84k2b\" (UID: \"e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a\") " pod="openshift-monitoring/node-exporter-84k2b" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.376435 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a-node-exporter-textfile\") pod \"node-exporter-84k2b\" (UID: \"e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a\") " pod="openshift-monitoring/node-exporter-84k2b" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.376462 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a-node-exporter-wtmp\") pod \"node-exporter-84k2b\" (UID: \"e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a\") " pod="openshift-monitoring/node-exporter-84k2b" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.376488 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xxnf\" (UniqueName: \"kubernetes.io/projected/e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a-kube-api-access-5xxnf\") pod \"node-exporter-84k2b\" (UID: \"e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a\") " pod="openshift-monitoring/node-exporter-84k2b" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.376503 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a-sys\") pod \"node-exporter-84k2b\" (UID: \"e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a\") " pod="openshift-monitoring/node-exporter-84k2b" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.376522 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a-root\") pod \"node-exporter-84k2b\" (UID: \"e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a\") " pod="openshift-monitoring/node-exporter-84k2b" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.376544 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-84k2b\" (UID: \"e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a\") " pod="openshift-monitoring/node-exporter-84k2b" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.376587 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a-node-exporter-tls\") pod \"node-exporter-84k2b\" (UID: \"e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a\") " pod="openshift-monitoring/node-exporter-84k2b" Dec 09 12:12:01 crc kubenswrapper[4970]: E1209 12:12:01.376701 4970 secret.go:188] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found Dec 09 12:12:01 crc kubenswrapper[4970]: E1209 12:12:01.376746 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a-node-exporter-tls podName:e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a nodeName:}" failed. No retries permitted until 2025-12-09 12:12:01.876731054 +0000 UTC m=+334.437212095 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a-node-exporter-tls") pod "node-exporter-84k2b" (UID: "e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a") : secret "node-exporter-tls" not found Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.377051 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a-root\") pod \"node-exporter-84k2b\" (UID: \"e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a\") " pod="openshift-monitoring/node-exporter-84k2b" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.377052 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a-sys\") pod \"node-exporter-84k2b\" (UID: \"e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a\") " pod="openshift-monitoring/node-exporter-84k2b" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.377181 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a-node-exporter-wtmp\") pod \"node-exporter-84k2b\" (UID: \"e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a\") " pod="openshift-monitoring/node-exporter-84k2b" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.377287 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7l6zt" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.377404 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a-node-exporter-textfile\") pod \"node-exporter-84k2b\" (UID: \"e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a\") " pod="openshift-monitoring/node-exporter-84k2b" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.377529 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a-metrics-client-ca\") pod \"node-exporter-84k2b\" (UID: \"e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a\") " pod="openshift-monitoring/node-exporter-84k2b" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.383542 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-84k2b\" (UID: \"e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a\") " pod="openshift-monitoring/node-exporter-84k2b" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.397733 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xxnf\" (UniqueName: \"kubernetes.io/projected/e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a-kube-api-access-5xxnf\") pod \"node-exporter-84k2b\" (UID: \"e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a\") " pod="openshift-monitoring/node-exporter-84k2b" Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.836323 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-7l6zt"] Dec 09 12:12:01 crc kubenswrapper[4970]: W1209 12:12:01.850351 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8e74bf0_6863_493e_a79a_88b0aacdafbb.slice/crio-bb75ea915f87fc76922a0c44f730a4ecef7f12fef43c2ac1b8ed3f69c2314577 WatchSource:0}: Error finding container bb75ea915f87fc76922a0c44f730a4ecef7f12fef43c2ac1b8ed3f69c2314577: Status 404 returned error can't find the container with id bb75ea915f87fc76922a0c44f730a4ecef7f12fef43c2ac1b8ed3f69c2314577 Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.881106 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-27l8b"] Dec 09 12:12:01 crc kubenswrapper[4970]: I1209 12:12:01.882860 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a-node-exporter-tls\") pod \"node-exporter-84k2b\" (UID: \"e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a\") " pod="openshift-monitoring/node-exporter-84k2b" Dec 09 12:12:01 crc kubenswrapper[4970]: E1209 12:12:01.883648 4970 secret.go:188] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found Dec 09 12:12:01 crc kubenswrapper[4970]: E1209 12:12:01.883691 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a-node-exporter-tls podName:e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a nodeName:}" failed. No retries permitted until 2025-12-09 12:12:02.883678122 +0000 UTC m=+335.444159163 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a-node-exporter-tls") pod "node-exporter-84k2b" (UID: "e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a") : secret "node-exporter-tls" not found Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.135418 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.137515 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.141055 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.142500 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.142541 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.142939 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-v82jd" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.143043 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.144023 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.144959 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.145423 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.151034 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.158059 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.187356 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-tls-assets\") pod \"alertmanager-main-0\" (UID: \"527ab3cb-c2bf-4904-9ca0-08fd1b20350f\") " pod="openshift-monitoring/alertmanager-main-0" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.187412 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"527ab3cb-c2bf-4904-9ca0-08fd1b20350f\") " pod="openshift-monitoring/alertmanager-main-0" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.187442 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcwfb\" (UniqueName: 
\"kubernetes.io/projected/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-kube-api-access-gcwfb\") pod \"alertmanager-main-0\" (UID: \"527ab3cb-c2bf-4904-9ca0-08fd1b20350f\") " pod="openshift-monitoring/alertmanager-main-0" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.187501 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"527ab3cb-c2bf-4904-9ca0-08fd1b20350f\") " pod="openshift-monitoring/alertmanager-main-0" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.187541 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"527ab3cb-c2bf-4904-9ca0-08fd1b20350f\") " pod="openshift-monitoring/alertmanager-main-0" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.187623 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-config-out\") pod \"alertmanager-main-0\" (UID: \"527ab3cb-c2bf-4904-9ca0-08fd1b20350f\") " pod="openshift-monitoring/alertmanager-main-0" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.187695 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"527ab3cb-c2bf-4904-9ca0-08fd1b20350f\") " pod="openshift-monitoring/alertmanager-main-0" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.187722 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"527ab3cb-c2bf-4904-9ca0-08fd1b20350f\") " pod="openshift-monitoring/alertmanager-main-0" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.187766 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"527ab3cb-c2bf-4904-9ca0-08fd1b20350f\") " pod="openshift-monitoring/alertmanager-main-0" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.187791 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"527ab3cb-c2bf-4904-9ca0-08fd1b20350f\") " pod="openshift-monitoring/alertmanager-main-0" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.187862 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-config-volume\") pod \"alertmanager-main-0\" (UID: \"527ab3cb-c2bf-4904-9ca0-08fd1b20350f\") " 
pod="openshift-monitoring/alertmanager-main-0" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.187906 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-web-config\") pod \"alertmanager-main-0\" (UID: \"527ab3cb-c2bf-4904-9ca0-08fd1b20350f\") " pod="openshift-monitoring/alertmanager-main-0" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.288891 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"527ab3cb-c2bf-4904-9ca0-08fd1b20350f\") " pod="openshift-monitoring/alertmanager-main-0" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.288936 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"527ab3cb-c2bf-4904-9ca0-08fd1b20350f\") " pod="openshift-monitoring/alertmanager-main-0" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.288966 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-config-out\") pod \"alertmanager-main-0\" (UID: \"527ab3cb-c2bf-4904-9ca0-08fd1b20350f\") " pod="openshift-monitoring/alertmanager-main-0" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.288985 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"527ab3cb-c2bf-4904-9ca0-08fd1b20350f\") " pod="openshift-monitoring/alertmanager-main-0" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.289002 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"527ab3cb-c2bf-4904-9ca0-08fd1b20350f\") " pod="openshift-monitoring/alertmanager-main-0" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.289031 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"527ab3cb-c2bf-4904-9ca0-08fd1b20350f\") " pod="openshift-monitoring/alertmanager-main-0" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.289048 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"527ab3cb-c2bf-4904-9ca0-08fd1b20350f\") " pod="openshift-monitoring/alertmanager-main-0" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.289078 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-config-volume\") pod 
\"alertmanager-main-0\" (UID: \"527ab3cb-c2bf-4904-9ca0-08fd1b20350f\") " pod="openshift-monitoring/alertmanager-main-0" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.289102 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-web-config\") pod \"alertmanager-main-0\" (UID: \"527ab3cb-c2bf-4904-9ca0-08fd1b20350f\") " pod="openshift-monitoring/alertmanager-main-0" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.289125 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-tls-assets\") pod \"alertmanager-main-0\" (UID: \"527ab3cb-c2bf-4904-9ca0-08fd1b20350f\") " pod="openshift-monitoring/alertmanager-main-0" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.289145 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"527ab3cb-c2bf-4904-9ca0-08fd1b20350f\") " pod="openshift-monitoring/alertmanager-main-0" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.289163 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcwfb\" (UniqueName: \"kubernetes.io/projected/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-kube-api-access-gcwfb\") pod \"alertmanager-main-0\" (UID: \"527ab3cb-c2bf-4904-9ca0-08fd1b20350f\") " pod="openshift-monitoring/alertmanager-main-0" Dec 09 12:12:02 crc kubenswrapper[4970]: E1209 12:12:02.289232 4970 secret.go:188] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found Dec 09 12:12:02 crc kubenswrapper[4970]: E1209 12:12:02.289344 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-secret-alertmanager-main-tls podName:527ab3cb-c2bf-4904-9ca0-08fd1b20350f nodeName:}" failed. No retries permitted until 2025-12-09 12:12:02.789325142 +0000 UTC m=+335.349806193 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "527ab3cb-c2bf-4904-9ca0-08fd1b20350f") : secret "alertmanager-main-tls" not found Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.289688 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"527ab3cb-c2bf-4904-9ca0-08fd1b20350f\") " pod="openshift-monitoring/alertmanager-main-0" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.289959 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"527ab3cb-c2bf-4904-9ca0-08fd1b20350f\") " pod="openshift-monitoring/alertmanager-main-0" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.290725 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"527ab3cb-c2bf-4904-9ca0-08fd1b20350f\") " pod="openshift-monitoring/alertmanager-main-0" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.294897 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"527ab3cb-c2bf-4904-9ca0-08fd1b20350f\") " pod="openshift-monitoring/alertmanager-main-0" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.294930 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"527ab3cb-c2bf-4904-9ca0-08fd1b20350f\") " pod="openshift-monitoring/alertmanager-main-0" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.295064 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-config-out\") pod \"alertmanager-main-0\" (UID: \"527ab3cb-c2bf-4904-9ca0-08fd1b20350f\") " pod="openshift-monitoring/alertmanager-main-0" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.295911 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"527ab3cb-c2bf-4904-9ca0-08fd1b20350f\") " pod="openshift-monitoring/alertmanager-main-0" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.296714 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-web-config\") pod \"alertmanager-main-0\" (UID: \"527ab3cb-c2bf-4904-9ca0-08fd1b20350f\") " pod="openshift-monitoring/alertmanager-main-0" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.296743 4970 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-tls-assets\") pod \"alertmanager-main-0\" (UID: \"527ab3cb-c2bf-4904-9ca0-08fd1b20350f\") " pod="openshift-monitoring/alertmanager-main-0" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.298010 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-config-volume\") pod \"alertmanager-main-0\" (UID: \"527ab3cb-c2bf-4904-9ca0-08fd1b20350f\") " pod="openshift-monitoring/alertmanager-main-0" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.309028 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcwfb\" (UniqueName: \"kubernetes.io/projected/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-kube-api-access-gcwfb\") pod \"alertmanager-main-0\" (UID: \"527ab3cb-c2bf-4904-9ca0-08fd1b20350f\") " pod="openshift-monitoring/alertmanager-main-0" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.527642 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7l6zt" event={"ID":"d8e74bf0-6863-493e-a79a-88b0aacdafbb","Type":"ContainerStarted","Data":"bb75ea915f87fc76922a0c44f730a4ecef7f12fef43c2ac1b8ed3f69c2314577"} Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.538485 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-27l8b" event={"ID":"63ea5241-744d-4bce-b31b-3b31dd8ce422","Type":"ContainerStarted","Data":"a5a64490546324c462addafee94ff382c3ff7885e91a95d2f7d02ea31ad4de2d"} Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.538609 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-27l8b" event={"ID":"63ea5241-744d-4bce-b31b-3b31dd8ce422","Type":"ContainerStarted","Data":"6ba8857ae930fb6f3ea157f8142c0db98fdf2dc97f4bc0aa9835cf65ac6aa533"} Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.538628 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-27l8b" event={"ID":"63ea5241-744d-4bce-b31b-3b31dd8ce422","Type":"ContainerStarted","Data":"3bcb94487614c41eb531983807309a641ab8d820fa3caf2eaa02f029e389dd0f"} Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.799928 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"527ab3cb-c2bf-4904-9ca0-08fd1b20350f\") " pod="openshift-monitoring/alertmanager-main-0" Dec 09 12:12:02 crc kubenswrapper[4970]: E1209 12:12:02.800100 4970 secret.go:188] Couldn't get secret openshift-monitoring/alertmanager-main-tls: secret "alertmanager-main-tls" not found Dec 09 12:12:02 crc kubenswrapper[4970]: E1209 12:12:02.800194 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-secret-alertmanager-main-tls podName:527ab3cb-c2bf-4904-9ca0-08fd1b20350f nodeName:}" failed. No retries permitted until 2025-12-09 12:12:03.800176388 +0000 UTC m=+336.360657439 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "527ab3cb-c2bf-4904-9ca0-08fd1b20350f") : secret "alertmanager-main-tls" not found Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.901441 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a-node-exporter-tls\") pod \"node-exporter-84k2b\" (UID: \"e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a\") " pod="openshift-monitoring/node-exporter-84k2b" Dec 09 12:12:02 crc kubenswrapper[4970]: I1209 12:12:02.905472 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a-node-exporter-tls\") pod \"node-exporter-84k2b\" (UID: \"e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a\") " pod="openshift-monitoring/node-exporter-84k2b" Dec 09 12:12:03 crc kubenswrapper[4970]: I1209 12:12:03.032959 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-79b4c8877b-zqvdr"] Dec 09 12:12:03 crc kubenswrapper[4970]: I1209 12:12:03.034781 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-79b4c8877b-zqvdr" Dec 09 12:12:03 crc kubenswrapper[4970]: I1209 12:12:03.037774 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Dec 09 12:12:03 crc kubenswrapper[4970]: I1209 12:12:03.037804 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Dec 09 12:12:03 crc kubenswrapper[4970]: I1209 12:12:03.038048 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-2a07o99rs54im" Dec 09 12:12:03 crc kubenswrapper[4970]: I1209 12:12:03.038050 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-xtxhx" Dec 09 12:12:03 crc kubenswrapper[4970]: I1209 12:12:03.038125 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Dec 09 12:12:03 crc kubenswrapper[4970]: I1209 12:12:03.038165 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Dec 09 12:12:03 crc kubenswrapper[4970]: I1209 12:12:03.038210 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Dec 09 12:12:03 crc kubenswrapper[4970]: I1209 12:12:03.051504 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-79b4c8877b-zqvdr"] Dec 09 12:12:03 crc kubenswrapper[4970]: I1209 12:12:03.104631 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8e31e56e-ed2d-450f-ba7f-aaf3cce168ba-secret-grpc-tls\") pod \"thanos-querier-79b4c8877b-zqvdr\" (UID: \"8e31e56e-ed2d-450f-ba7f-aaf3cce168ba\") " pod="openshift-monitoring/thanos-querier-79b4c8877b-zqvdr" Dec 09 12:12:03 crc kubenswrapper[4970]: I1209 12:12:03.104699 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" 
(UniqueName: \"kubernetes.io/secret/8e31e56e-ed2d-450f-ba7f-aaf3cce168ba-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-79b4c8877b-zqvdr\" (UID: \"8e31e56e-ed2d-450f-ba7f-aaf3cce168ba\") " pod="openshift-monitoring/thanos-querier-79b4c8877b-zqvdr" Dec 09 12:12:03 crc kubenswrapper[4970]: I1209 12:12:03.104875 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8e31e56e-ed2d-450f-ba7f-aaf3cce168ba-metrics-client-ca\") pod \"thanos-querier-79b4c8877b-zqvdr\" (UID: \"8e31e56e-ed2d-450f-ba7f-aaf3cce168ba\") " pod="openshift-monitoring/thanos-querier-79b4c8877b-zqvdr" Dec 09 12:12:03 crc kubenswrapper[4970]: I1209 12:12:03.104938 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8e31e56e-ed2d-450f-ba7f-aaf3cce168ba-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-79b4c8877b-zqvdr\" (UID: \"8e31e56e-ed2d-450f-ba7f-aaf3cce168ba\") " pod="openshift-monitoring/thanos-querier-79b4c8877b-zqvdr" Dec 09 12:12:03 crc kubenswrapper[4970]: I1209 12:12:03.105003 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8e31e56e-ed2d-450f-ba7f-aaf3cce168ba-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-79b4c8877b-zqvdr\" (UID: \"8e31e56e-ed2d-450f-ba7f-aaf3cce168ba\") " pod="openshift-monitoring/thanos-querier-79b4c8877b-zqvdr" Dec 09 12:12:03 crc kubenswrapper[4970]: I1209 12:12:03.105051 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8e31e56e-ed2d-450f-ba7f-aaf3cce168ba-secret-thanos-querier-tls\") pod \"thanos-querier-79b4c8877b-zqvdr\" (UID: \"8e31e56e-ed2d-450f-ba7f-aaf3cce168ba\") " pod="openshift-monitoring/thanos-querier-79b4c8877b-zqvdr" Dec 09 12:12:03 crc kubenswrapper[4970]: I1209 12:12:03.105080 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5sk7\" (UniqueName: \"kubernetes.io/projected/8e31e56e-ed2d-450f-ba7f-aaf3cce168ba-kube-api-access-w5sk7\") pod \"thanos-querier-79b4c8877b-zqvdr\" (UID: \"8e31e56e-ed2d-450f-ba7f-aaf3cce168ba\") " pod="openshift-monitoring/thanos-querier-79b4c8877b-zqvdr" Dec 09 12:12:03 crc kubenswrapper[4970]: I1209 12:12:03.105107 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8e31e56e-ed2d-450f-ba7f-aaf3cce168ba-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-79b4c8877b-zqvdr\" (UID: \"8e31e56e-ed2d-450f-ba7f-aaf3cce168ba\") " pod="openshift-monitoring/thanos-querier-79b4c8877b-zqvdr" Dec 09 12:12:03 crc kubenswrapper[4970]: I1209 12:12:03.198391 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-84k2b" Dec 09 12:12:03 crc kubenswrapper[4970]: I1209 12:12:03.207671 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8e31e56e-ed2d-450f-ba7f-aaf3cce168ba-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-79b4c8877b-zqvdr\" (UID: \"8e31e56e-ed2d-450f-ba7f-aaf3cce168ba\") " pod="openshift-monitoring/thanos-querier-79b4c8877b-zqvdr" Dec 09 12:12:03 crc kubenswrapper[4970]: I1209 12:12:03.207742 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8e31e56e-ed2d-450f-ba7f-aaf3cce168ba-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-79b4c8877b-zqvdr\" (UID: \"8e31e56e-ed2d-450f-ba7f-aaf3cce168ba\") " pod="openshift-monitoring/thanos-querier-79b4c8877b-zqvdr" Dec 09 12:12:03 crc kubenswrapper[4970]: I1209 12:12:03.207784 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8e31e56e-ed2d-450f-ba7f-aaf3cce168ba-secret-thanos-querier-tls\") pod \"thanos-querier-79b4c8877b-zqvdr\" (UID: \"8e31e56e-ed2d-450f-ba7f-aaf3cce168ba\") " pod="openshift-monitoring/thanos-querier-79b4c8877b-zqvdr" Dec 09 12:12:03 crc kubenswrapper[4970]: I1209 12:12:03.207806 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5sk7\" (UniqueName: \"kubernetes.io/projected/8e31e56e-ed2d-450f-ba7f-aaf3cce168ba-kube-api-access-w5sk7\") pod \"thanos-querier-79b4c8877b-zqvdr\" (UID: \"8e31e56e-ed2d-450f-ba7f-aaf3cce168ba\") " pod="openshift-monitoring/thanos-querier-79b4c8877b-zqvdr" Dec 09 12:12:03 crc kubenswrapper[4970]: I1209 12:12:03.207852 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8e31e56e-ed2d-450f-ba7f-aaf3cce168ba-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-79b4c8877b-zqvdr\" (UID: \"8e31e56e-ed2d-450f-ba7f-aaf3cce168ba\") " pod="openshift-monitoring/thanos-querier-79b4c8877b-zqvdr" Dec 09 12:12:03 crc kubenswrapper[4970]: I1209 12:12:03.208215 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8e31e56e-ed2d-450f-ba7f-aaf3cce168ba-secret-grpc-tls\") pod \"thanos-querier-79b4c8877b-zqvdr\" (UID: \"8e31e56e-ed2d-450f-ba7f-aaf3cce168ba\") " pod="openshift-monitoring/thanos-querier-79b4c8877b-zqvdr" Dec 09 12:12:03 crc kubenswrapper[4970]: I1209 12:12:03.208668 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8e31e56e-ed2d-450f-ba7f-aaf3cce168ba-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-79b4c8877b-zqvdr\" (UID: \"8e31e56e-ed2d-450f-ba7f-aaf3cce168ba\") " pod="openshift-monitoring/thanos-querier-79b4c8877b-zqvdr" Dec 09 12:12:03 crc kubenswrapper[4970]: I1209 12:12:03.208997 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8e31e56e-ed2d-450f-ba7f-aaf3cce168ba-metrics-client-ca\") pod \"thanos-querier-79b4c8877b-zqvdr\" (UID: \"8e31e56e-ed2d-450f-ba7f-aaf3cce168ba\") " 
pod="openshift-monitoring/thanos-querier-79b4c8877b-zqvdr" Dec 09 12:12:03 crc kubenswrapper[4970]: I1209 12:12:03.209827 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8e31e56e-ed2d-450f-ba7f-aaf3cce168ba-metrics-client-ca\") pod \"thanos-querier-79b4c8877b-zqvdr\" (UID: \"8e31e56e-ed2d-450f-ba7f-aaf3cce168ba\") " pod="openshift-monitoring/thanos-querier-79b4c8877b-zqvdr" Dec 09 12:12:03 crc kubenswrapper[4970]: I1209 12:12:03.211170 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/8e31e56e-ed2d-450f-ba7f-aaf3cce168ba-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-79b4c8877b-zqvdr\" (UID: \"8e31e56e-ed2d-450f-ba7f-aaf3cce168ba\") " pod="openshift-monitoring/thanos-querier-79b4c8877b-zqvdr" Dec 09 12:12:03 crc kubenswrapper[4970]: I1209 12:12:03.211853 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8e31e56e-ed2d-450f-ba7f-aaf3cce168ba-secret-grpc-tls\") pod \"thanos-querier-79b4c8877b-zqvdr\" (UID: \"8e31e56e-ed2d-450f-ba7f-aaf3cce168ba\") " pod="openshift-monitoring/thanos-querier-79b4c8877b-zqvdr" Dec 09 12:12:03 crc kubenswrapper[4970]: I1209 12:12:03.212276 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/8e31e56e-ed2d-450f-ba7f-aaf3cce168ba-secret-thanos-querier-tls\") pod \"thanos-querier-79b4c8877b-zqvdr\" (UID: \"8e31e56e-ed2d-450f-ba7f-aaf3cce168ba\") " pod="openshift-monitoring/thanos-querier-79b4c8877b-zqvdr" Dec 09 12:12:03 crc kubenswrapper[4970]: I1209 12:12:03.213489 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8e31e56e-ed2d-450f-ba7f-aaf3cce168ba-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-79b4c8877b-zqvdr\" (UID: \"8e31e56e-ed2d-450f-ba7f-aaf3cce168ba\") " pod="openshift-monitoring/thanos-querier-79b4c8877b-zqvdr" Dec 09 12:12:03 crc kubenswrapper[4970]: I1209 12:12:03.216161 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8e31e56e-ed2d-450f-ba7f-aaf3cce168ba-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-79b4c8877b-zqvdr\" (UID: \"8e31e56e-ed2d-450f-ba7f-aaf3cce168ba\") " pod="openshift-monitoring/thanos-querier-79b4c8877b-zqvdr" Dec 09 12:12:03 crc kubenswrapper[4970]: I1209 12:12:03.222771 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/8e31e56e-ed2d-450f-ba7f-aaf3cce168ba-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-79b4c8877b-zqvdr\" (UID: \"8e31e56e-ed2d-450f-ba7f-aaf3cce168ba\") " pod="openshift-monitoring/thanos-querier-79b4c8877b-zqvdr" Dec 09 12:12:03 crc kubenswrapper[4970]: I1209 12:12:03.227998 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5sk7\" (UniqueName: \"kubernetes.io/projected/8e31e56e-ed2d-450f-ba7f-aaf3cce168ba-kube-api-access-w5sk7\") pod \"thanos-querier-79b4c8877b-zqvdr\" (UID: \"8e31e56e-ed2d-450f-ba7f-aaf3cce168ba\") " pod="openshift-monitoring/thanos-querier-79b4c8877b-zqvdr" Dec 09 12:12:03 crc kubenswrapper[4970]: I1209 12:12:03.367467 
4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-79b4c8877b-zqvdr"
Dec 09 12:12:03 crc kubenswrapper[4970]: I1209 12:12:03.815917 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"527ab3cb-c2bf-4904-9ca0-08fd1b20350f\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 09 12:12:03 crc kubenswrapper[4970]: I1209 12:12:03.821271 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/527ab3cb-c2bf-4904-9ca0-08fd1b20350f-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"527ab3cb-c2bf-4904-9ca0-08fd1b20350f\") " pod="openshift-monitoring/alertmanager-main-0"
Dec 09 12:12:03 crc kubenswrapper[4970]: I1209 12:12:03.952596 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Dec 09 12:12:06 crc kubenswrapper[4970]: I1209 12:12:06.354866 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-64c6668cd-k7dhn"]
Dec 09 12:12:06 crc kubenswrapper[4970]: I1209 12:12:06.356964 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-64c6668cd-k7dhn"
Dec 09 12:12:06 crc kubenswrapper[4970]: I1209 12:12:06.369557 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Dec 09 12:12:06 crc kubenswrapper[4970]: I1209 12:12:06.369994 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Dec 09 12:12:06 crc kubenswrapper[4970]: I1209 12:12:06.370089 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-fdepiss7elm04"
Dec 09 12:12:06 crc kubenswrapper[4970]: I1209 12:12:06.371033 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-29w9d"
Dec 09 12:12:06 crc kubenswrapper[4970]: I1209 12:12:06.372022 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Dec 09 12:12:06 crc kubenswrapper[4970]: I1209 12:12:06.372062 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Dec 09 12:12:06 crc kubenswrapper[4970]: I1209 12:12:06.375010 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-64c6668cd-k7dhn"]
Dec 09 12:12:06 crc kubenswrapper[4970]: I1209 12:12:06.450031 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/088add4e-94cf-435b-8190-75d69aa580ea-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-64c6668cd-k7dhn\" (UID: \"088add4e-94cf-435b-8190-75d69aa580ea\") " pod="openshift-monitoring/metrics-server-64c6668cd-k7dhn"
Dec 09 12:12:06 crc kubenswrapper[4970]: I1209 12:12:06.450082 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/088add4e-94cf-435b-8190-75d69aa580ea-secret-metrics-client-certs\") pod \"metrics-server-64c6668cd-k7dhn\" (UID: \"088add4e-94cf-435b-8190-75d69aa580ea\") " pod="openshift-monitoring/metrics-server-64c6668cd-k7dhn"
Dec 09 12:12:06 crc kubenswrapper[4970]: I1209 12:12:06.450114 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/088add4e-94cf-435b-8190-75d69aa580ea-secret-metrics-server-tls\") pod \"metrics-server-64c6668cd-k7dhn\" (UID: \"088add4e-94cf-435b-8190-75d69aa580ea\") " pod="openshift-monitoring/metrics-server-64c6668cd-k7dhn"
Dec 09 12:12:06 crc kubenswrapper[4970]: I1209 12:12:06.450149 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65xbr\" (UniqueName: \"kubernetes.io/projected/088add4e-94cf-435b-8190-75d69aa580ea-kube-api-access-65xbr\") pod \"metrics-server-64c6668cd-k7dhn\" (UID: \"088add4e-94cf-435b-8190-75d69aa580ea\") " pod="openshift-monitoring/metrics-server-64c6668cd-k7dhn"
Dec 09 12:12:06 crc kubenswrapper[4970]: I1209 12:12:06.450173 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/088add4e-94cf-435b-8190-75d69aa580ea-audit-log\") pod \"metrics-server-64c6668cd-k7dhn\" (UID: \"088add4e-94cf-435b-8190-75d69aa580ea\") " pod="openshift-monitoring/metrics-server-64c6668cd-k7dhn"
Dec 09 12:12:06 crc kubenswrapper[4970]: I1209 12:12:06.450196 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/088add4e-94cf-435b-8190-75d69aa580ea-metrics-server-audit-profiles\") pod \"metrics-server-64c6668cd-k7dhn\" (UID: \"088add4e-94cf-435b-8190-75d69aa580ea\") " pod="openshift-monitoring/metrics-server-64c6668cd-k7dhn"
Dec 09 12:12:06 crc kubenswrapper[4970]: I1209 12:12:06.450238 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/088add4e-94cf-435b-8190-75d69aa580ea-client-ca-bundle\") pod \"metrics-server-64c6668cd-k7dhn\" (UID: \"088add4e-94cf-435b-8190-75d69aa580ea\") " pod="openshift-monitoring/metrics-server-64c6668cd-k7dhn"
Dec 09 12:12:06 crc kubenswrapper[4970]: W1209 12:12:06.539476 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6d403d5_f7bb_4f3f_a5ae_24f4af216e7a.slice/crio-fed32cd71d2c7487bdb4f4d3a94f8647cbdcc2e8df1b8e742368ed623db93ad3 WatchSource:0}: Error finding container fed32cd71d2c7487bdb4f4d3a94f8647cbdcc2e8df1b8e742368ed623db93ad3: Status 404 returned error can't find the container with id fed32cd71d2c7487bdb4f4d3a94f8647cbdcc2e8df1b8e742368ed623db93ad3
Dec 09 12:12:06 crc kubenswrapper[4970]: I1209 12:12:06.551127 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/088add4e-94cf-435b-8190-75d69aa580ea-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-64c6668cd-k7dhn\" (UID: \"088add4e-94cf-435b-8190-75d69aa580ea\") " pod="openshift-monitoring/metrics-server-64c6668cd-k7dhn"
Dec 09 12:12:06 crc kubenswrapper[4970]: I1209 12:12:06.551190 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/088add4e-94cf-435b-8190-75d69aa580ea-secret-metrics-client-certs\") pod \"metrics-server-64c6668cd-k7dhn\" (UID: \"088add4e-94cf-435b-8190-75d69aa580ea\") " pod="openshift-monitoring/metrics-server-64c6668cd-k7dhn"
Dec 09 12:12:06 crc kubenswrapper[4970]: I1209 12:12:06.551230 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/088add4e-94cf-435b-8190-75d69aa580ea-secret-metrics-server-tls\") pod \"metrics-server-64c6668cd-k7dhn\" (UID: \"088add4e-94cf-435b-8190-75d69aa580ea\") " pod="openshift-monitoring/metrics-server-64c6668cd-k7dhn"
Dec 09 12:12:06 crc kubenswrapper[4970]: I1209 12:12:06.551290 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65xbr\" (UniqueName: \"kubernetes.io/projected/088add4e-94cf-435b-8190-75d69aa580ea-kube-api-access-65xbr\") pod \"metrics-server-64c6668cd-k7dhn\" (UID: \"088add4e-94cf-435b-8190-75d69aa580ea\") " pod="openshift-monitoring/metrics-server-64c6668cd-k7dhn"
Dec 09 12:12:06 crc kubenswrapper[4970]: I1209 12:12:06.551319 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/088add4e-94cf-435b-8190-75d69aa580ea-audit-log\") pod \"metrics-server-64c6668cd-k7dhn\" (UID: \"088add4e-94cf-435b-8190-75d69aa580ea\") " pod="openshift-monitoring/metrics-server-64c6668cd-k7dhn"
Dec 09 12:12:06 crc kubenswrapper[4970]: I1209 12:12:06.551342 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/088add4e-94cf-435b-8190-75d69aa580ea-metrics-server-audit-profiles\") pod \"metrics-server-64c6668cd-k7dhn\" (UID: \"088add4e-94cf-435b-8190-75d69aa580ea\") " pod="openshift-monitoring/metrics-server-64c6668cd-k7dhn"
Dec 09 12:12:06 crc kubenswrapper[4970]: I1209 12:12:06.551386 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/088add4e-94cf-435b-8190-75d69aa580ea-client-ca-bundle\") pod \"metrics-server-64c6668cd-k7dhn\" (UID: \"088add4e-94cf-435b-8190-75d69aa580ea\") " pod="openshift-monitoring/metrics-server-64c6668cd-k7dhn"
Dec 09 12:12:06 crc kubenswrapper[4970]: I1209 12:12:06.552410 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/088add4e-94cf-435b-8190-75d69aa580ea-audit-log\") pod \"metrics-server-64c6668cd-k7dhn\" (UID: \"088add4e-94cf-435b-8190-75d69aa580ea\") " pod="openshift-monitoring/metrics-server-64c6668cd-k7dhn"
Dec 09 12:12:06 crc kubenswrapper[4970]: I1209 12:12:06.552795 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/088add4e-94cf-435b-8190-75d69aa580ea-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-64c6668cd-k7dhn\" (UID: \"088add4e-94cf-435b-8190-75d69aa580ea\") " pod="openshift-monitoring/metrics-server-64c6668cd-k7dhn"
Dec 09 12:12:06 crc kubenswrapper[4970]: I1209 12:12:06.555051 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/088add4e-94cf-435b-8190-75d69aa580ea-metrics-server-audit-profiles\") pod \"metrics-server-64c6668cd-k7dhn\" (UID: \"088add4e-94cf-435b-8190-75d69aa580ea\") " pod="openshift-monitoring/metrics-server-64c6668cd-k7dhn"
Dec 09 12:12:06 crc kubenswrapper[4970]: I1209 12:12:06.562963 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/088add4e-94cf-435b-8190-75d69aa580ea-secret-metrics-server-tls\") pod \"metrics-server-64c6668cd-k7dhn\" (UID: \"088add4e-94cf-435b-8190-75d69aa580ea\") " pod="openshift-monitoring/metrics-server-64c6668cd-k7dhn"
Dec 09 12:12:06 crc kubenswrapper[4970]: I1209 12:12:06.563299 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/088add4e-94cf-435b-8190-75d69aa580ea-secret-metrics-client-certs\") pod \"metrics-server-64c6668cd-k7dhn\" (UID: \"088add4e-94cf-435b-8190-75d69aa580ea\") " pod="openshift-monitoring/metrics-server-64c6668cd-k7dhn"
Dec 09 12:12:06 crc kubenswrapper[4970]: I1209 12:12:06.565478 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/088add4e-94cf-435b-8190-75d69aa580ea-client-ca-bundle\") pod \"metrics-server-64c6668cd-k7dhn\" (UID: \"088add4e-94cf-435b-8190-75d69aa580ea\") " pod="openshift-monitoring/metrics-server-64c6668cd-k7dhn"
Dec 09 12:12:06 crc kubenswrapper[4970]: I1209 12:12:06.566904 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65xbr\" (UniqueName: \"kubernetes.io/projected/088add4e-94cf-435b-8190-75d69aa580ea-kube-api-access-65xbr\") pod \"metrics-server-64c6668cd-k7dhn\" (UID: \"088add4e-94cf-435b-8190-75d69aa580ea\") " pod="openshift-monitoring/metrics-server-64c6668cd-k7dhn"
Dec 09 12:12:06 crc kubenswrapper[4970]: I1209 12:12:06.580201 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-84k2b" event={"ID":"e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a","Type":"ContainerStarted","Data":"fed32cd71d2c7487bdb4f4d3a94f8647cbdcc2e8df1b8e742368ed623db93ad3"}
Dec 09 12:12:06 crc kubenswrapper[4970]: I1209 12:12:06.706333 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-64c6668cd-k7dhn"
Dec 09 12:12:06 crc kubenswrapper[4970]: I1209 12:12:06.947623 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-79b4c8877b-zqvdr"]
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.029028 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.206683 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-64c6668cd-k7dhn"]
Dec 09 12:12:07 crc kubenswrapper[4970]: W1209 12:12:07.213000 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod088add4e_94cf_435b_8190_75d69aa580ea.slice/crio-c703f0fe694b08bc79e4a3711c3c2d17769a7523d7963932ae900276f98a2577 WatchSource:0}: Error finding container c703f0fe694b08bc79e4a3711c3c2d17769a7523d7963932ae900276f98a2577: Status 404 returned error can't find the container with id c703f0fe694b08bc79e4a3711c3c2d17769a7523d7963932ae900276f98a2577
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.390801 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.393009 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.395166 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.395414 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.395532 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.396333 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-7sctdp1l8d1lq"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.396384 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.396399 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.398497 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.398873 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.399699 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.400094 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-hgpxp"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.400153 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.424505 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.425782 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.426871 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.575453 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6a0407cf-adba-4186-83b2-a32df6b4fc82-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.575528 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b74w2\" (UniqueName: \"kubernetes.io/projected/6a0407cf-adba-4186-83b2-a32df6b4fc82-kube-api-access-b74w2\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.575549 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6a0407cf-adba-4186-83b2-a32df6b4fc82-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.575595 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6a0407cf-adba-4186-83b2-a32df6b4fc82-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.575612 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6a0407cf-adba-4186-83b2-a32df6b4fc82-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.575628 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6a0407cf-adba-4186-83b2-a32df6b4fc82-web-config\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.575647 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/6a0407cf-adba-4186-83b2-a32df6b4fc82-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.575669 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a0407cf-adba-4186-83b2-a32df6b4fc82-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.575685 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/6a0407cf-adba-4186-83b2-a32df6b4fc82-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.575704 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6a0407cf-adba-4186-83b2-a32df6b4fc82-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.575721 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/6a0407cf-adba-4186-83b2-a32df6b4fc82-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.575738 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6a0407cf-adba-4186-83b2-a32df6b4fc82-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.575768 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6a0407cf-adba-4186-83b2-a32df6b4fc82-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.575785 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6a0407cf-adba-4186-83b2-a32df6b4fc82-config\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.575803 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a0407cf-adba-4186-83b2-a32df6b4fc82-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.575829 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a0407cf-adba-4186-83b2-a32df6b4fc82-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.575848 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6a0407cf-adba-4186-83b2-a32df6b4fc82-config-out\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.575864 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6a0407cf-adba-4186-83b2-a32df6b4fc82-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.587886 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-27l8b" event={"ID":"63ea5241-744d-4bce-b31b-3b31dd8ce422","Type":"ContainerStarted","Data":"77fc06a08f31248611f1a31af9b987c956a9f770298d399f9e5f39091d15b86e"}
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.592381 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7l6zt" event={"ID":"d8e74bf0-6863-493e-a79a-88b0aacdafbb","Type":"ContainerStarted","Data":"072a24e73b11296ba9e9042636291adeee71b5dbe4815b0f12a2d57895df4852"}
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.592415 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7l6zt" event={"ID":"d8e74bf0-6863-493e-a79a-88b0aacdafbb","Type":"ContainerStarted","Data":"877d72186fbef913bc512db210add263ef96b97e4a3a1aaf53eeec4e5befc096"}
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.592425 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7l6zt" event={"ID":"d8e74bf0-6863-493e-a79a-88b0aacdafbb","Type":"ContainerStarted","Data":"a75f0847b187768bacc7999c777a29edfbe8cdf7f93bcd036a3c300e2634aa3c"}
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.595349 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"527ab3cb-c2bf-4904-9ca0-08fd1b20350f","Type":"ContainerStarted","Data":"de996f0c4b60bd73e61be6dfd29593d0fdecd688299a83e607e71ad86bab5ab7"}
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.597642 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-64c6668cd-k7dhn" event={"ID":"088add4e-94cf-435b-8190-75d69aa580ea","Type":"ContainerStarted","Data":"c703f0fe694b08bc79e4a3711c3c2d17769a7523d7963932ae900276f98a2577"}
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.604035 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-79b4c8877b-zqvdr" event={"ID":"8e31e56e-ed2d-450f-ba7f-aaf3cce168ba","Type":"ContainerStarted","Data":"e2f28cf535656b8cafe74a7263cfd52c70ca540352a4a96e680caf41348eaedd"}
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.606995 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-566fddb674-27l8b" podStartSLOduration=2.323170995 podStartE2EDuration="6.606975183s" podCreationTimestamp="2025-12-09 12:12:01 +0000 UTC" firstStartedPulling="2025-12-09 12:12:02.251689688 +0000 UTC m=+334.812170739" lastFinishedPulling="2025-12-09 12:12:06.535493876 +0000 UTC m=+339.095974927" observedRunningTime="2025-12-09 12:12:07.604586516 +0000 UTC m=+340.165067597" watchObservedRunningTime="2025-12-09 12:12:07.606975183 +0000 UTC m=+340.167456234"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.631717 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-7l6zt" podStartSLOduration=1.9538194610000001 podStartE2EDuration="6.631696915s" podCreationTimestamp="2025-12-09 12:12:01 +0000 UTC" firstStartedPulling="2025-12-09 12:12:01.856613874 +0000 UTC m=+334.417094935" lastFinishedPulling="2025-12-09 12:12:06.534491338 +0000 UTC m=+339.094972389" observedRunningTime="2025-12-09 12:12:07.627360874 +0000 UTC m=+340.187841935" watchObservedRunningTime="2025-12-09 12:12:07.631696915 +0000 UTC m=+340.192177966"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.677531 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a0407cf-adba-4186-83b2-a32df6b4fc82-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.677595 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6a0407cf-adba-4186-83b2-a32df6b4fc82-config-out\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.677789 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6a0407cf-adba-4186-83b2-a32df6b4fc82-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.677822 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6a0407cf-adba-4186-83b2-a32df6b4fc82-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.677921 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b74w2\" (UniqueName: \"kubernetes.io/projected/6a0407cf-adba-4186-83b2-a32df6b4fc82-kube-api-access-b74w2\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.677943 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6a0407cf-adba-4186-83b2-a32df6b4fc82-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.677969 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6a0407cf-adba-4186-83b2-a32df6b4fc82-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.677990 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6a0407cf-adba-4186-83b2-a32df6b4fc82-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.678011 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6a0407cf-adba-4186-83b2-a32df6b4fc82-web-config\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.678044 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/6a0407cf-adba-4186-83b2-a32df6b4fc82-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.678077 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a0407cf-adba-4186-83b2-a32df6b4fc82-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.678098 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/6a0407cf-adba-4186-83b2-a32df6b4fc82-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.678121 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6a0407cf-adba-4186-83b2-a32df6b4fc82-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.678145 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/6a0407cf-adba-4186-83b2-a32df6b4fc82-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.678184 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6a0407cf-adba-4186-83b2-a32df6b4fc82-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.678268 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6a0407cf-adba-4186-83b2-a32df6b4fc82-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.678292 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6a0407cf-adba-4186-83b2-a32df6b4fc82-config\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.678314 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a0407cf-adba-4186-83b2-a32df6b4fc82-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.679106 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a0407cf-adba-4186-83b2-a32df6b4fc82-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.679615 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a0407cf-adba-4186-83b2-a32df6b4fc82-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.680167 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/6a0407cf-adba-4186-83b2-a32df6b4fc82-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.681124 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a0407cf-adba-4186-83b2-a32df6b4fc82-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.685027 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6a0407cf-adba-4186-83b2-a32df6b4fc82-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.685069 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6a0407cf-adba-4186-83b2-a32df6b4fc82-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.685955 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/6a0407cf-adba-4186-83b2-a32df6b4fc82-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.688059 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/6a0407cf-adba-4186-83b2-a32df6b4fc82-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.688306 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/6a0407cf-adba-4186-83b2-a32df6b4fc82-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.688965 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6a0407cf-adba-4186-83b2-a32df6b4fc82-config-out\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.689480 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6a0407cf-adba-4186-83b2-a32df6b4fc82-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.689995 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6a0407cf-adba-4186-83b2-a32df6b4fc82-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.690840 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6a0407cf-adba-4186-83b2-a32df6b4fc82-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.690974 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/6a0407cf-adba-4186-83b2-a32df6b4fc82-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.692814 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6a0407cf-adba-4186-83b2-a32df6b4fc82-web-config\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.692959 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6a0407cf-adba-4186-83b2-a32df6b4fc82-config\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.701399 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b74w2\" (UniqueName: \"kubernetes.io/projected/6a0407cf-adba-4186-83b2-a32df6b4fc82-kube-api-access-b74w2\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.707917 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/6a0407cf-adba-4186-83b2-a32df6b4fc82-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"6a0407cf-adba-4186-83b2-a32df6b4fc82\") " pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.728277 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.842962 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-56499464c7-2qkcl"]
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.844512 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-56499464c7-2qkcl"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.850377 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.850509 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.855359 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-56499464c7-2qkcl"]
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.888546 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/f67d07f0-b9f1-4704-be66-9c152defe602-monitoring-plugin-cert\") pod \"monitoring-plugin-56499464c7-2qkcl\" (UID: \"f67d07f0-b9f1-4704-be66-9c152defe602\") " pod="openshift-monitoring/monitoring-plugin-56499464c7-2qkcl"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.989399 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/f67d07f0-b9f1-4704-be66-9c152defe602-monitoring-plugin-cert\") pod \"monitoring-plugin-56499464c7-2qkcl\" (UID: \"f67d07f0-b9f1-4704-be66-9c152defe602\") " pod="openshift-monitoring/monitoring-plugin-56499464c7-2qkcl"
Dec 09 12:12:07 crc kubenswrapper[4970]: I1209 12:12:07.995128 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/f67d07f0-b9f1-4704-be66-9c152defe602-monitoring-plugin-cert\") pod \"monitoring-plugin-56499464c7-2qkcl\" (UID: \"f67d07f0-b9f1-4704-be66-9c152defe602\") " pod="openshift-monitoring/monitoring-plugin-56499464c7-2qkcl"
Dec 09 12:12:08 crc kubenswrapper[4970]: I1209 12:12:08.216531 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-56499464c7-2qkcl"
Dec 09 12:12:08 crc kubenswrapper[4970]: I1209 12:12:08.303110 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Dec 09 12:12:08 crc kubenswrapper[4970]: W1209 12:12:08.319937 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a0407cf_adba_4186_83b2_a32df6b4fc82.slice/crio-84e32b7b794e01010e8dffb07d5114aee20d26175cd50622426ab2606b7e3479 WatchSource:0}: Error finding container 84e32b7b794e01010e8dffb07d5114aee20d26175cd50622426ab2606b7e3479: Status 404 returned error can't find the container with id 84e32b7b794e01010e8dffb07d5114aee20d26175cd50622426ab2606b7e3479
Dec 09 12:12:08 crc kubenswrapper[4970]: I1209 12:12:08.617542 4970 generic.go:334] "Generic (PLEG): container finished" podID="e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a" containerID="e0d2c862f264e89fa9e7d4ba1cbc77eaca9a10a4fb36532d9d10fb86485a9edd" exitCode=0
Dec 09 12:12:08 crc kubenswrapper[4970]: I1209 12:12:08.617647 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-84k2b" event={"ID":"e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a","Type":"ContainerDied","Data":"e0d2c862f264e89fa9e7d4ba1cbc77eaca9a10a4fb36532d9d10fb86485a9edd"}
Dec 09 12:12:08 crc kubenswrapper[4970]: I1209 12:12:08.623085 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6a0407cf-adba-4186-83b2-a32df6b4fc82","Type":"ContainerStarted","Data":"84e32b7b794e01010e8dffb07d5114aee20d26175cd50622426ab2606b7e3479"}
Dec 09 12:12:08 crc kubenswrapper[4970]: I1209 12:12:08.631587 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-56499464c7-2qkcl"]
Dec 09 12:12:09 crc kubenswrapper[4970]: I1209 12:12:09.636377 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-56499464c7-2qkcl" event={"ID":"f67d07f0-b9f1-4704-be66-9c152defe602","Type":"ContainerStarted","Data":"87d76ae2b1c110fea066801e4a83ab860c32962dd9842666f4e2e940579e0ac5"}
Dec 09 12:12:10 crc kubenswrapper[4970]: I1209 12:12:10.762487 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-lcwvc"]
Dec 09 12:12:10 crc kubenswrapper[4970]: I1209 12:12:10.763846 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-lcwvc"
Dec 09 12:12:10 crc kubenswrapper[4970]: I1209 12:12:10.778562 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-lcwvc"]
Dec 09 12:12:10 crc kubenswrapper[4970]: I1209 12:12:10.944385 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/df24d682-ae2b-430f-aed2-9d99ab6d4ef9-bound-sa-token\") pod \"image-registry-66df7c8f76-lcwvc\" (UID: \"df24d682-ae2b-430f-aed2-9d99ab6d4ef9\") " pod="openshift-image-registry/image-registry-66df7c8f76-lcwvc"
Dec 09 12:12:10 crc kubenswrapper[4970]: I1209 12:12:10.944458 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/df24d682-ae2b-430f-aed2-9d99ab6d4ef9-registry-tls\") pod \"image-registry-66df7c8f76-lcwvc\" (UID: \"df24d682-ae2b-430f-aed2-9d99ab6d4ef9\") " pod="openshift-image-registry/image-registry-66df7c8f76-lcwvc"
Dec 09 12:12:10 crc kubenswrapper[4970]: I1209 12:12:10.944482 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/df24d682-ae2b-430f-aed2-9d99ab6d4ef9-ca-trust-extracted\") pod \"image-registry-66df7c8f76-lcwvc\" (UID: \"df24d682-ae2b-430f-aed2-9d99ab6d4ef9\") " pod="openshift-image-registry/image-registry-66df7c8f76-lcwvc"
Dec 09 12:12:10 crc kubenswrapper[4970]: I1209 12:12:10.944601 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/df24d682-ae2b-430f-aed2-9d99ab6d4ef9-registry-certificates\") pod \"image-registry-66df7c8f76-lcwvc\" (UID: \"df24d682-ae2b-430f-aed2-9d99ab6d4ef9\") " pod="openshift-image-registry/image-registry-66df7c8f76-lcwvc"
Dec 09 12:12:10 crc kubenswrapper[4970]: I1209 12:12:10.944655 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/df24d682-ae2b-430f-aed2-9d99ab6d4ef9-trusted-ca\") pod \"image-registry-66df7c8f76-lcwvc\" (UID: \"df24d682-ae2b-430f-aed2-9d99ab6d4ef9\") " pod="openshift-image-registry/image-registry-66df7c8f76-lcwvc"
Dec 09 12:12:10 crc kubenswrapper[4970]: I1209 12:12:10.944702 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzhrz\" (UniqueName: \"kubernetes.io/projected/df24d682-ae2b-430f-aed2-9d99ab6d4ef9-kube-api-access-xzhrz\") pod \"image-registry-66df7c8f76-lcwvc\" (UID: \"df24d682-ae2b-430f-aed2-9d99ab6d4ef9\") " pod="openshift-image-registry/image-registry-66df7c8f76-lcwvc"
Dec 09 12:12:10 crc kubenswrapper[4970]: I1209 12:12:10.944736 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/df24d682-ae2b-430f-aed2-9d99ab6d4ef9-installation-pull-secrets\") pod \"image-registry-66df7c8f76-lcwvc\" (UID: \"df24d682-ae2b-430f-aed2-9d99ab6d4ef9\") " pod="openshift-image-registry/image-registry-66df7c8f76-lcwvc"
Dec 09 12:12:10 crc kubenswrapper[4970]: I1209 12:12:10.944767 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-lcwvc\" (UID: \"df24d682-ae2b-430f-aed2-9d99ab6d4ef9\") " pod="openshift-image-registry/image-registry-66df7c8f76-lcwvc"
Dec 09 12:12:10 crc kubenswrapper[4970]: I1209 12:12:10.976999 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-lcwvc\" (UID: \"df24d682-ae2b-430f-aed2-9d99ab6d4ef9\") " pod="openshift-image-registry/image-registry-66df7c8f76-lcwvc"
Dec 09 12:12:11 crc kubenswrapper[4970]: I1209 12:12:11.046611 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/df24d682-ae2b-430f-aed2-9d99ab6d4ef9-bound-sa-token\") pod \"image-registry-66df7c8f76-lcwvc\" (UID: \"df24d682-ae2b-430f-aed2-9d99ab6d4ef9\") " pod="openshift-image-registry/image-registry-66df7c8f76-lcwvc"
Dec 09 12:12:11 crc kubenswrapper[4970]: I1209 12:12:11.046715 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/df24d682-ae2b-430f-aed2-9d99ab6d4ef9-registry-tls\") pod \"image-registry-66df7c8f76-lcwvc\" (UID: \"df24d682-ae2b-430f-aed2-9d99ab6d4ef9\") " pod="openshift-image-registry/image-registry-66df7c8f76-lcwvc"
Dec 09 12:12:11 crc kubenswrapper[4970]: I1209 12:12:11.046733 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/df24d682-ae2b-430f-aed2-9d99ab6d4ef9-ca-trust-extracted\") pod \"image-registry-66df7c8f76-lcwvc\" (UID: \"df24d682-ae2b-430f-aed2-9d99ab6d4ef9\") " pod="openshift-image-registry/image-registry-66df7c8f76-lcwvc"
Dec 09 12:12:11 crc kubenswrapper[4970]: I1209 12:12:11.046765 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/df24d682-ae2b-430f-aed2-9d99ab6d4ef9-registry-certificates\") pod \"image-registry-66df7c8f76-lcwvc\" (UID: \"df24d682-ae2b-430f-aed2-9d99ab6d4ef9\") " pod="openshift-image-registry/image-registry-66df7c8f76-lcwvc"
Dec 09 12:12:11 crc kubenswrapper[4970]: I1209 12:12:11.046783 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/df24d682-ae2b-430f-aed2-9d99ab6d4ef9-trusted-ca\") pod \"image-registry-66df7c8f76-lcwvc\" (UID: \"df24d682-ae2b-430f-aed2-9d99ab6d4ef9\") " pod="openshift-image-registry/image-registry-66df7c8f76-lcwvc"
Dec 09 12:12:11 crc kubenswrapper[4970]: I1209 12:12:11.046804 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzhrz\" (UniqueName: \"kubernetes.io/projected/df24d682-ae2b-430f-aed2-9d99ab6d4ef9-kube-api-access-xzhrz\") pod \"image-registry-66df7c8f76-lcwvc\" (UID: \"df24d682-ae2b-430f-aed2-9d99ab6d4ef9\") " pod="openshift-image-registry/image-registry-66df7c8f76-lcwvc"
Dec 09 12:12:11 crc kubenswrapper[4970]: I1209 12:12:11.046823 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/df24d682-ae2b-430f-aed2-9d99ab6d4ef9-installation-pull-secrets\") pod \"image-registry-66df7c8f76-lcwvc\" (UID: \"df24d682-ae2b-430f-aed2-9d99ab6d4ef9\") " pod="openshift-image-registry/image-registry-66df7c8f76-lcwvc"
Dec 09 12:12:11 crc kubenswrapper[4970]: I1209 12:12:11.048177 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/df24d682-ae2b-430f-aed2-9d99ab6d4ef9-registry-certificates\") pod \"image-registry-66df7c8f76-lcwvc\" (UID: \"df24d682-ae2b-430f-aed2-9d99ab6d4ef9\") " pod="openshift-image-registry/image-registry-66df7c8f76-lcwvc"
Dec 09 12:12:11 crc kubenswrapper[4970]: I1209 12:12:11.048420 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/df24d682-ae2b-430f-aed2-9d99ab6d4ef9-ca-trust-extracted\") pod \"image-registry-66df7c8f76-lcwvc\" (UID: \"df24d682-ae2b-430f-aed2-9d99ab6d4ef9\") " pod="openshift-image-registry/image-registry-66df7c8f76-lcwvc"
Dec 09 12:12:11 crc kubenswrapper[4970]: I1209 12:12:11.061572 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/df24d682-ae2b-430f-aed2-9d99ab6d4ef9-registry-tls\") pod \"image-registry-66df7c8f76-lcwvc\" (UID: \"df24d682-ae2b-430f-aed2-9d99ab6d4ef9\") " pod="openshift-image-registry/image-registry-66df7c8f76-lcwvc"
Dec 09 12:12:11 crc kubenswrapper[4970]: I1209 12:12:11.062074 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/df24d682-ae2b-430f-aed2-9d99ab6d4ef9-trusted-ca\") pod \"image-registry-66df7c8f76-lcwvc\" (UID: \"df24d682-ae2b-430f-aed2-9d99ab6d4ef9\") " pod="openshift-image-registry/image-registry-66df7c8f76-lcwvc"
Dec 09 12:12:11 crc kubenswrapper[4970]: I1209 12:12:11.062633 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/df24d682-ae2b-430f-aed2-9d99ab6d4ef9-bound-sa-token\") pod \"image-registry-66df7c8f76-lcwvc\" (UID: \"df24d682-ae2b-430f-aed2-9d99ab6d4ef9\") " pod="openshift-image-registry/image-registry-66df7c8f76-lcwvc"
Dec 09 12:12:11 crc kubenswrapper[4970]: I1209 12:12:11.063219 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzhrz\" (UniqueName: \"kubernetes.io/projected/df24d682-ae2b-430f-aed2-9d99ab6d4ef9-kube-api-access-xzhrz\") pod \"image-registry-66df7c8f76-lcwvc\" (UID: \"df24d682-ae2b-430f-aed2-9d99ab6d4ef9\") " pod="openshift-image-registry/image-registry-66df7c8f76-lcwvc"
Dec 09 12:12:11 crc kubenswrapper[4970]: I1209 12:12:11.064826 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/df24d682-ae2b-430f-aed2-9d99ab6d4ef9-installation-pull-secrets\") pod \"image-registry-66df7c8f76-lcwvc\" (UID: \"df24d682-ae2b-430f-aed2-9d99ab6d4ef9\") " pod="openshift-image-registry/image-registry-66df7c8f76-lcwvc"
Dec 09 12:12:11 crc kubenswrapper[4970]: I1209 12:12:11.087809 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-lcwvc"
Dec 09 12:12:12 crc kubenswrapper[4970]: I1209 12:12:12.644953 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-lcwvc"]
Dec 09 12:12:12 crc kubenswrapper[4970]: W1209 12:12:12.652392 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf24d682_ae2b_430f_aed2_9d99ab6d4ef9.slice/crio-fa14f223451442a6cc79e066f1a07e101033cdfe9ca16abce3c7611329670f37 WatchSource:0}: Error finding container fa14f223451442a6cc79e066f1a07e101033cdfe9ca16abce3c7611329670f37: Status 404 returned error can't find the container with id fa14f223451442a6cc79e066f1a07e101033cdfe9ca16abce3c7611329670f37
Dec 09 12:12:12 crc kubenswrapper[4970]: I1209 12:12:12.662951 4970 generic.go:334] "Generic (PLEG): container finished" podID="6a0407cf-adba-4186-83b2-a32df6b4fc82" containerID="6d888b56bba44e5a798ede9995eb37492c354aa06ce7696faa489c2508e191c0" exitCode=0
Dec 09 12:12:12 crc kubenswrapper[4970]: I1209 12:12:12.663028 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6a0407cf-adba-4186-83b2-a32df6b4fc82","Type":"ContainerDied","Data":"6d888b56bba44e5a798ede9995eb37492c354aa06ce7696faa489c2508e191c0"}
Dec 09 12:12:12 crc kubenswrapper[4970]: I1209 12:12:12.670516 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-56499464c7-2qkcl" event={"ID":"f67d07f0-b9f1-4704-be66-9c152defe602","Type":"ContainerStarted","Data":"e1257d7688c5d534d44c38c0d74084e32a8629acc4f06b6395ce2b47e5ee4ff4"}
Dec 09 12:12:12 crc kubenswrapper[4970]: I1209 12:12:12.671944 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-56499464c7-2qkcl"
Dec 09 12:12:12 crc kubenswrapper[4970]: I1209 12:12:12.676999 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-84k2b" event={"ID":"e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a","Type":"ContainerStarted","Data":"18e290eaca7b2ef2c2b8378925f25155a1f524aed9310b0bcc6c2a7d49a48f60"}
Dec 09 12:12:12 crc kubenswrapper[4970]: I1209 12:12:12.680307 4970 generic.go:334] "Generic (PLEG): container finished" podID="527ab3cb-c2bf-4904-9ca0-08fd1b20350f" containerID="5880676f330cf4e40535c77e448af12a92bb6a23b5d8716dcbcc73baab1385b3" exitCode=0
Dec 09 12:12:12 crc kubenswrapper[4970]: I1209 12:12:12.680471 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"527ab3cb-c2bf-4904-9ca0-08fd1b20350f","Type":"ContainerDied","Data":"5880676f330cf4e40535c77e448af12a92bb6a23b5d8716dcbcc73baab1385b3"}
Dec 09 12:12:12 crc kubenswrapper[4970]: I1209 12:12:12.695713 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-56499464c7-2qkcl"
Dec 09 12:12:12 crc kubenswrapper[4970]: I1209 12:12:12.714319 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-64c6668cd-k7dhn" event={"ID":"088add4e-94cf-435b-8190-75d69aa580ea","Type":"ContainerStarted","Data":"35588acfeb3a7be892b68e9399726d5910e3bd58af734578c3ed888127fd6f2d"}
Dec 09 12:12:12 crc kubenswrapper[4970]: I1209 12:12:12.722331 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-56499464c7-2qkcl" podStartSLOduration=2.905509013 podStartE2EDuration="5.722307487s" podCreationTimestamp="2025-12-09 12:12:07 +0000 UTC" firstStartedPulling="2025-12-09 12:12:09.459941404 +0000 UTC m=+342.020422505" lastFinishedPulling="2025-12-09 12:12:12.276739938 +0000 UTC m=+344.837220979" observedRunningTime="2025-12-09 12:12:12.718282284 +0000 UTC m=+345.278763335" watchObservedRunningTime="2025-12-09 12:12:12.722307487 +0000 UTC m=+345.282788538"
Dec 09 12:12:12 crc kubenswrapper[4970]: I1209 12:12:12.751285 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-79b4c8877b-zqvdr" event={"ID":"8e31e56e-ed2d-450f-ba7f-aaf3cce168ba","Type":"ContainerStarted","Data":"374b9d8ff8d9a9ecfffc4719ce6d828139bd1c096b779b72f5c567741a168cc8"}
Dec 09 12:12:12 crc kubenswrapper[4970]: I1209 12:12:12.752608 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-lcwvc" event={"ID":"df24d682-ae2b-430f-aed2-9d99ab6d4ef9","Type":"ContainerStarted","Data":"fa14f223451442a6cc79e066f1a07e101033cdfe9ca16abce3c7611329670f37"}
Dec 09 12:12:12 crc kubenswrapper[4970]: I1209 12:12:12.794734 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-64c6668cd-k7dhn" podStartSLOduration=1.759458542 podStartE2EDuration="6.794714584s" podCreationTimestamp="2025-12-09 12:12:06 +0000 UTC" firstStartedPulling="2025-12-09 12:12:07.215392706 +0000 UTC m=+339.775873757" lastFinishedPulling="2025-12-09 12:12:12.250648748 +0000 UTC m=+344.811129799" observedRunningTime="2025-12-09 12:12:12.791293479 +0000 UTC m=+345.351774550" watchObservedRunningTime="2025-12-09 12:12:12.794714584 +0000 UTC m=+345.355195635"
Dec 09 12:12:13 crc kubenswrapper[4970]: I1209 12:12:13.763698 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-84k2b" event={"ID":"e6d403d5-f7bb-4f3f-a5ae-24f4af216e7a","Type":"ContainerStarted","Data":"d68bf5daed56e7168927fab9a902cd7ebcb867e4802effc8e51940d2847248a5"}
Dec 09 12:12:13 crc kubenswrapper[4970]: I1209 12:12:13.767611 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-79b4c8877b-zqvdr" event={"ID":"8e31e56e-ed2d-450f-ba7f-aaf3cce168ba","Type":"ContainerStarted","Data":"8f44eafa1064f478d1f56dc1d6135e23d0e1b2f3ca058679b0104737c680ebde"}
Dec 09 12:12:13 crc kubenswrapper[4970]: I1209 12:12:13.767648 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-79b4c8877b-zqvdr" event={"ID":"8e31e56e-ed2d-450f-ba7f-aaf3cce168ba","Type":"ContainerStarted","Data":"a16f4805755a282ccba1551e4f902f6a0c7fb74f685f102b192af8d89ef88a32"}
Dec 09 12:12:13 crc kubenswrapper[4970]: I1209 12:12:13.769720 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-lcwvc" event={"ID":"df24d682-ae2b-430f-aed2-9d99ab6d4ef9","Type":"ContainerStarted","Data":"416854ef04b0a56965500f31a54b40a17120fbd99ddce21deb6a772764b14ed5"}
Dec 09 12:12:13 crc kubenswrapper[4970]: I1209 12:12:13.769857 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-lcwvc"
Dec 09 12:12:13 crc kubenswrapper[4970]: I1209 12:12:13.785930 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-84k2b" podStartSLOduration=11.435177575 podStartE2EDuration="12.785912683s" podCreationTimestamp="2025-12-09 12:12:01 +0000 UTC" firstStartedPulling="2025-12-09 12:12:06.543929462 +0000 UTC m=+339.104410513" lastFinishedPulling="2025-12-09 12:12:07.89466457 +0000 UTC m=+340.455145621" observedRunningTime="2025-12-09 12:12:13.783400693 +0000 UTC m=+346.343881764" watchObservedRunningTime="2025-12-09 12:12:13.785912683 +0000 UTC m=+346.346393724"
Dec 09 12:12:13 crc kubenswrapper[4970]: I1209 12:12:13.811444 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-lcwvc" podStartSLOduration=3.811417808 podStartE2EDuration="3.811417808s" podCreationTimestamp="2025-12-09 12:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:12:13.8064939 +0000 UTC m=+346.366974951" watchObservedRunningTime="2025-12-09 12:12:13.811417808 +0000 UTC m=+346.371898879"
Dec 09 12:12:14 crc kubenswrapper[4970]: I1209 12:12:14.775265 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"527ab3cb-c2bf-4904-9ca0-08fd1b20350f","Type":"ContainerStarted","Data":"caf9c4a809481e2585dc09bd1abf5586f1fa4379353277b8f293e253ac11b88b"}
Dec 09 12:12:14 crc kubenswrapper[4970]: I1209 12:12:14.777762 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-79b4c8877b-zqvdr" event={"ID":"8e31e56e-ed2d-450f-ba7f-aaf3cce168ba","Type":"ContainerStarted","Data":"3ceec9c211c311addbe91c3ce5626346911c87ed89b39f32906d9eeeff7ac621"}
Dec 09 12:12:16 crc kubenswrapper[4970]: I1209 12:12:16.011300 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 12:12:16 crc kubenswrapper[4970]: I1209 12:12:16.011648 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 12:12:16 crc kubenswrapper[4970]: I1209 12:12:16.798567 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-79b4c8877b-zqvdr" event={"ID":"8e31e56e-ed2d-450f-ba7f-aaf3cce168ba","Type":"ContainerStarted","Data":"bebc67ccd0e63964af7ceac52377c8a78f56d76e6b10601061e8d5827ee63552"}
Dec 09 12:12:16 crc kubenswrapper[4970]: I1209 12:12:16.798611 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-79b4c8877b-zqvdr" event={"ID":"8e31e56e-ed2d-450f-ba7f-aaf3cce168ba","Type":"ContainerStarted","Data":"981b8e723067af55f91119ee32ef8782ac115d744580ebbf50888c077d6629b7"}
Dec 09 12:12:16 crc kubenswrapper[4970]: I1209 12:12:16.799449 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-79b4c8877b-zqvdr"
Dec 09 12:12:16 crc kubenswrapper[4970]: I1209 12:12:16.808076 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_6a0407cf-adba-4186-83b2-a32df6b4fc82/prometheus/0.log"
Dec 09 12:12:16 crc kubenswrapper[4970]: I1209 12:12:16.808541 4970 generic.go:334] "Generic (PLEG): container finished" podID="6a0407cf-adba-4186-83b2-a32df6b4fc82" containerID="2cba90efc98d5a26e4a0878930ffe2f980ff1f871c767b3f3cfb625378baf4a7" exitCode=1
Dec 09 12:12:16 crc kubenswrapper[4970]: I1209 12:12:16.808620 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6a0407cf-adba-4186-83b2-a32df6b4fc82","Type":"ContainerStarted","Data":"b9c3443211f7429a9e813f7085ee62789701b7b54324488b8e4e404c92bb5a81"}
Dec 09 12:12:16 crc kubenswrapper[4970]: I1209 12:12:16.808680 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6a0407cf-adba-4186-83b2-a32df6b4fc82","Type":"ContainerStarted","Data":"3a17cf7534f750cc26c52828f3981ab945b5aa69185bf6f27e4363175ca532b6"}
Dec 09 12:12:16 crc kubenswrapper[4970]: I1209 12:12:16.808690 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6a0407cf-adba-4186-83b2-a32df6b4fc82","Type":"ContainerStarted","Data":"d73f11c51d6ba72ec664a33b59fe8f57143977fe280419e194947959dd61c8d6"}
Dec 09 12:12:16 crc kubenswrapper[4970]: I1209 12:12:16.808699 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6a0407cf-adba-4186-83b2-a32df6b4fc82","Type":"ContainerDied","Data":"2cba90efc98d5a26e4a0878930ffe2f980ff1f871c767b3f3cfb625378baf4a7"}
Dec 09 12:12:16 crc kubenswrapper[4970]: I1209 12:12:16.812352 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"527ab3cb-c2bf-4904-9ca0-08fd1b20350f","Type":"ContainerStarted","Data":"3d64879383e88d2a1eb886aecd71565564a09aa4ffa2e5435559798a18bd4542"}
Dec 09 12:12:16 crc kubenswrapper[4970]: I1209 12:12:16.812384 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"527ab3cb-c2bf-4904-9ca0-08fd1b20350f","Type":"ContainerStarted","Data":"123a807e140f8403c51e932e6aee7cfe5831c00dc4ee114284d26b0707be7318"}
Dec 09 12:12:16 crc kubenswrapper[4970]: I1209 12:12:16.812394 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"527ab3cb-c2bf-4904-9ca0-08fd1b20350f","Type":"ContainerStarted","Data":"5c4efcaf63e1d9d4b10c8f34ca9b68ae088aaae255b4cb1f40dfdd494ec7cee5"}
Dec 09 12:12:16 crc kubenswrapper[4970]: I1209 12:12:16.812403 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"527ab3cb-c2bf-4904-9ca0-08fd1b20350f","Type":"ContainerStarted","Data":"a89aa5cd0a2fda18604f92ca8dbbed270599c064d42731cc36a5d109fb7b4f3f"}
Dec 09 12:12:16 crc kubenswrapper[4970]: I1209 12:12:16.822495 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-79b4c8877b-zqvdr" podStartSLOduration=6.39014895 podStartE2EDuration="13.822475322s" podCreationTimestamp="2025-12-09 12:12:03 +0000 UTC" firstStartedPulling="2025-12-09 12:12:06.961835245 +0000 UTC m=+339.522316296" lastFinishedPulling="2025-12-09 12:12:14.394161617 +0000 UTC m=+346.954642668" observedRunningTime="2025-12-09 12:12:16.815762074 +0000 UTC m=+349.376243125" watchObservedRunningTime="2025-12-09 12:12:16.822475322 +0000 UTC m=+349.382956373"
Dec 09 12:12:17 crc kubenswrapper[4970]: I1209 12:12:17.827284 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_6a0407cf-adba-4186-83b2-a32df6b4fc82/prometheus/0.log"
Dec 09 12:12:17 crc kubenswrapper[4970]: I1209 12:12:17.827928 4970 kubelet.go:2453]
"SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6a0407cf-adba-4186-83b2-a32df6b4fc82","Type":"ContainerStarted","Data":"144493dae70461cb81fa5998528e34da9bd529c87b9abebdef830ceb30438215"} Dec 09 12:12:17 crc kubenswrapper[4970]: I1209 12:12:17.827983 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6a0407cf-adba-4186-83b2-a32df6b4fc82","Type":"ContainerStarted","Data":"e9523471b508804ca105729a31d639cb94ae82bfb18208e89e194e56f711f809"} Dec 09 12:12:17 crc kubenswrapper[4970]: I1209 12:12:17.829026 4970 scope.go:117] "RemoveContainer" containerID="2cba90efc98d5a26e4a0878930ffe2f980ff1f871c767b3f3cfb625378baf4a7" Dec 09 12:12:17 crc kubenswrapper[4970]: I1209 12:12:17.831821 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"527ab3cb-c2bf-4904-9ca0-08fd1b20350f","Type":"ContainerStarted","Data":"9985889a5f9fdb6eb1af76e4479fae8934e3e4d6d7dc6b6ff58d8e14636ed5c8"} Dec 09 12:12:17 crc kubenswrapper[4970]: I1209 12:12:17.840197 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-79b4c8877b-zqvdr" Dec 09 12:12:17 crc kubenswrapper[4970]: I1209 12:12:17.889232 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=8.516461052 podStartE2EDuration="15.889212906s" podCreationTimestamp="2025-12-09 12:12:02 +0000 UTC" firstStartedPulling="2025-12-09 12:12:07.035961351 +0000 UTC m=+339.596442402" lastFinishedPulling="2025-12-09 12:12:14.408713205 +0000 UTC m=+346.969194256" observedRunningTime="2025-12-09 12:12:17.88755716 +0000 UTC m=+350.448038231" watchObservedRunningTime="2025-12-09 12:12:17.889212906 +0000 UTC m=+350.449693957" Dec 09 12:12:18 crc kubenswrapper[4970]: I1209 12:12:18.840077 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_6a0407cf-adba-4186-83b2-a32df6b4fc82/prometheus/1.log" Dec 09 12:12:18 crc kubenswrapper[4970]: I1209 12:12:18.842686 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_6a0407cf-adba-4186-83b2-a32df6b4fc82/prometheus/0.log" Dec 09 12:12:18 crc kubenswrapper[4970]: I1209 12:12:18.843374 4970 generic.go:334] "Generic (PLEG): container finished" podID="6a0407cf-adba-4186-83b2-a32df6b4fc82" containerID="75836bb4126850b7e10d6f5ffb75498c563e4ab55ff00b9f4d070c948f660528" exitCode=1 Dec 09 12:12:18 crc kubenswrapper[4970]: I1209 12:12:18.843445 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6a0407cf-adba-4186-83b2-a32df6b4fc82","Type":"ContainerDied","Data":"75836bb4126850b7e10d6f5ffb75498c563e4ab55ff00b9f4d070c948f660528"} Dec 09 12:12:18 crc kubenswrapper[4970]: I1209 12:12:18.843517 4970 scope.go:117] "RemoveContainer" containerID="2cba90efc98d5a26e4a0878930ffe2f980ff1f871c767b3f3cfb625378baf4a7" Dec 09 12:12:18 crc kubenswrapper[4970]: I1209 12:12:18.844550 4970 scope.go:117] "RemoveContainer" containerID="75836bb4126850b7e10d6f5ffb75498c563e4ab55ff00b9f4d070c948f660528" Dec 09 12:12:18 crc kubenswrapper[4970]: E1209 12:12:18.845421 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=prometheus 
pod=prometheus-k8s-0_openshift-monitoring(6a0407cf-adba-4186-83b2-a32df6b4fc82)\"" pod="openshift-monitoring/prometheus-k8s-0" podUID="6a0407cf-adba-4186-83b2-a32df6b4fc82" Dec 09 12:12:19 crc kubenswrapper[4970]: I1209 12:12:19.402841 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5d95d9c75f-6pp2x"] Dec 09 12:12:19 crc kubenswrapper[4970]: I1209 12:12:19.403736 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5d95d9c75f-6pp2x" Dec 09 12:12:19 crc kubenswrapper[4970]: I1209 12:12:19.421933 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d95d9c75f-6pp2x"] Dec 09 12:12:19 crc kubenswrapper[4970]: I1209 12:12:19.579129 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-console-oauth-config\") pod \"console-5d95d9c75f-6pp2x\" (UID: \"32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7\") " pod="openshift-console/console-5d95d9c75f-6pp2x" Dec 09 12:12:19 crc kubenswrapper[4970]: I1209 12:12:19.579513 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-console-serving-cert\") pod \"console-5d95d9c75f-6pp2x\" (UID: \"32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7\") " pod="openshift-console/console-5d95d9c75f-6pp2x" Dec 09 12:12:19 crc kubenswrapper[4970]: I1209 12:12:19.579547 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-service-ca\") pod \"console-5d95d9c75f-6pp2x\" (UID: \"32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7\") " pod="openshift-console/console-5d95d9c75f-6pp2x" Dec 09 12:12:19 crc kubenswrapper[4970]: I1209 12:12:19.579584 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wncd5\" (UniqueName: \"kubernetes.io/projected/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-kube-api-access-wncd5\") pod \"console-5d95d9c75f-6pp2x\" (UID: \"32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7\") " pod="openshift-console/console-5d95d9c75f-6pp2x" Dec 09 12:12:19 crc kubenswrapper[4970]: I1209 12:12:19.579609 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-console-config\") pod \"console-5d95d9c75f-6pp2x\" (UID: \"32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7\") " pod="openshift-console/console-5d95d9c75f-6pp2x" Dec 09 12:12:19 crc kubenswrapper[4970]: I1209 12:12:19.579782 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-oauth-serving-cert\") pod \"console-5d95d9c75f-6pp2x\" (UID: \"32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7\") " pod="openshift-console/console-5d95d9c75f-6pp2x" Dec 09 12:12:19 crc kubenswrapper[4970]: I1209 12:12:19.579850 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-trusted-ca-bundle\") pod \"console-5d95d9c75f-6pp2x\" (UID: \"32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7\") " 
pod="openshift-console/console-5d95d9c75f-6pp2x" Dec 09 12:12:19 crc kubenswrapper[4970]: I1209 12:12:19.680808 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-console-serving-cert\") pod \"console-5d95d9c75f-6pp2x\" (UID: \"32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7\") " pod="openshift-console/console-5d95d9c75f-6pp2x" Dec 09 12:12:19 crc kubenswrapper[4970]: I1209 12:12:19.680871 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-service-ca\") pod \"console-5d95d9c75f-6pp2x\" (UID: \"32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7\") " pod="openshift-console/console-5d95d9c75f-6pp2x" Dec 09 12:12:19 crc kubenswrapper[4970]: I1209 12:12:19.680901 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wncd5\" (UniqueName: \"kubernetes.io/projected/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-kube-api-access-wncd5\") pod \"console-5d95d9c75f-6pp2x\" (UID: \"32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7\") " pod="openshift-console/console-5d95d9c75f-6pp2x" Dec 09 12:12:19 crc kubenswrapper[4970]: I1209 12:12:19.680928 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-console-config\") pod \"console-5d95d9c75f-6pp2x\" (UID: \"32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7\") " pod="openshift-console/console-5d95d9c75f-6pp2x" Dec 09 12:12:19 crc kubenswrapper[4970]: I1209 12:12:19.680951 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-oauth-serving-cert\") pod \"console-5d95d9c75f-6pp2x\" (UID: \"32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7\") " pod="openshift-console/console-5d95d9c75f-6pp2x" Dec 09 12:12:19 crc kubenswrapper[4970]: I1209 12:12:19.680976 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-trusted-ca-bundle\") pod \"console-5d95d9c75f-6pp2x\" (UID: \"32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7\") " pod="openshift-console/console-5d95d9c75f-6pp2x" Dec 09 12:12:19 crc kubenswrapper[4970]: I1209 12:12:19.681021 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-console-oauth-config\") pod \"console-5d95d9c75f-6pp2x\" (UID: \"32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7\") " pod="openshift-console/console-5d95d9c75f-6pp2x" Dec 09 12:12:19 crc kubenswrapper[4970]: I1209 12:12:19.683060 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-oauth-serving-cert\") pod \"console-5d95d9c75f-6pp2x\" (UID: \"32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7\") " pod="openshift-console/console-5d95d9c75f-6pp2x" Dec 09 12:12:19 crc kubenswrapper[4970]: I1209 12:12:19.683106 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-console-config\") pod \"console-5d95d9c75f-6pp2x\" (UID: \"32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7\") " 
pod="openshift-console/console-5d95d9c75f-6pp2x" Dec 09 12:12:19 crc kubenswrapper[4970]: I1209 12:12:19.683425 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-service-ca\") pod \"console-5d95d9c75f-6pp2x\" (UID: \"32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7\") " pod="openshift-console/console-5d95d9c75f-6pp2x" Dec 09 12:12:19 crc kubenswrapper[4970]: I1209 12:12:19.684291 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-trusted-ca-bundle\") pod \"console-5d95d9c75f-6pp2x\" (UID: \"32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7\") " pod="openshift-console/console-5d95d9c75f-6pp2x" Dec 09 12:12:19 crc kubenswrapper[4970]: I1209 12:12:19.690007 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-console-oauth-config\") pod \"console-5d95d9c75f-6pp2x\" (UID: \"32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7\") " pod="openshift-console/console-5d95d9c75f-6pp2x" Dec 09 12:12:19 crc kubenswrapper[4970]: I1209 12:12:19.697414 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wncd5\" (UniqueName: \"kubernetes.io/projected/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-kube-api-access-wncd5\") pod \"console-5d95d9c75f-6pp2x\" (UID: \"32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7\") " pod="openshift-console/console-5d95d9c75f-6pp2x" Dec 09 12:12:19 crc kubenswrapper[4970]: I1209 12:12:19.702859 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-console-serving-cert\") pod \"console-5d95d9c75f-6pp2x\" (UID: \"32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7\") " pod="openshift-console/console-5d95d9c75f-6pp2x" Dec 09 12:12:19 crc kubenswrapper[4970]: I1209 12:12:19.721177 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5d95d9c75f-6pp2x" Dec 09 12:12:19 crc kubenswrapper[4970]: I1209 12:12:19.850281 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_6a0407cf-adba-4186-83b2-a32df6b4fc82/prometheus/1.log" Dec 09 12:12:19 crc kubenswrapper[4970]: I1209 12:12:19.852630 4970 scope.go:117] "RemoveContainer" containerID="75836bb4126850b7e10d6f5ffb75498c563e4ab55ff00b9f4d070c948f660528" Dec 09 12:12:19 crc kubenswrapper[4970]: E1209 12:12:19.852968 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=prometheus pod=prometheus-k8s-0_openshift-monitoring(6a0407cf-adba-4186-83b2-a32df6b4fc82)\"" pod="openshift-monitoring/prometheus-k8s-0" podUID="6a0407cf-adba-4186-83b2-a32df6b4fc82" Dec 09 12:12:20 crc kubenswrapper[4970]: I1209 12:12:20.154525 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d95d9c75f-6pp2x"] Dec 09 12:12:20 crc kubenswrapper[4970]: W1209 12:12:20.163653 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32d9e699_1bc0_4f10_bf8f_ddcfc98d12d7.slice/crio-d9b87ca13e4bd2b32d7a5f53faddf34d832ce7d0e7c46c4a5f32a9ce7079eb3b WatchSource:0}: Error finding container d9b87ca13e4bd2b32d7a5f53faddf34d832ce7d0e7c46c4a5f32a9ce7079eb3b: Status 404 returned error can't find the container with id d9b87ca13e4bd2b32d7a5f53faddf34d832ce7d0e7c46c4a5f32a9ce7079eb3b Dec 09 12:12:20 crc kubenswrapper[4970]: I1209 12:12:20.857774 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d95d9c75f-6pp2x" event={"ID":"32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7","Type":"ContainerStarted","Data":"d9b87ca13e4bd2b32d7a5f53faddf34d832ce7d0e7c46c4a5f32a9ce7079eb3b"} Dec 09 12:12:21 crc kubenswrapper[4970]: I1209 12:12:21.865319 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d95d9c75f-6pp2x" event={"ID":"32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7","Type":"ContainerStarted","Data":"fbc04a67ce195d64f107cb91788d7bd25cc3d1fd188fba41f5aebba435484032"} Dec 09 12:12:21 crc kubenswrapper[4970]: I1209 12:12:21.886005 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5d95d9c75f-6pp2x" podStartSLOduration=2.885987456 podStartE2EDuration="2.885987456s" podCreationTimestamp="2025-12-09 12:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:12:21.882851528 +0000 UTC m=+354.443332579" watchObservedRunningTime="2025-12-09 12:12:21.885987456 +0000 UTC m=+354.446468507" Dec 09 12:12:22 crc kubenswrapper[4970]: I1209 12:12:22.729346 4970 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Dec 09 12:12:22 crc kubenswrapper[4970]: I1209 12:12:22.729685 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Dec 09 12:12:22 crc kubenswrapper[4970]: I1209 12:12:22.730387 4970 scope.go:117] "RemoveContainer" containerID="75836bb4126850b7e10d6f5ffb75498c563e4ab55ff00b9f4d070c948f660528" Dec 09 12:12:22 crc kubenswrapper[4970]: E1209 12:12:22.730739 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with CrashLoopBackOff: 
\"back-off 10s restarting failed container=prometheus pod=prometheus-k8s-0_openshift-monitoring(6a0407cf-adba-4186-83b2-a32df6b4fc82)\"" pod="openshift-monitoring/prometheus-k8s-0" podUID="6a0407cf-adba-4186-83b2-a32df6b4fc82" Dec 09 12:12:26 crc kubenswrapper[4970]: I1209 12:12:26.707330 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-64c6668cd-k7dhn" Dec 09 12:12:26 crc kubenswrapper[4970]: I1209 12:12:26.707662 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-64c6668cd-k7dhn" Dec 09 12:12:29 crc kubenswrapper[4970]: I1209 12:12:29.721812 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5d95d9c75f-6pp2x" Dec 09 12:12:29 crc kubenswrapper[4970]: I1209 12:12:29.722410 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5d95d9c75f-6pp2x" Dec 09 12:12:29 crc kubenswrapper[4970]: I1209 12:12:29.727620 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5d95d9c75f-6pp2x" Dec 09 12:12:29 crc kubenswrapper[4970]: I1209 12:12:29.915555 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5d95d9c75f-6pp2x" Dec 09 12:12:30 crc kubenswrapper[4970]: I1209 12:12:30.024327 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-wqfw9"] Dec 09 12:12:31 crc kubenswrapper[4970]: I1209 12:12:31.093937 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-lcwvc" Dec 09 12:12:31 crc kubenswrapper[4970]: I1209 12:12:31.148094 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lv8jl"] Dec 09 12:12:35 crc kubenswrapper[4970]: I1209 12:12:35.526746 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ctsvn"] Dec 09 12:12:35 crc kubenswrapper[4970]: I1209 12:12:35.528940 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ctsvn" Dec 09 12:12:35 crc kubenswrapper[4970]: I1209 12:12:35.531297 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 09 12:12:35 crc kubenswrapper[4970]: I1209 12:12:35.546030 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ctsvn"] Dec 09 12:12:35 crc kubenswrapper[4970]: I1209 12:12:35.618021 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dtlv\" (UniqueName: \"kubernetes.io/projected/0ab6bbc4-e295-4f79-b8b1-5151d511d302-kube-api-access-8dtlv\") pod \"redhat-operators-ctsvn\" (UID: \"0ab6bbc4-e295-4f79-b8b1-5151d511d302\") " pod="openshift-marketplace/redhat-operators-ctsvn" Dec 09 12:12:35 crc kubenswrapper[4970]: I1209 12:12:35.618215 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ab6bbc4-e295-4f79-b8b1-5151d511d302-utilities\") pod \"redhat-operators-ctsvn\" (UID: \"0ab6bbc4-e295-4f79-b8b1-5151d511d302\") " pod="openshift-marketplace/redhat-operators-ctsvn" Dec 09 12:12:35 crc kubenswrapper[4970]: I1209 12:12:35.618670 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ab6bbc4-e295-4f79-b8b1-5151d511d302-catalog-content\") pod \"redhat-operators-ctsvn\" (UID: \"0ab6bbc4-e295-4f79-b8b1-5151d511d302\") " pod="openshift-marketplace/redhat-operators-ctsvn" Dec 09 12:12:35 crc kubenswrapper[4970]: I1209 12:12:35.720727 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ab6bbc4-e295-4f79-b8b1-5151d511d302-catalog-content\") pod \"redhat-operators-ctsvn\" (UID: \"0ab6bbc4-e295-4f79-b8b1-5151d511d302\") " pod="openshift-marketplace/redhat-operators-ctsvn" Dec 09 12:12:35 crc kubenswrapper[4970]: I1209 12:12:35.720785 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dtlv\" (UniqueName: \"kubernetes.io/projected/0ab6bbc4-e295-4f79-b8b1-5151d511d302-kube-api-access-8dtlv\") pod \"redhat-operators-ctsvn\" (UID: \"0ab6bbc4-e295-4f79-b8b1-5151d511d302\") " pod="openshift-marketplace/redhat-operators-ctsvn" Dec 09 12:12:35 crc kubenswrapper[4970]: I1209 12:12:35.720827 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ab6bbc4-e295-4f79-b8b1-5151d511d302-utilities\") pod \"redhat-operators-ctsvn\" (UID: \"0ab6bbc4-e295-4f79-b8b1-5151d511d302\") " pod="openshift-marketplace/redhat-operators-ctsvn" Dec 09 12:12:35 crc kubenswrapper[4970]: I1209 12:12:35.721464 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ab6bbc4-e295-4f79-b8b1-5151d511d302-catalog-content\") pod \"redhat-operators-ctsvn\" (UID: \"0ab6bbc4-e295-4f79-b8b1-5151d511d302\") " pod="openshift-marketplace/redhat-operators-ctsvn" Dec 09 12:12:35 crc kubenswrapper[4970]: I1209 12:12:35.721536 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ab6bbc4-e295-4f79-b8b1-5151d511d302-utilities\") pod \"redhat-operators-ctsvn\" (UID: \"0ab6bbc4-e295-4f79-b8b1-5151d511d302\") " 
pod="openshift-marketplace/redhat-operators-ctsvn" Dec 09 12:12:35 crc kubenswrapper[4970]: I1209 12:12:35.725876 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bmvnx"] Dec 09 12:12:35 crc kubenswrapper[4970]: I1209 12:12:35.727353 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bmvnx" Dec 09 12:12:35 crc kubenswrapper[4970]: I1209 12:12:35.731565 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 09 12:12:35 crc kubenswrapper[4970]: I1209 12:12:35.742779 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bmvnx"] Dec 09 12:12:35 crc kubenswrapper[4970]: I1209 12:12:35.760946 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dtlv\" (UniqueName: \"kubernetes.io/projected/0ab6bbc4-e295-4f79-b8b1-5151d511d302-kube-api-access-8dtlv\") pod \"redhat-operators-ctsvn\" (UID: \"0ab6bbc4-e295-4f79-b8b1-5151d511d302\") " pod="openshift-marketplace/redhat-operators-ctsvn" Dec 09 12:12:35 crc kubenswrapper[4970]: I1209 12:12:35.822139 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/304240af-a6fc-4b3b-b99c-b494bfcf0c3e-utilities\") pod \"community-operators-bmvnx\" (UID: \"304240af-a6fc-4b3b-b99c-b494bfcf0c3e\") " pod="openshift-marketplace/community-operators-bmvnx" Dec 09 12:12:35 crc kubenswrapper[4970]: I1209 12:12:35.822182 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlsp7\" (UniqueName: \"kubernetes.io/projected/304240af-a6fc-4b3b-b99c-b494bfcf0c3e-kube-api-access-vlsp7\") pod \"community-operators-bmvnx\" (UID: \"304240af-a6fc-4b3b-b99c-b494bfcf0c3e\") " pod="openshift-marketplace/community-operators-bmvnx" Dec 09 12:12:35 crc kubenswrapper[4970]: I1209 12:12:35.822241 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/304240af-a6fc-4b3b-b99c-b494bfcf0c3e-catalog-content\") pod \"community-operators-bmvnx\" (UID: \"304240af-a6fc-4b3b-b99c-b494bfcf0c3e\") " pod="openshift-marketplace/community-operators-bmvnx" Dec 09 12:12:35 crc kubenswrapper[4970]: I1209 12:12:35.855819 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ctsvn" Dec 09 12:12:35 crc kubenswrapper[4970]: I1209 12:12:35.923881 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/304240af-a6fc-4b3b-b99c-b494bfcf0c3e-catalog-content\") pod \"community-operators-bmvnx\" (UID: \"304240af-a6fc-4b3b-b99c-b494bfcf0c3e\") " pod="openshift-marketplace/community-operators-bmvnx" Dec 09 12:12:35 crc kubenswrapper[4970]: I1209 12:12:35.924289 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/304240af-a6fc-4b3b-b99c-b494bfcf0c3e-utilities\") pod \"community-operators-bmvnx\" (UID: \"304240af-a6fc-4b3b-b99c-b494bfcf0c3e\") " pod="openshift-marketplace/community-operators-bmvnx" Dec 09 12:12:35 crc kubenswrapper[4970]: I1209 12:12:35.924336 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlsp7\" (UniqueName: \"kubernetes.io/projected/304240af-a6fc-4b3b-b99c-b494bfcf0c3e-kube-api-access-vlsp7\") pod \"community-operators-bmvnx\" (UID: \"304240af-a6fc-4b3b-b99c-b494bfcf0c3e\") " pod="openshift-marketplace/community-operators-bmvnx" Dec 09 12:12:35 crc kubenswrapper[4970]: I1209 12:12:35.924659 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/304240af-a6fc-4b3b-b99c-b494bfcf0c3e-catalog-content\") pod \"community-operators-bmvnx\" (UID: \"304240af-a6fc-4b3b-b99c-b494bfcf0c3e\") " pod="openshift-marketplace/community-operators-bmvnx" Dec 09 12:12:35 crc kubenswrapper[4970]: I1209 12:12:35.926678 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/304240af-a6fc-4b3b-b99c-b494bfcf0c3e-utilities\") pod \"community-operators-bmvnx\" (UID: \"304240af-a6fc-4b3b-b99c-b494bfcf0c3e\") " pod="openshift-marketplace/community-operators-bmvnx" Dec 09 12:12:35 crc kubenswrapper[4970]: I1209 12:12:35.945440 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlsp7\" (UniqueName: \"kubernetes.io/projected/304240af-a6fc-4b3b-b99c-b494bfcf0c3e-kube-api-access-vlsp7\") pod \"community-operators-bmvnx\" (UID: \"304240af-a6fc-4b3b-b99c-b494bfcf0c3e\") " pod="openshift-marketplace/community-operators-bmvnx" Dec 09 12:12:36 crc kubenswrapper[4970]: I1209 12:12:36.047402 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bmvnx" Dec 09 12:12:36 crc kubenswrapper[4970]: I1209 12:12:36.260472 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ctsvn"] Dec 09 12:12:36 crc kubenswrapper[4970]: W1209 12:12:36.267599 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ab6bbc4_e295_4f79_b8b1_5151d511d302.slice/crio-54debbc02c07d2b64468da86ad623fa07baf4ffdd69a793148640178ff56d3b3 WatchSource:0}: Error finding container 54debbc02c07d2b64468da86ad623fa07baf4ffdd69a793148640178ff56d3b3: Status 404 returned error can't find the container with id 54debbc02c07d2b64468da86ad623fa07baf4ffdd69a793148640178ff56d3b3 Dec 09 12:12:36 crc kubenswrapper[4970]: I1209 12:12:36.474866 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bmvnx"] Dec 09 12:12:36 crc kubenswrapper[4970]: W1209 12:12:36.481572 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod304240af_a6fc_4b3b_b99c_b494bfcf0c3e.slice/crio-65231b67831748e0a2e39d7f1769c7a758c663fbfbbfaefe0ac2e4a00ea1ca27 WatchSource:0}: Error finding container 65231b67831748e0a2e39d7f1769c7a758c663fbfbbfaefe0ac2e4a00ea1ca27: Status 404 returned error can't find the container with id 65231b67831748e0a2e39d7f1769c7a758c663fbfbbfaefe0ac2e4a00ea1ca27 Dec 09 12:12:36 crc kubenswrapper[4970]: I1209 12:12:36.812949 4970 scope.go:117] "RemoveContainer" containerID="75836bb4126850b7e10d6f5ffb75498c563e4ab55ff00b9f4d070c948f660528" Dec 09 12:12:36 crc kubenswrapper[4970]: I1209 12:12:36.956809 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ctsvn" event={"ID":"0ab6bbc4-e295-4f79-b8b1-5151d511d302","Type":"ContainerStarted","Data":"bbdc27be80790bf191d98488876249f84e7a3f2e092bd5ce02a8391d5f0a7126"} Dec 09 12:12:36 crc kubenswrapper[4970]: I1209 12:12:36.956881 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ctsvn" event={"ID":"0ab6bbc4-e295-4f79-b8b1-5151d511d302","Type":"ContainerStarted","Data":"54debbc02c07d2b64468da86ad623fa07baf4ffdd69a793148640178ff56d3b3"} Dec 09 12:12:36 crc kubenswrapper[4970]: I1209 12:12:36.959407 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bmvnx" event={"ID":"304240af-a6fc-4b3b-b99c-b494bfcf0c3e","Type":"ContainerStarted","Data":"236d1114842d2c8b2413c7738f475a505d2cf8ddb1751827259111d07dc426cd"} Dec 09 12:12:36 crc kubenswrapper[4970]: I1209 12:12:36.959489 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bmvnx" event={"ID":"304240af-a6fc-4b3b-b99c-b494bfcf0c3e","Type":"ContainerStarted","Data":"65231b67831748e0a2e39d7f1769c7a758c663fbfbbfaefe0ac2e4a00ea1ca27"} Dec 09 12:12:37 crc kubenswrapper[4970]: I1209 12:12:37.921645 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wsmtp"] Dec 09 12:12:37 crc kubenswrapper[4970]: I1209 12:12:37.923288 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wsmtp" Dec 09 12:12:37 crc kubenswrapper[4970]: I1209 12:12:37.930186 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 09 12:12:37 crc kubenswrapper[4970]: I1209 12:12:37.934865 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wsmtp"] Dec 09 12:12:37 crc kubenswrapper[4970]: I1209 12:12:37.965794 4970 generic.go:334] "Generic (PLEG): container finished" podID="304240af-a6fc-4b3b-b99c-b494bfcf0c3e" containerID="236d1114842d2c8b2413c7738f475a505d2cf8ddb1751827259111d07dc426cd" exitCode=0 Dec 09 12:12:37 crc kubenswrapper[4970]: I1209 12:12:37.965854 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bmvnx" event={"ID":"304240af-a6fc-4b3b-b99c-b494bfcf0c3e","Type":"ContainerDied","Data":"236d1114842d2c8b2413c7738f475a505d2cf8ddb1751827259111d07dc426cd"} Dec 09 12:12:37 crc kubenswrapper[4970]: I1209 12:12:37.967619 4970 generic.go:334] "Generic (PLEG): container finished" podID="0ab6bbc4-e295-4f79-b8b1-5151d511d302" containerID="bbdc27be80790bf191d98488876249f84e7a3f2e092bd5ce02a8391d5f0a7126" exitCode=0 Dec 09 12:12:37 crc kubenswrapper[4970]: I1209 12:12:37.967905 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ctsvn" event={"ID":"0ab6bbc4-e295-4f79-b8b1-5151d511d302","Type":"ContainerDied","Data":"bbdc27be80790bf191d98488876249f84e7a3f2e092bd5ce02a8391d5f0a7126"} Dec 09 12:12:37 crc kubenswrapper[4970]: I1209 12:12:37.973096 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_6a0407cf-adba-4186-83b2-a32df6b4fc82/prometheus/1.log" Dec 09 12:12:37 crc kubenswrapper[4970]: I1209 12:12:37.977320 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"6a0407cf-adba-4186-83b2-a32df6b4fc82","Type":"ContainerStarted","Data":"802cac010e64e124171bdcd61bd25157515d54eee3c038296913d810d88b8063"} Dec 09 12:12:38 crc kubenswrapper[4970]: I1209 12:12:38.040541 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=23.59206356 podStartE2EDuration="31.040519762s" podCreationTimestamp="2025-12-09 12:12:07 +0000 UTC" firstStartedPulling="2025-12-09 12:12:08.324019054 +0000 UTC m=+340.884500115" lastFinishedPulling="2025-12-09 12:12:15.772475256 +0000 UTC m=+348.332956317" observedRunningTime="2025-12-09 12:12:38.035162442 +0000 UTC m=+370.595643493" watchObservedRunningTime="2025-12-09 12:12:38.040519762 +0000 UTC m=+370.601000833" Dec 09 12:12:38 crc kubenswrapper[4970]: I1209 12:12:38.056291 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb741795-9a51-4f64-9519-b857c46d3c1d-catalog-content\") pod \"certified-operators-wsmtp\" (UID: \"eb741795-9a51-4f64-9519-b857c46d3c1d\") " pod="openshift-marketplace/certified-operators-wsmtp" Dec 09 12:12:38 crc kubenswrapper[4970]: I1209 12:12:38.056411 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzp5w\" (UniqueName: \"kubernetes.io/projected/eb741795-9a51-4f64-9519-b857c46d3c1d-kube-api-access-vzp5w\") pod \"certified-operators-wsmtp\" (UID: \"eb741795-9a51-4f64-9519-b857c46d3c1d\") " 
pod="openshift-marketplace/certified-operators-wsmtp" Dec 09 12:12:38 crc kubenswrapper[4970]: I1209 12:12:38.056473 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb741795-9a51-4f64-9519-b857c46d3c1d-utilities\") pod \"certified-operators-wsmtp\" (UID: \"eb741795-9a51-4f64-9519-b857c46d3c1d\") " pod="openshift-marketplace/certified-operators-wsmtp" Dec 09 12:12:38 crc kubenswrapper[4970]: I1209 12:12:38.123667 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xp4tp"] Dec 09 12:12:38 crc kubenswrapper[4970]: I1209 12:12:38.125049 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xp4tp" Dec 09 12:12:38 crc kubenswrapper[4970]: I1209 12:12:38.128992 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 09 12:12:38 crc kubenswrapper[4970]: I1209 12:12:38.139958 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xp4tp"] Dec 09 12:12:38 crc kubenswrapper[4970]: I1209 12:12:38.158293 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzp5w\" (UniqueName: \"kubernetes.io/projected/eb741795-9a51-4f64-9519-b857c46d3c1d-kube-api-access-vzp5w\") pod \"certified-operators-wsmtp\" (UID: \"eb741795-9a51-4f64-9519-b857c46d3c1d\") " pod="openshift-marketplace/certified-operators-wsmtp" Dec 09 12:12:38 crc kubenswrapper[4970]: I1209 12:12:38.158357 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb741795-9a51-4f64-9519-b857c46d3c1d-utilities\") pod \"certified-operators-wsmtp\" (UID: \"eb741795-9a51-4f64-9519-b857c46d3c1d\") " pod="openshift-marketplace/certified-operators-wsmtp" Dec 09 12:12:38 crc kubenswrapper[4970]: I1209 12:12:38.158443 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb741795-9a51-4f64-9519-b857c46d3c1d-catalog-content\") pod \"certified-operators-wsmtp\" (UID: \"eb741795-9a51-4f64-9519-b857c46d3c1d\") " pod="openshift-marketplace/certified-operators-wsmtp" Dec 09 12:12:38 crc kubenswrapper[4970]: I1209 12:12:38.158891 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb741795-9a51-4f64-9519-b857c46d3c1d-catalog-content\") pod \"certified-operators-wsmtp\" (UID: \"eb741795-9a51-4f64-9519-b857c46d3c1d\") " pod="openshift-marketplace/certified-operators-wsmtp" Dec 09 12:12:38 crc kubenswrapper[4970]: I1209 12:12:38.159228 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb741795-9a51-4f64-9519-b857c46d3c1d-utilities\") pod \"certified-operators-wsmtp\" (UID: \"eb741795-9a51-4f64-9519-b857c46d3c1d\") " pod="openshift-marketplace/certified-operators-wsmtp" Dec 09 12:12:38 crc kubenswrapper[4970]: I1209 12:12:38.175970 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzp5w\" (UniqueName: \"kubernetes.io/projected/eb741795-9a51-4f64-9519-b857c46d3c1d-kube-api-access-vzp5w\") pod \"certified-operators-wsmtp\" (UID: \"eb741795-9a51-4f64-9519-b857c46d3c1d\") " pod="openshift-marketplace/certified-operators-wsmtp" Dec 09 
12:12:38 crc kubenswrapper[4970]: I1209 12:12:38.240806 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wsmtp" Dec 09 12:12:38 crc kubenswrapper[4970]: I1209 12:12:38.259973 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79ace905-6970-463b-b79a-1412d6a23635-catalog-content\") pod \"redhat-marketplace-xp4tp\" (UID: \"79ace905-6970-463b-b79a-1412d6a23635\") " pod="openshift-marketplace/redhat-marketplace-xp4tp" Dec 09 12:12:38 crc kubenswrapper[4970]: I1209 12:12:38.260120 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79ace905-6970-463b-b79a-1412d6a23635-utilities\") pod \"redhat-marketplace-xp4tp\" (UID: \"79ace905-6970-463b-b79a-1412d6a23635\") " pod="openshift-marketplace/redhat-marketplace-xp4tp" Dec 09 12:12:38 crc kubenswrapper[4970]: I1209 12:12:38.260207 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fmkb\" (UniqueName: \"kubernetes.io/projected/79ace905-6970-463b-b79a-1412d6a23635-kube-api-access-4fmkb\") pod \"redhat-marketplace-xp4tp\" (UID: \"79ace905-6970-463b-b79a-1412d6a23635\") " pod="openshift-marketplace/redhat-marketplace-xp4tp" Dec 09 12:12:38 crc kubenswrapper[4970]: I1209 12:12:38.361828 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fmkb\" (UniqueName: \"kubernetes.io/projected/79ace905-6970-463b-b79a-1412d6a23635-kube-api-access-4fmkb\") pod \"redhat-marketplace-xp4tp\" (UID: \"79ace905-6970-463b-b79a-1412d6a23635\") " pod="openshift-marketplace/redhat-marketplace-xp4tp" Dec 09 12:12:38 crc kubenswrapper[4970]: I1209 12:12:38.361935 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79ace905-6970-463b-b79a-1412d6a23635-catalog-content\") pod \"redhat-marketplace-xp4tp\" (UID: \"79ace905-6970-463b-b79a-1412d6a23635\") " pod="openshift-marketplace/redhat-marketplace-xp4tp" Dec 09 12:12:38 crc kubenswrapper[4970]: I1209 12:12:38.362035 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79ace905-6970-463b-b79a-1412d6a23635-utilities\") pod \"redhat-marketplace-xp4tp\" (UID: \"79ace905-6970-463b-b79a-1412d6a23635\") " pod="openshift-marketplace/redhat-marketplace-xp4tp" Dec 09 12:12:38 crc kubenswrapper[4970]: I1209 12:12:38.363068 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79ace905-6970-463b-b79a-1412d6a23635-catalog-content\") pod \"redhat-marketplace-xp4tp\" (UID: \"79ace905-6970-463b-b79a-1412d6a23635\") " pod="openshift-marketplace/redhat-marketplace-xp4tp" Dec 09 12:12:38 crc kubenswrapper[4970]: I1209 12:12:38.363108 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79ace905-6970-463b-b79a-1412d6a23635-utilities\") pod \"redhat-marketplace-xp4tp\" (UID: \"79ace905-6970-463b-b79a-1412d6a23635\") " pod="openshift-marketplace/redhat-marketplace-xp4tp" Dec 09 12:12:38 crc kubenswrapper[4970]: I1209 12:12:38.503323 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fmkb\" (UniqueName: 
\"kubernetes.io/projected/79ace905-6970-463b-b79a-1412d6a23635-kube-api-access-4fmkb\") pod \"redhat-marketplace-xp4tp\" (UID: \"79ace905-6970-463b-b79a-1412d6a23635\") " pod="openshift-marketplace/redhat-marketplace-xp4tp" Dec 09 12:12:38 crc kubenswrapper[4970]: I1209 12:12:38.747772 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xp4tp" Dec 09 12:12:38 crc kubenswrapper[4970]: I1209 12:12:38.892682 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wsmtp"] Dec 09 12:12:38 crc kubenswrapper[4970]: I1209 12:12:38.991693 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wsmtp" event={"ID":"eb741795-9a51-4f64-9519-b857c46d3c1d","Type":"ContainerStarted","Data":"f1bc8f42714f951d4c527f90a5d217d6791358344651ec82fe8393c6204a5187"} Dec 09 12:12:39 crc kubenswrapper[4970]: I1209 12:12:39.158047 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xp4tp"] Dec 09 12:12:39 crc kubenswrapper[4970]: W1209 12:12:39.190023 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79ace905_6970_463b_b79a_1412d6a23635.slice/crio-815067856cd5d7a958cd2b5dd2964ca941e63a94774879bb0e6abfb6bbb20cdc WatchSource:0}: Error finding container 815067856cd5d7a958cd2b5dd2964ca941e63a94774879bb0e6abfb6bbb20cdc: Status 404 returned error can't find the container with id 815067856cd5d7a958cd2b5dd2964ca941e63a94774879bb0e6abfb6bbb20cdc Dec 09 12:12:40 crc kubenswrapper[4970]: I1209 12:12:40.000117 4970 generic.go:334] "Generic (PLEG): container finished" podID="79ace905-6970-463b-b79a-1412d6a23635" containerID="45f49b39ea154c471eb22541dbedacc60084dabc903f9f92a500e737c21ec663" exitCode=0 Dec 09 12:12:40 crc kubenswrapper[4970]: I1209 12:12:40.000183 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xp4tp" event={"ID":"79ace905-6970-463b-b79a-1412d6a23635","Type":"ContainerDied","Data":"45f49b39ea154c471eb22541dbedacc60084dabc903f9f92a500e737c21ec663"} Dec 09 12:12:40 crc kubenswrapper[4970]: I1209 12:12:40.000239 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xp4tp" event={"ID":"79ace905-6970-463b-b79a-1412d6a23635","Type":"ContainerStarted","Data":"815067856cd5d7a958cd2b5dd2964ca941e63a94774879bb0e6abfb6bbb20cdc"} Dec 09 12:12:40 crc kubenswrapper[4970]: I1209 12:12:40.003485 4970 generic.go:334] "Generic (PLEG): container finished" podID="304240af-a6fc-4b3b-b99c-b494bfcf0c3e" containerID="716b774b0ea9b9497f4b2246694bbcc0955f7f5725a490d759fefae4fb1976dd" exitCode=0 Dec 09 12:12:40 crc kubenswrapper[4970]: I1209 12:12:40.003593 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bmvnx" event={"ID":"304240af-a6fc-4b3b-b99c-b494bfcf0c3e","Type":"ContainerDied","Data":"716b774b0ea9b9497f4b2246694bbcc0955f7f5725a490d759fefae4fb1976dd"} Dec 09 12:12:40 crc kubenswrapper[4970]: I1209 12:12:40.005957 4970 generic.go:334] "Generic (PLEG): container finished" podID="0ab6bbc4-e295-4f79-b8b1-5151d511d302" containerID="c2e2c7599a5a2db423fd6ad7c8aac977a0bc76acd7bccd621cfd11890a57af96" exitCode=0 Dec 09 12:12:40 crc kubenswrapper[4970]: I1209 12:12:40.006013 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ctsvn" 
event={"ID":"0ab6bbc4-e295-4f79-b8b1-5151d511d302","Type":"ContainerDied","Data":"c2e2c7599a5a2db423fd6ad7c8aac977a0bc76acd7bccd621cfd11890a57af96"} Dec 09 12:12:40 crc kubenswrapper[4970]: I1209 12:12:40.008768 4970 generic.go:334] "Generic (PLEG): container finished" podID="eb741795-9a51-4f64-9519-b857c46d3c1d" containerID="b39fdabdca4326243b59d6b38e25d35c619e3cfbc6a19125d7c196ff9d4fbd68" exitCode=0 Dec 09 12:12:40 crc kubenswrapper[4970]: I1209 12:12:40.008823 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wsmtp" event={"ID":"eb741795-9a51-4f64-9519-b857c46d3c1d","Type":"ContainerDied","Data":"b39fdabdca4326243b59d6b38e25d35c619e3cfbc6a19125d7c196ff9d4fbd68"} Dec 09 12:12:42 crc kubenswrapper[4970]: I1209 12:12:42.728927 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Dec 09 12:12:43 crc kubenswrapper[4970]: I1209 12:12:43.027384 4970 generic.go:334] "Generic (PLEG): container finished" podID="79ace905-6970-463b-b79a-1412d6a23635" containerID="0937a315b3ee46f381ad0a64288ea79aaa7bd3762d5e42f897e44e806a79cf7e" exitCode=0 Dec 09 12:12:43 crc kubenswrapper[4970]: I1209 12:12:43.027471 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xp4tp" event={"ID":"79ace905-6970-463b-b79a-1412d6a23635","Type":"ContainerDied","Data":"0937a315b3ee46f381ad0a64288ea79aaa7bd3762d5e42f897e44e806a79cf7e"} Dec 09 12:12:43 crc kubenswrapper[4970]: I1209 12:12:43.032085 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ctsvn" event={"ID":"0ab6bbc4-e295-4f79-b8b1-5151d511d302","Type":"ContainerStarted","Data":"29f229ab4e6af488406c03b33968662986759f2b77dc29446d3acc4b0fb4cb4b"} Dec 09 12:12:43 crc kubenswrapper[4970]: I1209 12:12:43.046518 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wsmtp" event={"ID":"eb741795-9a51-4f64-9519-b857c46d3c1d","Type":"ContainerStarted","Data":"621ad669847615f5a7402363a65f85a8aa733674b66921244d6869db9e9bb01b"} Dec 09 12:12:43 crc kubenswrapper[4970]: I1209 12:12:43.068363 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ctsvn" podStartSLOduration=3.73712623 podStartE2EDuration="8.068341816s" podCreationTimestamp="2025-12-09 12:12:35 +0000 UTC" firstStartedPulling="2025-12-09 12:12:37.97261393 +0000 UTC m=+370.533094981" lastFinishedPulling="2025-12-09 12:12:42.303829516 +0000 UTC m=+374.864310567" observedRunningTime="2025-12-09 12:12:43.068268454 +0000 UTC m=+375.628749525" watchObservedRunningTime="2025-12-09 12:12:43.068341816 +0000 UTC m=+375.628822867" Dec 09 12:12:44 crc kubenswrapper[4970]: I1209 12:12:44.054765 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xp4tp" event={"ID":"79ace905-6970-463b-b79a-1412d6a23635","Type":"ContainerStarted","Data":"4a18d5509edfebbe6cc300d6547549e27e5badbc32bbdac35640b12757561cf8"} Dec 09 12:12:44 crc kubenswrapper[4970]: I1209 12:12:44.056996 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bmvnx" event={"ID":"304240af-a6fc-4b3b-b99c-b494bfcf0c3e","Type":"ContainerStarted","Data":"78dcc4d5e378d865cf51b8d938774068ab32ad19f83fea32d0094efe3abcaa3e"} Dec 09 12:12:44 crc kubenswrapper[4970]: I1209 12:12:44.058869 4970 generic.go:334] "Generic (PLEG): container finished" 
podID="eb741795-9a51-4f64-9519-b857c46d3c1d" containerID="621ad669847615f5a7402363a65f85a8aa733674b66921244d6869db9e9bb01b" exitCode=0 Dec 09 12:12:44 crc kubenswrapper[4970]: I1209 12:12:44.058903 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wsmtp" event={"ID":"eb741795-9a51-4f64-9519-b857c46d3c1d","Type":"ContainerDied","Data":"621ad669847615f5a7402363a65f85a8aa733674b66921244d6869db9e9bb01b"} Dec 09 12:12:44 crc kubenswrapper[4970]: I1209 12:12:44.106761 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xp4tp" podStartSLOduration=2.630957097 podStartE2EDuration="6.106744897s" podCreationTimestamp="2025-12-09 12:12:38 +0000 UTC" firstStartedPulling="2025-12-09 12:12:40.001639323 +0000 UTC m=+372.562120374" lastFinishedPulling="2025-12-09 12:12:43.477427123 +0000 UTC m=+376.037908174" observedRunningTime="2025-12-09 12:12:44.081403107 +0000 UTC m=+376.641884158" watchObservedRunningTime="2025-12-09 12:12:44.106744897 +0000 UTC m=+376.667225948" Dec 09 12:12:44 crc kubenswrapper[4970]: I1209 12:12:44.122044 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bmvnx" podStartSLOduration=4.046493505 podStartE2EDuration="9.122026855s" podCreationTimestamp="2025-12-09 12:12:35 +0000 UTC" firstStartedPulling="2025-12-09 12:12:37.969275197 +0000 UTC m=+370.529756248" lastFinishedPulling="2025-12-09 12:12:43.044808547 +0000 UTC m=+375.605289598" observedRunningTime="2025-12-09 12:12:44.118233119 +0000 UTC m=+376.678714160" watchObservedRunningTime="2025-12-09 12:12:44.122026855 +0000 UTC m=+376.682507906" Dec 09 12:12:45 crc kubenswrapper[4970]: I1209 12:12:45.067546 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wsmtp" event={"ID":"eb741795-9a51-4f64-9519-b857c46d3c1d","Type":"ContainerStarted","Data":"3229f7bf2767eae958a492a3edd8bf8824a8881c5c232716195699725b04c5c4"} Dec 09 12:12:45 crc kubenswrapper[4970]: I1209 12:12:45.856375 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ctsvn" Dec 09 12:12:45 crc kubenswrapper[4970]: I1209 12:12:45.856960 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ctsvn" Dec 09 12:12:46 crc kubenswrapper[4970]: I1209 12:12:46.011522 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:12:46 crc kubenswrapper[4970]: I1209 12:12:46.011582 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:12:46 crc kubenswrapper[4970]: I1209 12:12:46.048420 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bmvnx" Dec 09 12:12:46 crc kubenswrapper[4970]: I1209 12:12:46.048585 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bmvnx" Dec 09 
12:12:46 crc kubenswrapper[4970]: I1209 12:12:46.091092 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bmvnx" Dec 09 12:12:46 crc kubenswrapper[4970]: I1209 12:12:46.096902 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wsmtp" podStartSLOduration=4.577470235 podStartE2EDuration="9.09688831s" podCreationTimestamp="2025-12-09 12:12:37 +0000 UTC" firstStartedPulling="2025-12-09 12:12:40.010945034 +0000 UTC m=+372.571426075" lastFinishedPulling="2025-12-09 12:12:44.530363099 +0000 UTC m=+377.090844150" observedRunningTime="2025-12-09 12:12:46.0922289 +0000 UTC m=+378.652709951" watchObservedRunningTime="2025-12-09 12:12:46.09688831 +0000 UTC m=+378.657369361" Dec 09 12:12:46 crc kubenswrapper[4970]: I1209 12:12:46.713841 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-64c6668cd-k7dhn" Dec 09 12:12:46 crc kubenswrapper[4970]: I1209 12:12:46.717518 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-64c6668cd-k7dhn" Dec 09 12:12:46 crc kubenswrapper[4970]: I1209 12:12:46.902915 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ctsvn" podUID="0ab6bbc4-e295-4f79-b8b1-5151d511d302" containerName="registry-server" probeResult="failure" output=< Dec 09 12:12:46 crc kubenswrapper[4970]: timeout: failed to connect service ":50051" within 1s Dec 09 12:12:46 crc kubenswrapper[4970]: > Dec 09 12:12:48 crc kubenswrapper[4970]: I1209 12:12:48.143731 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bmvnx" Dec 09 12:12:48 crc kubenswrapper[4970]: I1209 12:12:48.241647 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wsmtp" Dec 09 12:12:48 crc kubenswrapper[4970]: I1209 12:12:48.241708 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wsmtp" Dec 09 12:12:48 crc kubenswrapper[4970]: I1209 12:12:48.283540 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wsmtp" Dec 09 12:12:48 crc kubenswrapper[4970]: I1209 12:12:48.749016 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xp4tp" Dec 09 12:12:48 crc kubenswrapper[4970]: I1209 12:12:48.749370 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xp4tp" Dec 09 12:12:48 crc kubenswrapper[4970]: I1209 12:12:48.784808 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xp4tp" Dec 09 12:12:49 crc kubenswrapper[4970]: I1209 12:12:49.128860 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wsmtp" Dec 09 12:12:49 crc kubenswrapper[4970]: I1209 12:12:49.131720 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xp4tp" Dec 09 12:12:53 crc kubenswrapper[4970]: I1209 12:12:53.952162 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6b565b4bcd-p6jfs"] Dec 09 12:12:53 crc 
kubenswrapper[4970]: I1209 12:12:53.953052 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6b565b4bcd-p6jfs" podUID="a11a45bf-b936-463c-ad9c-71bcac4e4532" containerName="controller-manager" containerID="cri-o://bfac030b4128f31330f1dec4c67fd068b2afd0129d8dfde88cedb6142c065f00" gracePeriod=30 Dec 09 12:12:55 crc kubenswrapper[4970]: I1209 12:12:55.068799 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-wqfw9" podUID="2537d42d-31de-48b8-ae7a-afaba0d36376" containerName="console" containerID="cri-o://ff9de0745f4465774e25638339a118c3b4be7fce5491f46e415550104ae05898" gracePeriod=15 Dec 09 12:12:55 crc kubenswrapper[4970]: I1209 12:12:55.894540 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ctsvn" Dec 09 12:12:55 crc kubenswrapper[4970]: I1209 12:12:55.941813 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ctsvn" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.135768 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-wqfw9_2537d42d-31de-48b8-ae7a-afaba0d36376/console/0.log" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.135810 4970 generic.go:334] "Generic (PLEG): container finished" podID="2537d42d-31de-48b8-ae7a-afaba0d36376" containerID="ff9de0745f4465774e25638339a118c3b4be7fce5491f46e415550104ae05898" exitCode=2 Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.135868 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wqfw9" event={"ID":"2537d42d-31de-48b8-ae7a-afaba0d36376","Type":"ContainerDied","Data":"ff9de0745f4465774e25638339a118c3b4be7fce5491f46e415550104ae05898"} Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.137925 4970 generic.go:334] "Generic (PLEG): container finished" podID="a11a45bf-b936-463c-ad9c-71bcac4e4532" containerID="bfac030b4128f31330f1dec4c67fd068b2afd0129d8dfde88cedb6142c065f00" exitCode=0 Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.137983 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b565b4bcd-p6jfs" event={"ID":"a11a45bf-b936-463c-ad9c-71bcac4e4532","Type":"ContainerDied","Data":"bfac030b4128f31330f1dec4c67fd068b2afd0129d8dfde88cedb6142c065f00"} Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.198493 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" podUID="051b0190-5f8c-42e4-af6c-8a5dd401ef52" containerName="registry" containerID="cri-o://f2c499dd3e6e848e67469773486dfac234fc9267d4aee51506c04e5f807e0272" gracePeriod=30 Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.236681 4970 patch_prober.go:28] interesting pod/console-f9d7485db-wqfw9 container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/health\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.236731 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-f9d7485db-wqfw9" podUID="2537d42d-31de-48b8-ae7a-afaba0d36376" containerName="console" probeResult="failure" output="Get \"https://10.217.0.37:8443/health\": dial tcp 10.217.0.37:8443: connect: connection 
refused" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.382298 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b565b4bcd-p6jfs" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.430718 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-68b64b9b9-sff74"] Dec 09 12:12:56 crc kubenswrapper[4970]: E1209 12:12:56.431498 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a11a45bf-b936-463c-ad9c-71bcac4e4532" containerName="controller-manager" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.431555 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="a11a45bf-b936-463c-ad9c-71bcac4e4532" containerName="controller-manager" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.431785 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="a11a45bf-b936-463c-ad9c-71bcac4e4532" containerName="controller-manager" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.432458 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-68b64b9b9-sff74" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.435926 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-68b64b9b9-sff74"] Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.443448 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a11a45bf-b936-463c-ad9c-71bcac4e4532-client-ca\") pod \"a11a45bf-b936-463c-ad9c-71bcac4e4532\" (UID: \"a11a45bf-b936-463c-ad9c-71bcac4e4532\") " Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.443488 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a11a45bf-b936-463c-ad9c-71bcac4e4532-serving-cert\") pod \"a11a45bf-b936-463c-ad9c-71bcac4e4532\" (UID: \"a11a45bf-b936-463c-ad9c-71bcac4e4532\") " Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.443557 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsf44\" (UniqueName: \"kubernetes.io/projected/a11a45bf-b936-463c-ad9c-71bcac4e4532-kube-api-access-zsf44\") pod \"a11a45bf-b936-463c-ad9c-71bcac4e4532\" (UID: \"a11a45bf-b936-463c-ad9c-71bcac4e4532\") " Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.443628 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a11a45bf-b936-463c-ad9c-71bcac4e4532-config\") pod \"a11a45bf-b936-463c-ad9c-71bcac4e4532\" (UID: \"a11a45bf-b936-463c-ad9c-71bcac4e4532\") " Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.443652 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a11a45bf-b936-463c-ad9c-71bcac4e4532-proxy-ca-bundles\") pod \"a11a45bf-b936-463c-ad9c-71bcac4e4532\" (UID: \"a11a45bf-b936-463c-ad9c-71bcac4e4532\") " Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.444657 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a11a45bf-b936-463c-ad9c-71bcac4e4532-config" (OuterVolumeSpecName: "config") pod "a11a45bf-b936-463c-ad9c-71bcac4e4532" (UID: "a11a45bf-b936-463c-ad9c-71bcac4e4532"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.444935 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a11a45bf-b936-463c-ad9c-71bcac4e4532-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a11a45bf-b936-463c-ad9c-71bcac4e4532" (UID: "a11a45bf-b936-463c-ad9c-71bcac4e4532"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.445280 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a11a45bf-b936-463c-ad9c-71bcac4e4532-client-ca" (OuterVolumeSpecName: "client-ca") pod "a11a45bf-b936-463c-ad9c-71bcac4e4532" (UID: "a11a45bf-b936-463c-ad9c-71bcac4e4532"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.455727 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a11a45bf-b936-463c-ad9c-71bcac4e4532-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a11a45bf-b936-463c-ad9c-71bcac4e4532" (UID: "a11a45bf-b936-463c-ad9c-71bcac4e4532"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.463476 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a11a45bf-b936-463c-ad9c-71bcac4e4532-kube-api-access-zsf44" (OuterVolumeSpecName: "kube-api-access-zsf44") pod "a11a45bf-b936-463c-ad9c-71bcac4e4532" (UID: "a11a45bf-b936-463c-ad9c-71bcac4e4532"). InnerVolumeSpecName "kube-api-access-zsf44". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.545885 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/351f3c31-3804-4c08-932e-7284a28d6397-serving-cert\") pod \"controller-manager-68b64b9b9-sff74\" (UID: \"351f3c31-3804-4c08-932e-7284a28d6397\") " pod="openshift-controller-manager/controller-manager-68b64b9b9-sff74" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.545991 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/351f3c31-3804-4c08-932e-7284a28d6397-proxy-ca-bundles\") pod \"controller-manager-68b64b9b9-sff74\" (UID: \"351f3c31-3804-4c08-932e-7284a28d6397\") " pod="openshift-controller-manager/controller-manager-68b64b9b9-sff74" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.546044 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/351f3c31-3804-4c08-932e-7284a28d6397-config\") pod \"controller-manager-68b64b9b9-sff74\" (UID: \"351f3c31-3804-4c08-932e-7284a28d6397\") " pod="openshift-controller-manager/controller-manager-68b64b9b9-sff74" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.546101 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwth9\" (UniqueName: \"kubernetes.io/projected/351f3c31-3804-4c08-932e-7284a28d6397-kube-api-access-rwth9\") pod \"controller-manager-68b64b9b9-sff74\" (UID: \"351f3c31-3804-4c08-932e-7284a28d6397\") " 
pod="openshift-controller-manager/controller-manager-68b64b9b9-sff74" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.546144 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/351f3c31-3804-4c08-932e-7284a28d6397-client-ca\") pod \"controller-manager-68b64b9b9-sff74\" (UID: \"351f3c31-3804-4c08-932e-7284a28d6397\") " pod="openshift-controller-manager/controller-manager-68b64b9b9-sff74" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.546239 4970 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a11a45bf-b936-463c-ad9c-71bcac4e4532-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.546281 4970 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a11a45bf-b936-463c-ad9c-71bcac4e4532-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.546302 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsf44\" (UniqueName: \"kubernetes.io/projected/a11a45bf-b936-463c-ad9c-71bcac4e4532-kube-api-access-zsf44\") on node \"crc\" DevicePath \"\"" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.546314 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a11a45bf-b936-463c-ad9c-71bcac4e4532-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.546325 4970 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a11a45bf-b936-463c-ad9c-71bcac4e4532-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.647234 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/351f3c31-3804-4c08-932e-7284a28d6397-proxy-ca-bundles\") pod \"controller-manager-68b64b9b9-sff74\" (UID: \"351f3c31-3804-4c08-932e-7284a28d6397\") " pod="openshift-controller-manager/controller-manager-68b64b9b9-sff74" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.647304 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/351f3c31-3804-4c08-932e-7284a28d6397-config\") pod \"controller-manager-68b64b9b9-sff74\" (UID: \"351f3c31-3804-4c08-932e-7284a28d6397\") " pod="openshift-controller-manager/controller-manager-68b64b9b9-sff74" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.647345 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwth9\" (UniqueName: \"kubernetes.io/projected/351f3c31-3804-4c08-932e-7284a28d6397-kube-api-access-rwth9\") pod \"controller-manager-68b64b9b9-sff74\" (UID: \"351f3c31-3804-4c08-932e-7284a28d6397\") " pod="openshift-controller-manager/controller-manager-68b64b9b9-sff74" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.647371 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/351f3c31-3804-4c08-932e-7284a28d6397-client-ca\") pod \"controller-manager-68b64b9b9-sff74\" (UID: \"351f3c31-3804-4c08-932e-7284a28d6397\") " pod="openshift-controller-manager/controller-manager-68b64b9b9-sff74" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 
12:12:56.647408 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/351f3c31-3804-4c08-932e-7284a28d6397-serving-cert\") pod \"controller-manager-68b64b9b9-sff74\" (UID: \"351f3c31-3804-4c08-932e-7284a28d6397\") " pod="openshift-controller-manager/controller-manager-68b64b9b9-sff74" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.651397 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/351f3c31-3804-4c08-932e-7284a28d6397-proxy-ca-bundles\") pod \"controller-manager-68b64b9b9-sff74\" (UID: \"351f3c31-3804-4c08-932e-7284a28d6397\") " pod="openshift-controller-manager/controller-manager-68b64b9b9-sff74" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.651392 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/351f3c31-3804-4c08-932e-7284a28d6397-config\") pod \"controller-manager-68b64b9b9-sff74\" (UID: \"351f3c31-3804-4c08-932e-7284a28d6397\") " pod="openshift-controller-manager/controller-manager-68b64b9b9-sff74" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.652176 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/351f3c31-3804-4c08-932e-7284a28d6397-client-ca\") pod \"controller-manager-68b64b9b9-sff74\" (UID: \"351f3c31-3804-4c08-932e-7284a28d6397\") " pod="openshift-controller-manager/controller-manager-68b64b9b9-sff74" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.652191 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/351f3c31-3804-4c08-932e-7284a28d6397-serving-cert\") pod \"controller-manager-68b64b9b9-sff74\" (UID: \"351f3c31-3804-4c08-932e-7284a28d6397\") " pod="openshift-controller-manager/controller-manager-68b64b9b9-sff74" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.677106 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwth9\" (UniqueName: \"kubernetes.io/projected/351f3c31-3804-4c08-932e-7284a28d6397-kube-api-access-rwth9\") pod \"controller-manager-68b64b9b9-sff74\" (UID: \"351f3c31-3804-4c08-932e-7284a28d6397\") " pod="openshift-controller-manager/controller-manager-68b64b9b9-sff74" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.731340 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-wqfw9_2537d42d-31de-48b8-ae7a-afaba0d36376/console/0.log" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.731419 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-wqfw9" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.751679 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-68b64b9b9-sff74" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.753450 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.853695 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2537d42d-31de-48b8-ae7a-afaba0d36376-trusted-ca-bundle\") pod \"2537d42d-31de-48b8-ae7a-afaba0d36376\" (UID: \"2537d42d-31de-48b8-ae7a-afaba0d36376\") " Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.853739 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2537d42d-31de-48b8-ae7a-afaba0d36376-console-oauth-config\") pod \"2537d42d-31de-48b8-ae7a-afaba0d36376\" (UID: \"2537d42d-31de-48b8-ae7a-afaba0d36376\") " Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.853776 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/051b0190-5f8c-42e4-af6c-8a5dd401ef52-ca-trust-extracted\") pod \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.853907 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.853948 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/051b0190-5f8c-42e4-af6c-8a5dd401ef52-bound-sa-token\") pod \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.853979 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/051b0190-5f8c-42e4-af6c-8a5dd401ef52-registry-certificates\") pod \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.854048 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2537d42d-31de-48b8-ae7a-afaba0d36376-console-serving-cert\") pod \"2537d42d-31de-48b8-ae7a-afaba0d36376\" (UID: \"2537d42d-31de-48b8-ae7a-afaba0d36376\") " Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.854074 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2537d42d-31de-48b8-ae7a-afaba0d36376-oauth-serving-cert\") pod \"2537d42d-31de-48b8-ae7a-afaba0d36376\" (UID: \"2537d42d-31de-48b8-ae7a-afaba0d36376\") " Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.854112 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/051b0190-5f8c-42e4-af6c-8a5dd401ef52-trusted-ca\") pod \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.854144 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68hvb\" (UniqueName: 
\"kubernetes.io/projected/051b0190-5f8c-42e4-af6c-8a5dd401ef52-kube-api-access-68hvb\") pod \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.854201 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2537d42d-31de-48b8-ae7a-afaba0d36376-service-ca\") pod \"2537d42d-31de-48b8-ae7a-afaba0d36376\" (UID: \"2537d42d-31de-48b8-ae7a-afaba0d36376\") " Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.854245 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/051b0190-5f8c-42e4-af6c-8a5dd401ef52-registry-tls\") pod \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.854301 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/051b0190-5f8c-42e4-af6c-8a5dd401ef52-installation-pull-secrets\") pod \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\" (UID: \"051b0190-5f8c-42e4-af6c-8a5dd401ef52\") " Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.854323 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dt5kb\" (UniqueName: \"kubernetes.io/projected/2537d42d-31de-48b8-ae7a-afaba0d36376-kube-api-access-dt5kb\") pod \"2537d42d-31de-48b8-ae7a-afaba0d36376\" (UID: \"2537d42d-31de-48b8-ae7a-afaba0d36376\") " Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.854347 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2537d42d-31de-48b8-ae7a-afaba0d36376-console-config\") pod \"2537d42d-31de-48b8-ae7a-afaba0d36376\" (UID: \"2537d42d-31de-48b8-ae7a-afaba0d36376\") " Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.856047 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2537d42d-31de-48b8-ae7a-afaba0d36376-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "2537d42d-31de-48b8-ae7a-afaba0d36376" (UID: "2537d42d-31de-48b8-ae7a-afaba0d36376"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.859345 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2537d42d-31de-48b8-ae7a-afaba0d36376-service-ca" (OuterVolumeSpecName: "service-ca") pod "2537d42d-31de-48b8-ae7a-afaba0d36376" (UID: "2537d42d-31de-48b8-ae7a-afaba0d36376"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.859823 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/051b0190-5f8c-42e4-af6c-8a5dd401ef52-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "051b0190-5f8c-42e4-af6c-8a5dd401ef52" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.860159 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2537d42d-31de-48b8-ae7a-afaba0d36376-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "2537d42d-31de-48b8-ae7a-afaba0d36376" (UID: "2537d42d-31de-48b8-ae7a-afaba0d36376"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.860674 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/051b0190-5f8c-42e4-af6c-8a5dd401ef52-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "051b0190-5f8c-42e4-af6c-8a5dd401ef52" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.861302 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/051b0190-5f8c-42e4-af6c-8a5dd401ef52-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "051b0190-5f8c-42e4-af6c-8a5dd401ef52" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.863825 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2537d42d-31de-48b8-ae7a-afaba0d36376-kube-api-access-dt5kb" (OuterVolumeSpecName: "kube-api-access-dt5kb") pod "2537d42d-31de-48b8-ae7a-afaba0d36376" (UID: "2537d42d-31de-48b8-ae7a-afaba0d36376"). InnerVolumeSpecName "kube-api-access-dt5kb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.866984 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/051b0190-5f8c-42e4-af6c-8a5dd401ef52-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "051b0190-5f8c-42e4-af6c-8a5dd401ef52" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.869845 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2537d42d-31de-48b8-ae7a-afaba0d36376-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "2537d42d-31de-48b8-ae7a-afaba0d36376" (UID: "2537d42d-31de-48b8-ae7a-afaba0d36376"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.870681 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2537d42d-31de-48b8-ae7a-afaba0d36376-console-config" (OuterVolumeSpecName: "console-config") pod "2537d42d-31de-48b8-ae7a-afaba0d36376" (UID: "2537d42d-31de-48b8-ae7a-afaba0d36376"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.871085 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2537d42d-31de-48b8-ae7a-afaba0d36376-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "2537d42d-31de-48b8-ae7a-afaba0d36376" (UID: "2537d42d-31de-48b8-ae7a-afaba0d36376"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.871202 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/051b0190-5f8c-42e4-af6c-8a5dd401ef52-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "051b0190-5f8c-42e4-af6c-8a5dd401ef52" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.871824 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "051b0190-5f8c-42e4-af6c-8a5dd401ef52" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.875667 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/051b0190-5f8c-42e4-af6c-8a5dd401ef52-kube-api-access-68hvb" (OuterVolumeSpecName: "kube-api-access-68hvb") pod "051b0190-5f8c-42e4-af6c-8a5dd401ef52" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52"). InnerVolumeSpecName "kube-api-access-68hvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.887171 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/051b0190-5f8c-42e4-af6c-8a5dd401ef52-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "051b0190-5f8c-42e4-af6c-8a5dd401ef52" (UID: "051b0190-5f8c-42e4-af6c-8a5dd401ef52"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.956013 4970 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2537d42d-31de-48b8-ae7a-afaba0d36376-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.956039 4970 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/051b0190-5f8c-42e4-af6c-8a5dd401ef52-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.956053 4970 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/051b0190-5f8c-42e4-af6c-8a5dd401ef52-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.956062 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dt5kb\" (UniqueName: \"kubernetes.io/projected/2537d42d-31de-48b8-ae7a-afaba0d36376-kube-api-access-dt5kb\") on node \"crc\" DevicePath \"\"" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.956071 4970 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2537d42d-31de-48b8-ae7a-afaba0d36376-console-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.956080 4970 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2537d42d-31de-48b8-ae7a-afaba0d36376-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.956088 4970 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2537d42d-31de-48b8-ae7a-afaba0d36376-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.956098 4970 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/051b0190-5f8c-42e4-af6c-8a5dd401ef52-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.956106 4970 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/051b0190-5f8c-42e4-af6c-8a5dd401ef52-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.956114 4970 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/051b0190-5f8c-42e4-af6c-8a5dd401ef52-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.956122 4970 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2537d42d-31de-48b8-ae7a-afaba0d36376-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.956130 4970 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2537d42d-31de-48b8-ae7a-afaba0d36376-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.956137 4970 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/051b0190-5f8c-42e4-af6c-8a5dd401ef52-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:12:56 crc kubenswrapper[4970]: I1209 12:12:56.956145 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68hvb\" (UniqueName: \"kubernetes.io/projected/051b0190-5f8c-42e4-af6c-8a5dd401ef52-kube-api-access-68hvb\") on node \"crc\" DevicePath \"\"" Dec 09 12:12:57 crc kubenswrapper[4970]: I1209 12:12:57.145352 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6b565b4bcd-p6jfs" event={"ID":"a11a45bf-b936-463c-ad9c-71bcac4e4532","Type":"ContainerDied","Data":"89c2812841bfa2f298ec1bb4f8597eca0c57bbe162564a82cb2f019fbba944bf"} Dec 09 12:12:57 crc kubenswrapper[4970]: I1209 12:12:57.145703 4970 scope.go:117] "RemoveContainer" containerID="bfac030b4128f31330f1dec4c67fd068b2afd0129d8dfde88cedb6142c065f00" Dec 09 12:12:57 crc kubenswrapper[4970]: I1209 12:12:57.145389 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6b565b4bcd-p6jfs" Dec 09 12:12:57 crc kubenswrapper[4970]: I1209 12:12:57.152934 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-wqfw9_2537d42d-31de-48b8-ae7a-afaba0d36376/console/0.log" Dec 09 12:12:57 crc kubenswrapper[4970]: I1209 12:12:57.153021 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-wqfw9" event={"ID":"2537d42d-31de-48b8-ae7a-afaba0d36376","Type":"ContainerDied","Data":"521c288039981de5150a4092d6996acd0c698172623d8166ede6d81b01923248"} Dec 09 12:12:57 crc kubenswrapper[4970]: I1209 12:12:57.153310 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-wqfw9" Dec 09 12:12:57 crc kubenswrapper[4970]: I1209 12:12:57.157184 4970 generic.go:334] "Generic (PLEG): container finished" podID="051b0190-5f8c-42e4-af6c-8a5dd401ef52" containerID="f2c499dd3e6e848e67469773486dfac234fc9267d4aee51506c04e5f807e0272" exitCode=0 Dec 09 12:12:57 crc kubenswrapper[4970]: I1209 12:12:57.157261 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" Dec 09 12:12:57 crc kubenswrapper[4970]: I1209 12:12:57.157237 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" event={"ID":"051b0190-5f8c-42e4-af6c-8a5dd401ef52","Type":"ContainerDied","Data":"f2c499dd3e6e848e67469773486dfac234fc9267d4aee51506c04e5f807e0272"} Dec 09 12:12:57 crc kubenswrapper[4970]: I1209 12:12:57.157369 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-lv8jl" event={"ID":"051b0190-5f8c-42e4-af6c-8a5dd401ef52","Type":"ContainerDied","Data":"e182004abffec9e64507483cdcb7eab5f6d950ddf2979a705a4fd28a705b1980"} Dec 09 12:12:57 crc kubenswrapper[4970]: I1209 12:12:57.173879 4970 scope.go:117] "RemoveContainer" containerID="ff9de0745f4465774e25638339a118c3b4be7fce5491f46e415550104ae05898" Dec 09 12:12:57 crc kubenswrapper[4970]: I1209 12:12:57.185477 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6b565b4bcd-p6jfs"] Dec 09 12:12:57 crc kubenswrapper[4970]: I1209 12:12:57.191137 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6b565b4bcd-p6jfs"] Dec 09 12:12:57 crc kubenswrapper[4970]: I1209 12:12:57.204744 4970 scope.go:117] "RemoveContainer" containerID="f2c499dd3e6e848e67469773486dfac234fc9267d4aee51506c04e5f807e0272" Dec 09 12:12:57 crc kubenswrapper[4970]: I1209 12:12:57.222952 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lv8jl"] Dec 09 12:12:57 crc kubenswrapper[4970]: I1209 12:12:57.228899 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-lv8jl"] Dec 09 12:12:57 crc kubenswrapper[4970]: I1209 12:12:57.242821 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-wqfw9"] Dec 09 12:12:57 crc kubenswrapper[4970]: I1209 12:12:57.246985 4970 scope.go:117] "RemoveContainer" containerID="f2c499dd3e6e848e67469773486dfac234fc9267d4aee51506c04e5f807e0272" Dec 09 12:12:57 crc kubenswrapper[4970]: E1209 12:12:57.247481 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2c499dd3e6e848e67469773486dfac234fc9267d4aee51506c04e5f807e0272\": container with ID starting with f2c499dd3e6e848e67469773486dfac234fc9267d4aee51506c04e5f807e0272 not found: ID does not exist" containerID="f2c499dd3e6e848e67469773486dfac234fc9267d4aee51506c04e5f807e0272" Dec 09 12:12:57 crc kubenswrapper[4970]: I1209 12:12:57.247508 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2c499dd3e6e848e67469773486dfac234fc9267d4aee51506c04e5f807e0272"} err="failed to get container status \"f2c499dd3e6e848e67469773486dfac234fc9267d4aee51506c04e5f807e0272\": rpc error: code = NotFound desc = could not find container \"f2c499dd3e6e848e67469773486dfac234fc9267d4aee51506c04e5f807e0272\": container with ID starting with f2c499dd3e6e848e67469773486dfac234fc9267d4aee51506c04e5f807e0272 not found: ID does not exist" Dec 09 12:12:57 crc kubenswrapper[4970]: I1209 12:12:57.253899 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-wqfw9"] Dec 09 12:12:57 crc kubenswrapper[4970]: I1209 12:12:57.365328 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-68b64b9b9-sff74"] Dec 09 12:12:57 crc kubenswrapper[4970]: I1209 12:12:57.819472 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="051b0190-5f8c-42e4-af6c-8a5dd401ef52" path="/var/lib/kubelet/pods/051b0190-5f8c-42e4-af6c-8a5dd401ef52/volumes" Dec 09 12:12:57 crc kubenswrapper[4970]: I1209 12:12:57.820547 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2537d42d-31de-48b8-ae7a-afaba0d36376" path="/var/lib/kubelet/pods/2537d42d-31de-48b8-ae7a-afaba0d36376/volumes" Dec 09 12:12:57 crc kubenswrapper[4970]: I1209 12:12:57.821207 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a11a45bf-b936-463c-ad9c-71bcac4e4532" path="/var/lib/kubelet/pods/a11a45bf-b936-463c-ad9c-71bcac4e4532/volumes" Dec 09 12:12:58 crc kubenswrapper[4970]: I1209 12:12:58.175636 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68b64b9b9-sff74" event={"ID":"351f3c31-3804-4c08-932e-7284a28d6397","Type":"ContainerStarted","Data":"e96a2f8fc4a67e2e1f109138030cd9df16e13db7644c88291435363716436859"} Dec 09 12:12:59 crc kubenswrapper[4970]: I1209 12:12:59.185214 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68b64b9b9-sff74" event={"ID":"351f3c31-3804-4c08-932e-7284a28d6397","Type":"ContainerStarted","Data":"de34a280ad289ef4c154d1581975eada84d644a8c7e6c029ad11e8a20a681599"} Dec 09 12:12:59 crc kubenswrapper[4970]: I1209 12:12:59.185474 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-68b64b9b9-sff74" Dec 09 12:12:59 crc kubenswrapper[4970]: I1209 12:12:59.190810 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-68b64b9b9-sff74" Dec 09 12:12:59 crc kubenswrapper[4970]: I1209 12:12:59.226347 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-68b64b9b9-sff74" podStartSLOduration=6.226325179 podStartE2EDuration="6.226325179s" podCreationTimestamp="2025-12-09 12:12:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:12:59.206897475 +0000 UTC m=+391.767378526" watchObservedRunningTime="2025-12-09 12:12:59.226325179 +0000 UTC m=+391.786806230" Dec 09 12:13:07 crc kubenswrapper[4970]: I1209 12:13:07.729529 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Dec 09 12:13:07 crc kubenswrapper[4970]: I1209 12:13:07.758525 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Dec 09 12:13:08 crc kubenswrapper[4970]: I1209 12:13:08.270295 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Dec 09 12:13:16 crc kubenswrapper[4970]: I1209 12:13:16.010990 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:13:16 crc kubenswrapper[4970]: I1209 12:13:16.011760 4970 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:13:16 crc kubenswrapper[4970]: I1209 12:13:16.011828 4970 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" Dec 09 12:13:16 crc kubenswrapper[4970]: I1209 12:13:16.012778 4970 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ceaaeecb8c51f67adfcf9a5e80757db7376be95abcbbe814f7fcd0b58bf39bf2"} pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 12:13:16 crc kubenswrapper[4970]: I1209 12:13:16.012872 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" containerID="cri-o://ceaaeecb8c51f67adfcf9a5e80757db7376be95abcbbe814f7fcd0b58bf39bf2" gracePeriod=600 Dec 09 12:13:16 crc kubenswrapper[4970]: I1209 12:13:16.302458 4970 generic.go:334] "Generic (PLEG): container finished" podID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerID="ceaaeecb8c51f67adfcf9a5e80757db7376be95abcbbe814f7fcd0b58bf39bf2" exitCode=0 Dec 09 12:13:16 crc kubenswrapper[4970]: I1209 12:13:16.302533 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerDied","Data":"ceaaeecb8c51f67adfcf9a5e80757db7376be95abcbbe814f7fcd0b58bf39bf2"} Dec 09 12:13:16 crc kubenswrapper[4970]: I1209 12:13:16.302609 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerStarted","Data":"414e7804f598b3aefedf34c5b0fdde4ed9406e7d4b0a5b8d5d6b44c8067d40b9"} Dec 09 12:13:16 crc kubenswrapper[4970]: I1209 12:13:16.302636 4970 scope.go:117] "RemoveContainer" containerID="44e1cad556f69de186f04494224fdf6a124eac70716bd1838622a640d8045baf" Dec 09 12:13:19 crc kubenswrapper[4970]: I1209 12:13:19.412209 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-67d97ddd48-7hj75"] Dec 09 12:13:19 crc kubenswrapper[4970]: E1209 12:13:19.412969 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="051b0190-5f8c-42e4-af6c-8a5dd401ef52" containerName="registry" Dec 09 12:13:19 crc kubenswrapper[4970]: I1209 12:13:19.412984 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="051b0190-5f8c-42e4-af6c-8a5dd401ef52" containerName="registry" Dec 09 12:13:19 crc kubenswrapper[4970]: E1209 12:13:19.412995 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2537d42d-31de-48b8-ae7a-afaba0d36376" containerName="console" Dec 09 12:13:19 crc kubenswrapper[4970]: I1209 12:13:19.413000 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="2537d42d-31de-48b8-ae7a-afaba0d36376" containerName="console" Dec 09 12:13:19 crc kubenswrapper[4970]: I1209 12:13:19.413093 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="2537d42d-31de-48b8-ae7a-afaba0d36376" containerName="console" Dec 09 
12:13:19 crc kubenswrapper[4970]: I1209 12:13:19.413113 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="051b0190-5f8c-42e4-af6c-8a5dd401ef52" containerName="registry" Dec 09 12:13:19 crc kubenswrapper[4970]: I1209 12:13:19.413518 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-67d97ddd48-7hj75" Dec 09 12:13:19 crc kubenswrapper[4970]: I1209 12:13:19.426760 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-67d97ddd48-7hj75"] Dec 09 12:13:19 crc kubenswrapper[4970]: I1209 12:13:19.489395 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-oauth-serving-cert\") pod \"console-67d97ddd48-7hj75\" (UID: \"0a80cf01-048b-4637-b6a4-eb88d9e6ecae\") " pod="openshift-console/console-67d97ddd48-7hj75" Dec 09 12:13:19 crc kubenswrapper[4970]: I1209 12:13:19.489451 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rlgd\" (UniqueName: \"kubernetes.io/projected/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-kube-api-access-9rlgd\") pod \"console-67d97ddd48-7hj75\" (UID: \"0a80cf01-048b-4637-b6a4-eb88d9e6ecae\") " pod="openshift-console/console-67d97ddd48-7hj75" Dec 09 12:13:19 crc kubenswrapper[4970]: I1209 12:13:19.489494 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-trusted-ca-bundle\") pod \"console-67d97ddd48-7hj75\" (UID: \"0a80cf01-048b-4637-b6a4-eb88d9e6ecae\") " pod="openshift-console/console-67d97ddd48-7hj75" Dec 09 12:13:19 crc kubenswrapper[4970]: I1209 12:13:19.489514 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-service-ca\") pod \"console-67d97ddd48-7hj75\" (UID: \"0a80cf01-048b-4637-b6a4-eb88d9e6ecae\") " pod="openshift-console/console-67d97ddd48-7hj75" Dec 09 12:13:19 crc kubenswrapper[4970]: I1209 12:13:19.489606 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-console-serving-cert\") pod \"console-67d97ddd48-7hj75\" (UID: \"0a80cf01-048b-4637-b6a4-eb88d9e6ecae\") " pod="openshift-console/console-67d97ddd48-7hj75" Dec 09 12:13:19 crc kubenswrapper[4970]: I1209 12:13:19.489771 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-console-config\") pod \"console-67d97ddd48-7hj75\" (UID: \"0a80cf01-048b-4637-b6a4-eb88d9e6ecae\") " pod="openshift-console/console-67d97ddd48-7hj75" Dec 09 12:13:19 crc kubenswrapper[4970]: I1209 12:13:19.489838 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-console-oauth-config\") pod \"console-67d97ddd48-7hj75\" (UID: \"0a80cf01-048b-4637-b6a4-eb88d9e6ecae\") " pod="openshift-console/console-67d97ddd48-7hj75" Dec 09 12:13:19 crc kubenswrapper[4970]: I1209 12:13:19.591631 4970 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-trusted-ca-bundle\") pod \"console-67d97ddd48-7hj75\" (UID: \"0a80cf01-048b-4637-b6a4-eb88d9e6ecae\") " pod="openshift-console/console-67d97ddd48-7hj75" Dec 09 12:13:19 crc kubenswrapper[4970]: I1209 12:13:19.591691 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-service-ca\") pod \"console-67d97ddd48-7hj75\" (UID: \"0a80cf01-048b-4637-b6a4-eb88d9e6ecae\") " pod="openshift-console/console-67d97ddd48-7hj75" Dec 09 12:13:19 crc kubenswrapper[4970]: I1209 12:13:19.591742 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-console-serving-cert\") pod \"console-67d97ddd48-7hj75\" (UID: \"0a80cf01-048b-4637-b6a4-eb88d9e6ecae\") " pod="openshift-console/console-67d97ddd48-7hj75" Dec 09 12:13:19 crc kubenswrapper[4970]: I1209 12:13:19.591782 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-console-config\") pod \"console-67d97ddd48-7hj75\" (UID: \"0a80cf01-048b-4637-b6a4-eb88d9e6ecae\") " pod="openshift-console/console-67d97ddd48-7hj75" Dec 09 12:13:19 crc kubenswrapper[4970]: I1209 12:13:19.591809 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-console-oauth-config\") pod \"console-67d97ddd48-7hj75\" (UID: \"0a80cf01-048b-4637-b6a4-eb88d9e6ecae\") " pod="openshift-console/console-67d97ddd48-7hj75" Dec 09 12:13:19 crc kubenswrapper[4970]: I1209 12:13:19.591844 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-oauth-serving-cert\") pod \"console-67d97ddd48-7hj75\" (UID: \"0a80cf01-048b-4637-b6a4-eb88d9e6ecae\") " pod="openshift-console/console-67d97ddd48-7hj75" Dec 09 12:13:19 crc kubenswrapper[4970]: I1209 12:13:19.591883 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rlgd\" (UniqueName: \"kubernetes.io/projected/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-kube-api-access-9rlgd\") pod \"console-67d97ddd48-7hj75\" (UID: \"0a80cf01-048b-4637-b6a4-eb88d9e6ecae\") " pod="openshift-console/console-67d97ddd48-7hj75" Dec 09 12:13:19 crc kubenswrapper[4970]: I1209 12:13:19.592849 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-service-ca\") pod \"console-67d97ddd48-7hj75\" (UID: \"0a80cf01-048b-4637-b6a4-eb88d9e6ecae\") " pod="openshift-console/console-67d97ddd48-7hj75" Dec 09 12:13:19 crc kubenswrapper[4970]: I1209 12:13:19.592993 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-console-config\") pod \"console-67d97ddd48-7hj75\" (UID: \"0a80cf01-048b-4637-b6a4-eb88d9e6ecae\") " pod="openshift-console/console-67d97ddd48-7hj75" Dec 09 12:13:19 crc kubenswrapper[4970]: I1209 12:13:19.593017 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-oauth-serving-cert\") pod \"console-67d97ddd48-7hj75\" (UID: \"0a80cf01-048b-4637-b6a4-eb88d9e6ecae\") " pod="openshift-console/console-67d97ddd48-7hj75" Dec 09 12:13:19 crc kubenswrapper[4970]: I1209 12:13:19.593030 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-trusted-ca-bundle\") pod \"console-67d97ddd48-7hj75\" (UID: \"0a80cf01-048b-4637-b6a4-eb88d9e6ecae\") " pod="openshift-console/console-67d97ddd48-7hj75" Dec 09 12:13:19 crc kubenswrapper[4970]: I1209 12:13:19.599905 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-console-oauth-config\") pod \"console-67d97ddd48-7hj75\" (UID: \"0a80cf01-048b-4637-b6a4-eb88d9e6ecae\") " pod="openshift-console/console-67d97ddd48-7hj75" Dec 09 12:13:19 crc kubenswrapper[4970]: I1209 12:13:19.606729 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-console-serving-cert\") pod \"console-67d97ddd48-7hj75\" (UID: \"0a80cf01-048b-4637-b6a4-eb88d9e6ecae\") " pod="openshift-console/console-67d97ddd48-7hj75" Dec 09 12:13:19 crc kubenswrapper[4970]: I1209 12:13:19.611221 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rlgd\" (UniqueName: \"kubernetes.io/projected/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-kube-api-access-9rlgd\") pod \"console-67d97ddd48-7hj75\" (UID: \"0a80cf01-048b-4637-b6a4-eb88d9e6ecae\") " pod="openshift-console/console-67d97ddd48-7hj75" Dec 09 12:13:19 crc kubenswrapper[4970]: I1209 12:13:19.728299 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-67d97ddd48-7hj75" Dec 09 12:13:20 crc kubenswrapper[4970]: I1209 12:13:20.130305 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-67d97ddd48-7hj75"] Dec 09 12:13:20 crc kubenswrapper[4970]: I1209 12:13:20.330999 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67d97ddd48-7hj75" event={"ID":"0a80cf01-048b-4637-b6a4-eb88d9e6ecae","Type":"ContainerStarted","Data":"5d9f29e2bbc30ea25f701ab62a7ec1761148162c9022cfdb15d8755174618774"} Dec 09 12:13:20 crc kubenswrapper[4970]: I1209 12:13:20.331428 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67d97ddd48-7hj75" event={"ID":"0a80cf01-048b-4637-b6a4-eb88d9e6ecae","Type":"ContainerStarted","Data":"6fcf01f2cf17fa92af031c5788fc173e96704ca38cd91d554b456a23f4ae0c85"} Dec 09 12:13:20 crc kubenswrapper[4970]: I1209 12:13:20.357096 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-67d97ddd48-7hj75" podStartSLOduration=1.357077668 podStartE2EDuration="1.357077668s" podCreationTimestamp="2025-12-09 12:13:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:13:20.352531643 +0000 UTC m=+412.913012714" watchObservedRunningTime="2025-12-09 12:13:20.357077668 +0000 UTC m=+412.917558719" Dec 09 12:13:25 crc kubenswrapper[4970]: I1209 12:13:25.424808 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-7fbd999b89-zl5q9"] Dec 09 12:13:25 crc kubenswrapper[4970]: I1209 12:13:25.426796 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-7fbd999b89-zl5q9" Dec 09 12:13:25 crc kubenswrapper[4970]: I1209 12:13:25.428847 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-64c6668cd-k7dhn"] Dec 09 12:13:25 crc kubenswrapper[4970]: I1209 12:13:25.429040 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-64c6668cd-k7dhn" podUID="088add4e-94cf-435b-8190-75d69aa580ea" containerName="metrics-server" containerID="cri-o://35588acfeb3a7be892b68e9399726d5910e3bd58af734578c3ed888127fd6f2d" gracePeriod=170 Dec 09 12:13:25 crc kubenswrapper[4970]: I1209 12:13:25.444305 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-7fbd999b89-zl5q9"] Dec 09 12:13:25 crc kubenswrapper[4970]: I1209 12:13:25.498177 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlw2b\" (UniqueName: \"kubernetes.io/projected/75a7d22c-0a6d-471f-a8b3-d403e4cd4009-kube-api-access-wlw2b\") pod \"metrics-server-7fbd999b89-zl5q9\" (UID: \"75a7d22c-0a6d-471f-a8b3-d403e4cd4009\") " pod="openshift-monitoring/metrics-server-7fbd999b89-zl5q9" Dec 09 12:13:25 crc kubenswrapper[4970]: I1209 12:13:25.498230 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/75a7d22c-0a6d-471f-a8b3-d403e4cd4009-audit-log\") pod \"metrics-server-7fbd999b89-zl5q9\" (UID: \"75a7d22c-0a6d-471f-a8b3-d403e4cd4009\") " pod="openshift-monitoring/metrics-server-7fbd999b89-zl5q9" Dec 09 12:13:25 crc kubenswrapper[4970]: I1209 12:13:25.498272 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a7d22c-0a6d-471f-a8b3-d403e4cd4009-client-ca-bundle\") pod \"metrics-server-7fbd999b89-zl5q9\" (UID: \"75a7d22c-0a6d-471f-a8b3-d403e4cd4009\") " pod="openshift-monitoring/metrics-server-7fbd999b89-zl5q9" Dec 09 12:13:25 crc kubenswrapper[4970]: I1209 12:13:25.498294 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/75a7d22c-0a6d-471f-a8b3-d403e4cd4009-secret-metrics-server-tls\") pod \"metrics-server-7fbd999b89-zl5q9\" (UID: \"75a7d22c-0a6d-471f-a8b3-d403e4cd4009\") " pod="openshift-monitoring/metrics-server-7fbd999b89-zl5q9" Dec 09 12:13:25 crc kubenswrapper[4970]: I1209 12:13:25.498352 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/75a7d22c-0a6d-471f-a8b3-d403e4cd4009-secret-metrics-client-certs\") pod \"metrics-server-7fbd999b89-zl5q9\" (UID: \"75a7d22c-0a6d-471f-a8b3-d403e4cd4009\") " pod="openshift-monitoring/metrics-server-7fbd999b89-zl5q9" Dec 09 12:13:25 crc kubenswrapper[4970]: I1209 12:13:25.498405 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75a7d22c-0a6d-471f-a8b3-d403e4cd4009-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7fbd999b89-zl5q9\" (UID: \"75a7d22c-0a6d-471f-a8b3-d403e4cd4009\") " pod="openshift-monitoring/metrics-server-7fbd999b89-zl5q9" Dec 09 12:13:25 crc kubenswrapper[4970]: I1209 12:13:25.498445 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/75a7d22c-0a6d-471f-a8b3-d403e4cd4009-metrics-server-audit-profiles\") pod \"metrics-server-7fbd999b89-zl5q9\" (UID: \"75a7d22c-0a6d-471f-a8b3-d403e4cd4009\") " pod="openshift-monitoring/metrics-server-7fbd999b89-zl5q9" Dec 09 12:13:25 crc kubenswrapper[4970]: I1209 12:13:25.599954 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlw2b\" (UniqueName: \"kubernetes.io/projected/75a7d22c-0a6d-471f-a8b3-d403e4cd4009-kube-api-access-wlw2b\") pod \"metrics-server-7fbd999b89-zl5q9\" (UID: \"75a7d22c-0a6d-471f-a8b3-d403e4cd4009\") " pod="openshift-monitoring/metrics-server-7fbd999b89-zl5q9" Dec 09 12:13:25 crc kubenswrapper[4970]: I1209 12:13:25.600030 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/75a7d22c-0a6d-471f-a8b3-d403e4cd4009-audit-log\") pod \"metrics-server-7fbd999b89-zl5q9\" (UID: \"75a7d22c-0a6d-471f-a8b3-d403e4cd4009\") " pod="openshift-monitoring/metrics-server-7fbd999b89-zl5q9" Dec 09 12:13:25 crc kubenswrapper[4970]: I1209 12:13:25.600059 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a7d22c-0a6d-471f-a8b3-d403e4cd4009-client-ca-bundle\") pod \"metrics-server-7fbd999b89-zl5q9\" (UID: \"75a7d22c-0a6d-471f-a8b3-d403e4cd4009\") " pod="openshift-monitoring/metrics-server-7fbd999b89-zl5q9" Dec 09 12:13:25 crc kubenswrapper[4970]: I1209 12:13:25.600076 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: 
\"kubernetes.io/secret/75a7d22c-0a6d-471f-a8b3-d403e4cd4009-secret-metrics-server-tls\") pod \"metrics-server-7fbd999b89-zl5q9\" (UID: \"75a7d22c-0a6d-471f-a8b3-d403e4cd4009\") " pod="openshift-monitoring/metrics-server-7fbd999b89-zl5q9" Dec 09 12:13:25 crc kubenswrapper[4970]: I1209 12:13:25.600110 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/75a7d22c-0a6d-471f-a8b3-d403e4cd4009-secret-metrics-client-certs\") pod \"metrics-server-7fbd999b89-zl5q9\" (UID: \"75a7d22c-0a6d-471f-a8b3-d403e4cd4009\") " pod="openshift-monitoring/metrics-server-7fbd999b89-zl5q9" Dec 09 12:13:25 crc kubenswrapper[4970]: I1209 12:13:25.600145 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75a7d22c-0a6d-471f-a8b3-d403e4cd4009-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7fbd999b89-zl5q9\" (UID: \"75a7d22c-0a6d-471f-a8b3-d403e4cd4009\") " pod="openshift-monitoring/metrics-server-7fbd999b89-zl5q9" Dec 09 12:13:25 crc kubenswrapper[4970]: I1209 12:13:25.600172 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/75a7d22c-0a6d-471f-a8b3-d403e4cd4009-metrics-server-audit-profiles\") pod \"metrics-server-7fbd999b89-zl5q9\" (UID: \"75a7d22c-0a6d-471f-a8b3-d403e4cd4009\") " pod="openshift-monitoring/metrics-server-7fbd999b89-zl5q9" Dec 09 12:13:25 crc kubenswrapper[4970]: I1209 12:13:25.600946 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/75a7d22c-0a6d-471f-a8b3-d403e4cd4009-audit-log\") pod \"metrics-server-7fbd999b89-zl5q9\" (UID: \"75a7d22c-0a6d-471f-a8b3-d403e4cd4009\") " pod="openshift-monitoring/metrics-server-7fbd999b89-zl5q9" Dec 09 12:13:25 crc kubenswrapper[4970]: I1209 12:13:25.601781 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/75a7d22c-0a6d-471f-a8b3-d403e4cd4009-metrics-server-audit-profiles\") pod \"metrics-server-7fbd999b89-zl5q9\" (UID: \"75a7d22c-0a6d-471f-a8b3-d403e4cd4009\") " pod="openshift-monitoring/metrics-server-7fbd999b89-zl5q9" Dec 09 12:13:25 crc kubenswrapper[4970]: I1209 12:13:25.601930 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75a7d22c-0a6d-471f-a8b3-d403e4cd4009-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-7fbd999b89-zl5q9\" (UID: \"75a7d22c-0a6d-471f-a8b3-d403e4cd4009\") " pod="openshift-monitoring/metrics-server-7fbd999b89-zl5q9" Dec 09 12:13:25 crc kubenswrapper[4970]: I1209 12:13:25.606052 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/75a7d22c-0a6d-471f-a8b3-d403e4cd4009-secret-metrics-server-tls\") pod \"metrics-server-7fbd999b89-zl5q9\" (UID: \"75a7d22c-0a6d-471f-a8b3-d403e4cd4009\") " pod="openshift-monitoring/metrics-server-7fbd999b89-zl5q9" Dec 09 12:13:25 crc kubenswrapper[4970]: I1209 12:13:25.606492 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75a7d22c-0a6d-471f-a8b3-d403e4cd4009-client-ca-bundle\") pod \"metrics-server-7fbd999b89-zl5q9\" (UID: 
\"75a7d22c-0a6d-471f-a8b3-d403e4cd4009\") " pod="openshift-monitoring/metrics-server-7fbd999b89-zl5q9" Dec 09 12:13:25 crc kubenswrapper[4970]: I1209 12:13:25.607712 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/75a7d22c-0a6d-471f-a8b3-d403e4cd4009-secret-metrics-client-certs\") pod \"metrics-server-7fbd999b89-zl5q9\" (UID: \"75a7d22c-0a6d-471f-a8b3-d403e4cd4009\") " pod="openshift-monitoring/metrics-server-7fbd999b89-zl5q9" Dec 09 12:13:25 crc kubenswrapper[4970]: I1209 12:13:25.619134 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlw2b\" (UniqueName: \"kubernetes.io/projected/75a7d22c-0a6d-471f-a8b3-d403e4cd4009-kube-api-access-wlw2b\") pod \"metrics-server-7fbd999b89-zl5q9\" (UID: \"75a7d22c-0a6d-471f-a8b3-d403e4cd4009\") " pod="openshift-monitoring/metrics-server-7fbd999b89-zl5q9" Dec 09 12:13:25 crc kubenswrapper[4970]: I1209 12:13:25.748916 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-7fbd999b89-zl5q9" Dec 09 12:13:26 crc kubenswrapper[4970]: I1209 12:13:26.159792 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-7fbd999b89-zl5q9"] Dec 09 12:13:26 crc kubenswrapper[4970]: W1209 12:13:26.169127 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75a7d22c_0a6d_471f_a8b3_d403e4cd4009.slice/crio-12de2b8c8ae1e67a43fa1a45c7eea93e9def6529a80e7103d5a71353599d3e63 WatchSource:0}: Error finding container 12de2b8c8ae1e67a43fa1a45c7eea93e9def6529a80e7103d5a71353599d3e63: Status 404 returned error can't find the container with id 12de2b8c8ae1e67a43fa1a45c7eea93e9def6529a80e7103d5a71353599d3e63 Dec 09 12:13:26 crc kubenswrapper[4970]: I1209 12:13:26.356142 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-bd89f6cf8-jbpzj"] Dec 09 12:13:26 crc kubenswrapper[4970]: I1209 12:13:26.356947 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-bd89f6cf8-jbpzj" Dec 09 12:13:26 crc kubenswrapper[4970]: I1209 12:13:26.362806 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/monitoring-plugin-56499464c7-2qkcl"] Dec 09 12:13:26 crc kubenswrapper[4970]: I1209 12:13:26.363061 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/monitoring-plugin-56499464c7-2qkcl" podUID="f67d07f0-b9f1-4704-be66-9c152defe602" containerName="monitoring-plugin" containerID="cri-o://e1257d7688c5d534d44c38c0d74084e32a8629acc4f06b6395ce2b47e5ee4ff4" gracePeriod=30 Dec 09 12:13:26 crc kubenswrapper[4970]: I1209 12:13:26.371157 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-bd89f6cf8-jbpzj"] Dec 09 12:13:26 crc kubenswrapper[4970]: I1209 12:13:26.377478 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7fbd999b89-zl5q9" event={"ID":"75a7d22c-0a6d-471f-a8b3-d403e4cd4009","Type":"ContainerStarted","Data":"43d6d1da9f4ee0156cb414e992ae624e457e37b6d75d2ea6a728c1409231af6c"} Dec 09 12:13:26 crc kubenswrapper[4970]: I1209 12:13:26.377520 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-7fbd999b89-zl5q9" event={"ID":"75a7d22c-0a6d-471f-a8b3-d403e4cd4009","Type":"ContainerStarted","Data":"12de2b8c8ae1e67a43fa1a45c7eea93e9def6529a80e7103d5a71353599d3e63"} Dec 09 12:13:26 crc kubenswrapper[4970]: I1209 12:13:26.411882 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/429777c5-55fc-4f77-9f93-fd9fc699c67b-monitoring-plugin-cert\") pod \"monitoring-plugin-bd89f6cf8-jbpzj\" (UID: \"429777c5-55fc-4f77-9f93-fd9fc699c67b\") " pod="openshift-monitoring/monitoring-plugin-bd89f6cf8-jbpzj" Dec 09 12:13:26 crc kubenswrapper[4970]: I1209 12:13:26.421834 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-7fbd999b89-zl5q9" podStartSLOduration=1.42181708 podStartE2EDuration="1.42181708s" podCreationTimestamp="2025-12-09 12:13:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:13:26.415498936 +0000 UTC m=+418.975979987" watchObservedRunningTime="2025-12-09 12:13:26.42181708 +0000 UTC m=+418.982298131" Dec 09 12:13:26 crc kubenswrapper[4970]: I1209 12:13:26.513531 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/429777c5-55fc-4f77-9f93-fd9fc699c67b-monitoring-plugin-cert\") pod \"monitoring-plugin-bd89f6cf8-jbpzj\" (UID: \"429777c5-55fc-4f77-9f93-fd9fc699c67b\") " pod="openshift-monitoring/monitoring-plugin-bd89f6cf8-jbpzj" Dec 09 12:13:26 crc kubenswrapper[4970]: I1209 12:13:26.519304 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/429777c5-55fc-4f77-9f93-fd9fc699c67b-monitoring-plugin-cert\") pod \"monitoring-plugin-bd89f6cf8-jbpzj\" (UID: \"429777c5-55fc-4f77-9f93-fd9fc699c67b\") " pod="openshift-monitoring/monitoring-plugin-bd89f6cf8-jbpzj" Dec 09 12:13:26 crc kubenswrapper[4970]: I1209 12:13:26.672694 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-bd89f6cf8-jbpzj" Dec 09 12:13:26 crc kubenswrapper[4970]: I1209 12:13:26.772572 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_monitoring-plugin-56499464c7-2qkcl_f67d07f0-b9f1-4704-be66-9c152defe602/monitoring-plugin/0.log" Dec 09 12:13:26 crc kubenswrapper[4970]: I1209 12:13:26.772646 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-56499464c7-2qkcl" Dec 09 12:13:26 crc kubenswrapper[4970]: I1209 12:13:26.824677 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/f67d07f0-b9f1-4704-be66-9c152defe602-monitoring-plugin-cert\") pod \"f67d07f0-b9f1-4704-be66-9c152defe602\" (UID: \"f67d07f0-b9f1-4704-be66-9c152defe602\") " Dec 09 12:13:26 crc kubenswrapper[4970]: I1209 12:13:26.830118 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f67d07f0-b9f1-4704-be66-9c152defe602-monitoring-plugin-cert" (OuterVolumeSpecName: "monitoring-plugin-cert") pod "f67d07f0-b9f1-4704-be66-9c152defe602" (UID: "f67d07f0-b9f1-4704-be66-9c152defe602"). InnerVolumeSpecName "monitoring-plugin-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:13:26 crc kubenswrapper[4970]: I1209 12:13:26.925914 4970 reconciler_common.go:293] "Volume detached for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/f67d07f0-b9f1-4704-be66-9c152defe602-monitoring-plugin-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:13:27 crc kubenswrapper[4970]: I1209 12:13:27.088880 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-bd89f6cf8-jbpzj"] Dec 09 12:13:27 crc kubenswrapper[4970]: W1209 12:13:27.094304 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod429777c5_55fc_4f77_9f93_fd9fc699c67b.slice/crio-8900b936d53583efca9797ec0c28b4f104230af11c374a496d1241da0ee57dad WatchSource:0}: Error finding container 8900b936d53583efca9797ec0c28b4f104230af11c374a496d1241da0ee57dad: Status 404 returned error can't find the container with id 8900b936d53583efca9797ec0c28b4f104230af11c374a496d1241da0ee57dad Dec 09 12:13:27 crc kubenswrapper[4970]: I1209 12:13:27.399010 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_monitoring-plugin-56499464c7-2qkcl_f67d07f0-b9f1-4704-be66-9c152defe602/monitoring-plugin/0.log" Dec 09 12:13:27 crc kubenswrapper[4970]: I1209 12:13:27.399349 4970 generic.go:334] "Generic (PLEG): container finished" podID="f67d07f0-b9f1-4704-be66-9c152defe602" containerID="e1257d7688c5d534d44c38c0d74084e32a8629acc4f06b6395ce2b47e5ee4ff4" exitCode=2 Dec 09 12:13:27 crc kubenswrapper[4970]: I1209 12:13:27.399495 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-56499464c7-2qkcl" event={"ID":"f67d07f0-b9f1-4704-be66-9c152defe602","Type":"ContainerDied","Data":"e1257d7688c5d534d44c38c0d74084e32a8629acc4f06b6395ce2b47e5ee4ff4"} Dec 09 12:13:27 crc kubenswrapper[4970]: I1209 12:13:27.399526 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-56499464c7-2qkcl" event={"ID":"f67d07f0-b9f1-4704-be66-9c152defe602","Type":"ContainerDied","Data":"87d76ae2b1c110fea066801e4a83ab860c32962dd9842666f4e2e940579e0ac5"} Dec 09 12:13:27 crc 
kubenswrapper[4970]: I1209 12:13:27.399552 4970 scope.go:117] "RemoveContainer" containerID="e1257d7688c5d534d44c38c0d74084e32a8629acc4f06b6395ce2b47e5ee4ff4" Dec 09 12:13:27 crc kubenswrapper[4970]: I1209 12:13:27.399676 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-56499464c7-2qkcl" Dec 09 12:13:27 crc kubenswrapper[4970]: I1209 12:13:27.404383 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-bd89f6cf8-jbpzj" event={"ID":"429777c5-55fc-4f77-9f93-fd9fc699c67b","Type":"ContainerStarted","Data":"492f3a75fd1fb9cf384d762264a0e2266bc0f0fe674e33b18bf793306394cc24"} Dec 09 12:13:27 crc kubenswrapper[4970]: I1209 12:13:27.404438 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-bd89f6cf8-jbpzj" event={"ID":"429777c5-55fc-4f77-9f93-fd9fc699c67b","Type":"ContainerStarted","Data":"8900b936d53583efca9797ec0c28b4f104230af11c374a496d1241da0ee57dad"} Dec 09 12:13:27 crc kubenswrapper[4970]: I1209 12:13:27.404688 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-bd89f6cf8-jbpzj" Dec 09 12:13:27 crc kubenswrapper[4970]: I1209 12:13:27.411135 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-bd89f6cf8-jbpzj" Dec 09 12:13:27 crc kubenswrapper[4970]: I1209 12:13:27.426157 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-bd89f6cf8-jbpzj" podStartSLOduration=1.426132856 podStartE2EDuration="1.426132856s" podCreationTimestamp="2025-12-09 12:13:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:13:27.418654219 +0000 UTC m=+419.979135280" watchObservedRunningTime="2025-12-09 12:13:27.426132856 +0000 UTC m=+419.986613917" Dec 09 12:13:27 crc kubenswrapper[4970]: I1209 12:13:27.432132 4970 scope.go:117] "RemoveContainer" containerID="e1257d7688c5d534d44c38c0d74084e32a8629acc4f06b6395ce2b47e5ee4ff4" Dec 09 12:13:27 crc kubenswrapper[4970]: E1209 12:13:27.432725 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1257d7688c5d534d44c38c0d74084e32a8629acc4f06b6395ce2b47e5ee4ff4\": container with ID starting with e1257d7688c5d534d44c38c0d74084e32a8629acc4f06b6395ce2b47e5ee4ff4 not found: ID does not exist" containerID="e1257d7688c5d534d44c38c0d74084e32a8629acc4f06b6395ce2b47e5ee4ff4" Dec 09 12:13:27 crc kubenswrapper[4970]: I1209 12:13:27.432774 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1257d7688c5d534d44c38c0d74084e32a8629acc4f06b6395ce2b47e5ee4ff4"} err="failed to get container status \"e1257d7688c5d534d44c38c0d74084e32a8629acc4f06b6395ce2b47e5ee4ff4\": rpc error: code = NotFound desc = could not find container \"e1257d7688c5d534d44c38c0d74084e32a8629acc4f06b6395ce2b47e5ee4ff4\": container with ID starting with e1257d7688c5d534d44c38c0d74084e32a8629acc4f06b6395ce2b47e5ee4ff4 not found: ID does not exist" Dec 09 12:13:27 crc kubenswrapper[4970]: I1209 12:13:27.458123 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/monitoring-plugin-56499464c7-2qkcl"] Dec 09 12:13:27 crc kubenswrapper[4970]: I1209 12:13:27.464155 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-monitoring/monitoring-plugin-56499464c7-2qkcl"] Dec 09 12:13:27 crc kubenswrapper[4970]: I1209 12:13:27.822567 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f67d07f0-b9f1-4704-be66-9c152defe602" path="/var/lib/kubelet/pods/f67d07f0-b9f1-4704-be66-9c152defe602/volumes" Dec 09 12:13:29 crc kubenswrapper[4970]: I1209 12:13:29.729355 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-67d97ddd48-7hj75" Dec 09 12:13:29 crc kubenswrapper[4970]: I1209 12:13:29.729712 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-67d97ddd48-7hj75" Dec 09 12:13:29 crc kubenswrapper[4970]: I1209 12:13:29.742190 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-67d97ddd48-7hj75" Dec 09 12:13:30 crc kubenswrapper[4970]: I1209 12:13:30.429887 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-67d97ddd48-7hj75" Dec 09 12:13:30 crc kubenswrapper[4970]: I1209 12:13:30.491067 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5d95d9c75f-6pp2x"] Dec 09 12:13:45 crc kubenswrapper[4970]: I1209 12:13:45.749629 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-7fbd999b89-zl5q9" Dec 09 12:13:45 crc kubenswrapper[4970]: I1209 12:13:45.750531 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-7fbd999b89-zl5q9" Dec 09 12:13:55 crc kubenswrapper[4970]: I1209 12:13:55.547109 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5d95d9c75f-6pp2x" podUID="32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7" containerName="console" containerID="cri-o://fbc04a67ce195d64f107cb91788d7bd25cc3d1fd188fba41f5aebba435484032" gracePeriod=15 Dec 09 12:13:55 crc kubenswrapper[4970]: I1209 12:13:55.954764 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d95d9c75f-6pp2x_32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7/console/0.log" Dec 09 12:13:55 crc kubenswrapper[4970]: I1209 12:13:55.955121 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5d95d9c75f-6pp2x" Dec 09 12:13:56 crc kubenswrapper[4970]: I1209 12:13:56.065573 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-oauth-serving-cert\") pod \"32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7\" (UID: \"32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7\") " Dec 09 12:13:56 crc kubenswrapper[4970]: I1209 12:13:56.065644 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-service-ca\") pod \"32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7\" (UID: \"32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7\") " Dec 09 12:13:56 crc kubenswrapper[4970]: I1209 12:13:56.065717 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-console-serving-cert\") pod \"32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7\" (UID: \"32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7\") " Dec 09 12:13:56 crc kubenswrapper[4970]: I1209 12:13:56.065802 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wncd5\" (UniqueName: \"kubernetes.io/projected/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-kube-api-access-wncd5\") pod \"32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7\" (UID: \"32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7\") " Dec 09 12:13:56 crc kubenswrapper[4970]: I1209 12:13:56.065836 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-console-oauth-config\") pod \"32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7\" (UID: \"32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7\") " Dec 09 12:13:56 crc kubenswrapper[4970]: I1209 12:13:56.065921 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-trusted-ca-bundle\") pod \"32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7\" (UID: \"32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7\") " Dec 09 12:13:56 crc kubenswrapper[4970]: I1209 12:13:56.065949 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-console-config\") pod \"32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7\" (UID: \"32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7\") " Dec 09 12:13:56 crc kubenswrapper[4970]: I1209 12:13:56.066565 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7" (UID: "32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:13:56 crc kubenswrapper[4970]: I1209 12:13:56.066606 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-service-ca" (OuterVolumeSpecName: "service-ca") pod "32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7" (UID: "32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:13:56 crc kubenswrapper[4970]: I1209 12:13:56.067065 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7" (UID: "32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:13:56 crc kubenswrapper[4970]: I1209 12:13:56.067612 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-console-config" (OuterVolumeSpecName: "console-config") pod "32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7" (UID: "32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:13:56 crc kubenswrapper[4970]: I1209 12:13:56.073854 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7" (UID: "32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:13:56 crc kubenswrapper[4970]: I1209 12:13:56.073810 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7" (UID: "32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:13:56 crc kubenswrapper[4970]: I1209 12:13:56.085453 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-kube-api-access-wncd5" (OuterVolumeSpecName: "kube-api-access-wncd5") pod "32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7" (UID: "32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7"). InnerVolumeSpecName "kube-api-access-wncd5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:13:56 crc kubenswrapper[4970]: I1209 12:13:56.167612 4970 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:13:56 crc kubenswrapper[4970]: I1209 12:13:56.167644 4970 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:13:56 crc kubenswrapper[4970]: I1209 12:13:56.167656 4970 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:13:56 crc kubenswrapper[4970]: I1209 12:13:56.167666 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wncd5\" (UniqueName: \"kubernetes.io/projected/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-kube-api-access-wncd5\") on node \"crc\" DevicePath \"\"" Dec 09 12:13:56 crc kubenswrapper[4970]: I1209 12:13:56.167680 4970 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:13:56 crc kubenswrapper[4970]: I1209 12:13:56.167689 4970 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:13:56 crc kubenswrapper[4970]: I1209 12:13:56.167699 4970 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7-console-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:13:56 crc kubenswrapper[4970]: I1209 12:13:56.613186 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5d95d9c75f-6pp2x_32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7/console/0.log" Dec 09 12:13:56 crc kubenswrapper[4970]: I1209 12:13:56.613278 4970 generic.go:334] "Generic (PLEG): container finished" podID="32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7" containerID="fbc04a67ce195d64f107cb91788d7bd25cc3d1fd188fba41f5aebba435484032" exitCode=2 Dec 09 12:13:56 crc kubenswrapper[4970]: I1209 12:13:56.613311 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d95d9c75f-6pp2x" event={"ID":"32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7","Type":"ContainerDied","Data":"fbc04a67ce195d64f107cb91788d7bd25cc3d1fd188fba41f5aebba435484032"} Dec 09 12:13:56 crc kubenswrapper[4970]: I1209 12:13:56.613338 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d95d9c75f-6pp2x" event={"ID":"32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7","Type":"ContainerDied","Data":"d9b87ca13e4bd2b32d7a5f53faddf34d832ce7d0e7c46c4a5f32a9ce7079eb3b"} Dec 09 12:13:56 crc kubenswrapper[4970]: I1209 12:13:56.613354 4970 scope.go:117] "RemoveContainer" containerID="fbc04a67ce195d64f107cb91788d7bd25cc3d1fd188fba41f5aebba435484032" Dec 09 12:13:56 crc kubenswrapper[4970]: I1209 12:13:56.613423 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5d95d9c75f-6pp2x" Dec 09 12:13:56 crc kubenswrapper[4970]: I1209 12:13:56.642239 4970 scope.go:117] "RemoveContainer" containerID="fbc04a67ce195d64f107cb91788d7bd25cc3d1fd188fba41f5aebba435484032" Dec 09 12:13:56 crc kubenswrapper[4970]: E1209 12:13:56.643146 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbc04a67ce195d64f107cb91788d7bd25cc3d1fd188fba41f5aebba435484032\": container with ID starting with fbc04a67ce195d64f107cb91788d7bd25cc3d1fd188fba41f5aebba435484032 not found: ID does not exist" containerID="fbc04a67ce195d64f107cb91788d7bd25cc3d1fd188fba41f5aebba435484032" Dec 09 12:13:56 crc kubenswrapper[4970]: I1209 12:13:56.643176 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbc04a67ce195d64f107cb91788d7bd25cc3d1fd188fba41f5aebba435484032"} err="failed to get container status \"fbc04a67ce195d64f107cb91788d7bd25cc3d1fd188fba41f5aebba435484032\": rpc error: code = NotFound desc = could not find container \"fbc04a67ce195d64f107cb91788d7bd25cc3d1fd188fba41f5aebba435484032\": container with ID starting with fbc04a67ce195d64f107cb91788d7bd25cc3d1fd188fba41f5aebba435484032 not found: ID does not exist" Dec 09 12:13:56 crc kubenswrapper[4970]: I1209 12:13:56.643968 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5d95d9c75f-6pp2x"] Dec 09 12:13:56 crc kubenswrapper[4970]: I1209 12:13:56.647580 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5d95d9c75f-6pp2x"] Dec 09 12:13:57 crc kubenswrapper[4970]: I1209 12:13:57.820576 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7" path="/var/lib/kubelet/pods/32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7/volumes" Dec 09 12:14:05 crc kubenswrapper[4970]: I1209 12:14:05.756175 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-7fbd999b89-zl5q9" Dec 09 12:14:05 crc kubenswrapper[4970]: I1209 12:14:05.762124 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-7fbd999b89-zl5q9" Dec 09 12:15:00 crc kubenswrapper[4970]: I1209 12:15:00.175330 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421375-nmd58"] Dec 09 12:15:00 crc kubenswrapper[4970]: E1209 12:15:00.176213 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f67d07f0-b9f1-4704-be66-9c152defe602" containerName="monitoring-plugin" Dec 09 12:15:00 crc kubenswrapper[4970]: I1209 12:15:00.176229 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="f67d07f0-b9f1-4704-be66-9c152defe602" containerName="monitoring-plugin" Dec 09 12:15:00 crc kubenswrapper[4970]: E1209 12:15:00.176288 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7" containerName="console" Dec 09 12:15:00 crc kubenswrapper[4970]: I1209 12:15:00.176298 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7" containerName="console" Dec 09 12:15:00 crc kubenswrapper[4970]: I1209 12:15:00.176848 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="f67d07f0-b9f1-4704-be66-9c152defe602" containerName="monitoring-plugin" Dec 09 12:15:00 crc kubenswrapper[4970]: I1209 12:15:00.176903 4970 
memory_manager.go:354] "RemoveStaleState removing state" podUID="32d9e699-1bc0-4f10-bf8f-ddcfc98d12d7" containerName="console" Dec 09 12:15:00 crc kubenswrapper[4970]: I1209 12:15:00.178493 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-nmd58" Dec 09 12:15:00 crc kubenswrapper[4970]: I1209 12:15:00.180797 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 09 12:15:00 crc kubenswrapper[4970]: I1209 12:15:00.180942 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 09 12:15:00 crc kubenswrapper[4970]: I1209 12:15:00.181848 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421375-nmd58"] Dec 09 12:15:00 crc kubenswrapper[4970]: I1209 12:15:00.286689 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/29ab3e44-e399-4021-8cef-f80b2a17b3d8-config-volume\") pod \"collect-profiles-29421375-nmd58\" (UID: \"29ab3e44-e399-4021-8cef-f80b2a17b3d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-nmd58" Dec 09 12:15:00 crc kubenswrapper[4970]: I1209 12:15:00.286745 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2s54\" (UniqueName: \"kubernetes.io/projected/29ab3e44-e399-4021-8cef-f80b2a17b3d8-kube-api-access-m2s54\") pod \"collect-profiles-29421375-nmd58\" (UID: \"29ab3e44-e399-4021-8cef-f80b2a17b3d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-nmd58" Dec 09 12:15:00 crc kubenswrapper[4970]: I1209 12:15:00.286817 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/29ab3e44-e399-4021-8cef-f80b2a17b3d8-secret-volume\") pod \"collect-profiles-29421375-nmd58\" (UID: \"29ab3e44-e399-4021-8cef-f80b2a17b3d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-nmd58" Dec 09 12:15:00 crc kubenswrapper[4970]: I1209 12:15:00.388207 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/29ab3e44-e399-4021-8cef-f80b2a17b3d8-config-volume\") pod \"collect-profiles-29421375-nmd58\" (UID: \"29ab3e44-e399-4021-8cef-f80b2a17b3d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-nmd58" Dec 09 12:15:00 crc kubenswrapper[4970]: I1209 12:15:00.388494 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2s54\" (UniqueName: \"kubernetes.io/projected/29ab3e44-e399-4021-8cef-f80b2a17b3d8-kube-api-access-m2s54\") pod \"collect-profiles-29421375-nmd58\" (UID: \"29ab3e44-e399-4021-8cef-f80b2a17b3d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-nmd58" Dec 09 12:15:00 crc kubenswrapper[4970]: I1209 12:15:00.388617 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/29ab3e44-e399-4021-8cef-f80b2a17b3d8-secret-volume\") pod \"collect-profiles-29421375-nmd58\" (UID: \"29ab3e44-e399-4021-8cef-f80b2a17b3d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-nmd58" Dec 09 
12:15:00 crc kubenswrapper[4970]: I1209 12:15:00.389088 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/29ab3e44-e399-4021-8cef-f80b2a17b3d8-config-volume\") pod \"collect-profiles-29421375-nmd58\" (UID: \"29ab3e44-e399-4021-8cef-f80b2a17b3d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-nmd58" Dec 09 12:15:00 crc kubenswrapper[4970]: I1209 12:15:00.393745 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/29ab3e44-e399-4021-8cef-f80b2a17b3d8-secret-volume\") pod \"collect-profiles-29421375-nmd58\" (UID: \"29ab3e44-e399-4021-8cef-f80b2a17b3d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-nmd58" Dec 09 12:15:00 crc kubenswrapper[4970]: I1209 12:15:00.406377 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2s54\" (UniqueName: \"kubernetes.io/projected/29ab3e44-e399-4021-8cef-f80b2a17b3d8-kube-api-access-m2s54\") pod \"collect-profiles-29421375-nmd58\" (UID: \"29ab3e44-e399-4021-8cef-f80b2a17b3d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-nmd58" Dec 09 12:15:00 crc kubenswrapper[4970]: I1209 12:15:00.495099 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-nmd58" Dec 09 12:15:00 crc kubenswrapper[4970]: I1209 12:15:00.902589 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421375-nmd58"] Dec 09 12:15:01 crc kubenswrapper[4970]: I1209 12:15:01.077183 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-nmd58" event={"ID":"29ab3e44-e399-4021-8cef-f80b2a17b3d8","Type":"ContainerStarted","Data":"a0a040197c14cde3f620aa64ca70decfadb2e857cf881f5ce6f77c3b70b222ce"} Dec 09 12:15:02 crc kubenswrapper[4970]: I1209 12:15:02.085209 4970 generic.go:334] "Generic (PLEG): container finished" podID="29ab3e44-e399-4021-8cef-f80b2a17b3d8" containerID="a8a0d24c0d4264af47ab5a828a00ef11fd5760845ca5549ef66021d9727c6ad4" exitCode=0 Dec 09 12:15:02 crc kubenswrapper[4970]: I1209 12:15:02.085321 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-nmd58" event={"ID":"29ab3e44-e399-4021-8cef-f80b2a17b3d8","Type":"ContainerDied","Data":"a8a0d24c0d4264af47ab5a828a00ef11fd5760845ca5549ef66021d9727c6ad4"} Dec 09 12:15:03 crc kubenswrapper[4970]: I1209 12:15:03.311083 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-nmd58" Dec 09 12:15:03 crc kubenswrapper[4970]: I1209 12:15:03.435792 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/29ab3e44-e399-4021-8cef-f80b2a17b3d8-config-volume\") pod \"29ab3e44-e399-4021-8cef-f80b2a17b3d8\" (UID: \"29ab3e44-e399-4021-8cef-f80b2a17b3d8\") " Dec 09 12:15:03 crc kubenswrapper[4970]: I1209 12:15:03.435842 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/29ab3e44-e399-4021-8cef-f80b2a17b3d8-secret-volume\") pod \"29ab3e44-e399-4021-8cef-f80b2a17b3d8\" (UID: \"29ab3e44-e399-4021-8cef-f80b2a17b3d8\") " Dec 09 12:15:03 crc kubenswrapper[4970]: I1209 12:15:03.435980 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2s54\" (UniqueName: \"kubernetes.io/projected/29ab3e44-e399-4021-8cef-f80b2a17b3d8-kube-api-access-m2s54\") pod \"29ab3e44-e399-4021-8cef-f80b2a17b3d8\" (UID: \"29ab3e44-e399-4021-8cef-f80b2a17b3d8\") " Dec 09 12:15:03 crc kubenswrapper[4970]: I1209 12:15:03.436568 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29ab3e44-e399-4021-8cef-f80b2a17b3d8-config-volume" (OuterVolumeSpecName: "config-volume") pod "29ab3e44-e399-4021-8cef-f80b2a17b3d8" (UID: "29ab3e44-e399-4021-8cef-f80b2a17b3d8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:15:03 crc kubenswrapper[4970]: I1209 12:15:03.441171 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29ab3e44-e399-4021-8cef-f80b2a17b3d8-kube-api-access-m2s54" (OuterVolumeSpecName: "kube-api-access-m2s54") pod "29ab3e44-e399-4021-8cef-f80b2a17b3d8" (UID: "29ab3e44-e399-4021-8cef-f80b2a17b3d8"). InnerVolumeSpecName "kube-api-access-m2s54". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:15:03 crc kubenswrapper[4970]: I1209 12:15:03.443408 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29ab3e44-e399-4021-8cef-f80b2a17b3d8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "29ab3e44-e399-4021-8cef-f80b2a17b3d8" (UID: "29ab3e44-e399-4021-8cef-f80b2a17b3d8"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:15:03 crc kubenswrapper[4970]: I1209 12:15:03.537657 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2s54\" (UniqueName: \"kubernetes.io/projected/29ab3e44-e399-4021-8cef-f80b2a17b3d8-kube-api-access-m2s54\") on node \"crc\" DevicePath \"\"" Dec 09 12:15:03 crc kubenswrapper[4970]: I1209 12:15:03.537698 4970 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/29ab3e44-e399-4021-8cef-f80b2a17b3d8-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 12:15:03 crc kubenswrapper[4970]: I1209 12:15:03.537711 4970 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/29ab3e44-e399-4021-8cef-f80b2a17b3d8-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 12:15:04 crc kubenswrapper[4970]: I1209 12:15:04.101432 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-nmd58" event={"ID":"29ab3e44-e399-4021-8cef-f80b2a17b3d8","Type":"ContainerDied","Data":"a0a040197c14cde3f620aa64ca70decfadb2e857cf881f5ce6f77c3b70b222ce"} Dec 09 12:15:04 crc kubenswrapper[4970]: I1209 12:15:04.101479 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0a040197c14cde3f620aa64ca70decfadb2e857cf881f5ce6f77c3b70b222ce" Dec 09 12:15:04 crc kubenswrapper[4970]: I1209 12:15:04.101569 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421375-nmd58" Dec 09 12:15:16 crc kubenswrapper[4970]: I1209 12:15:16.010789 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:15:16 crc kubenswrapper[4970]: I1209 12:15:16.011346 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:15:46 crc kubenswrapper[4970]: I1209 12:15:46.011054 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:15:46 crc kubenswrapper[4970]: I1209 12:15:46.011835 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:15:56 crc kubenswrapper[4970]: I1209 12:15:56.135662 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-64c6668cd-k7dhn" Dec 09 12:15:56 crc kubenswrapper[4970]: I1209 12:15:56.227764 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65xbr\" (UniqueName: \"kubernetes.io/projected/088add4e-94cf-435b-8190-75d69aa580ea-kube-api-access-65xbr\") pod \"088add4e-94cf-435b-8190-75d69aa580ea\" (UID: \"088add4e-94cf-435b-8190-75d69aa580ea\") " Dec 09 12:15:56 crc kubenswrapper[4970]: I1209 12:15:56.227833 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/088add4e-94cf-435b-8190-75d69aa580ea-metrics-server-audit-profiles\") pod \"088add4e-94cf-435b-8190-75d69aa580ea\" (UID: \"088add4e-94cf-435b-8190-75d69aa580ea\") " Dec 09 12:15:56 crc kubenswrapper[4970]: I1209 12:15:56.227865 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/088add4e-94cf-435b-8190-75d69aa580ea-client-ca-bundle\") pod \"088add4e-94cf-435b-8190-75d69aa580ea\" (UID: \"088add4e-94cf-435b-8190-75d69aa580ea\") " Dec 09 12:15:56 crc kubenswrapper[4970]: I1209 12:15:56.227898 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/088add4e-94cf-435b-8190-75d69aa580ea-secret-metrics-client-certs\") pod \"088add4e-94cf-435b-8190-75d69aa580ea\" (UID: \"088add4e-94cf-435b-8190-75d69aa580ea\") " Dec 09 12:15:56 crc kubenswrapper[4970]: I1209 12:15:56.227919 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/088add4e-94cf-435b-8190-75d69aa580ea-audit-log\") pod \"088add4e-94cf-435b-8190-75d69aa580ea\" (UID: \"088add4e-94cf-435b-8190-75d69aa580ea\") " Dec 09 12:15:56 crc kubenswrapper[4970]: I1209 12:15:56.227948 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/088add4e-94cf-435b-8190-75d69aa580ea-secret-metrics-server-tls\") pod \"088add4e-94cf-435b-8190-75d69aa580ea\" (UID: \"088add4e-94cf-435b-8190-75d69aa580ea\") " Dec 09 12:15:56 crc kubenswrapper[4970]: I1209 12:15:56.228005 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/088add4e-94cf-435b-8190-75d69aa580ea-configmap-kubelet-serving-ca-bundle\") pod \"088add4e-94cf-435b-8190-75d69aa580ea\" (UID: \"088add4e-94cf-435b-8190-75d69aa580ea\") " Dec 09 12:15:56 crc kubenswrapper[4970]: I1209 12:15:56.228856 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/088add4e-94cf-435b-8190-75d69aa580ea-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "088add4e-94cf-435b-8190-75d69aa580ea" (UID: "088add4e-94cf-435b-8190-75d69aa580ea"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:15:56 crc kubenswrapper[4970]: I1209 12:15:56.228968 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/088add4e-94cf-435b-8190-75d69aa580ea-metrics-server-audit-profiles" (OuterVolumeSpecName: "metrics-server-audit-profiles") pod "088add4e-94cf-435b-8190-75d69aa580ea" (UID: "088add4e-94cf-435b-8190-75d69aa580ea"). InnerVolumeSpecName "metrics-server-audit-profiles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:15:56 crc kubenswrapper[4970]: I1209 12:15:56.228989 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/088add4e-94cf-435b-8190-75d69aa580ea-audit-log" (OuterVolumeSpecName: "audit-log") pod "088add4e-94cf-435b-8190-75d69aa580ea" (UID: "088add4e-94cf-435b-8190-75d69aa580ea"). InnerVolumeSpecName "audit-log". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:15:56 crc kubenswrapper[4970]: I1209 12:15:56.233017 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/088add4e-94cf-435b-8190-75d69aa580ea-secret-metrics-server-tls" (OuterVolumeSpecName: "secret-metrics-server-tls") pod "088add4e-94cf-435b-8190-75d69aa580ea" (UID: "088add4e-94cf-435b-8190-75d69aa580ea"). InnerVolumeSpecName "secret-metrics-server-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:15:56 crc kubenswrapper[4970]: I1209 12:15:56.233281 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/088add4e-94cf-435b-8190-75d69aa580ea-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "088add4e-94cf-435b-8190-75d69aa580ea" (UID: "088add4e-94cf-435b-8190-75d69aa580ea"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:15:56 crc kubenswrapper[4970]: I1209 12:15:56.233323 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/088add4e-94cf-435b-8190-75d69aa580ea-client-ca-bundle" (OuterVolumeSpecName: "client-ca-bundle") pod "088add4e-94cf-435b-8190-75d69aa580ea" (UID: "088add4e-94cf-435b-8190-75d69aa580ea"). InnerVolumeSpecName "client-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:15:56 crc kubenswrapper[4970]: I1209 12:15:56.234829 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/088add4e-94cf-435b-8190-75d69aa580ea-kube-api-access-65xbr" (OuterVolumeSpecName: "kube-api-access-65xbr") pod "088add4e-94cf-435b-8190-75d69aa580ea" (UID: "088add4e-94cf-435b-8190-75d69aa580ea"). InnerVolumeSpecName "kube-api-access-65xbr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:15:56 crc kubenswrapper[4970]: I1209 12:15:56.329131 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65xbr\" (UniqueName: \"kubernetes.io/projected/088add4e-94cf-435b-8190-75d69aa580ea-kube-api-access-65xbr\") on node \"crc\" DevicePath \"\"" Dec 09 12:15:56 crc kubenswrapper[4970]: I1209 12:15:56.329169 4970 reconciler_common.go:293] "Volume detached for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/088add4e-94cf-435b-8190-75d69aa580ea-metrics-server-audit-profiles\") on node \"crc\" DevicePath \"\"" Dec 09 12:15:56 crc kubenswrapper[4970]: I1209 12:15:56.329180 4970 reconciler_common.go:293] "Volume detached for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/088add4e-94cf-435b-8190-75d69aa580ea-client-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:15:56 crc kubenswrapper[4970]: I1209 12:15:56.329190 4970 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/088add4e-94cf-435b-8190-75d69aa580ea-secret-metrics-client-certs\") on node \"crc\" DevicePath \"\"" Dec 09 12:15:56 crc kubenswrapper[4970]: I1209 12:15:56.329202 4970 reconciler_common.go:293] "Volume detached for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/088add4e-94cf-435b-8190-75d69aa580ea-audit-log\") on node \"crc\" DevicePath \"\"" Dec 09 12:15:56 crc kubenswrapper[4970]: I1209 12:15:56.329211 4970 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/088add4e-94cf-435b-8190-75d69aa580ea-secret-metrics-server-tls\") on node \"crc\" DevicePath \"\"" Dec 09 12:15:56 crc kubenswrapper[4970]: I1209 12:15:56.329219 4970 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/088add4e-94cf-435b-8190-75d69aa580ea-configmap-kubelet-serving-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:15:56 crc kubenswrapper[4970]: I1209 12:15:56.472218 4970 generic.go:334] "Generic (PLEG): container finished" podID="088add4e-94cf-435b-8190-75d69aa580ea" containerID="35588acfeb3a7be892b68e9399726d5910e3bd58af734578c3ed888127fd6f2d" exitCode=0 Dec 09 12:15:56 crc kubenswrapper[4970]: I1209 12:15:56.472274 4970 util.go:48] "No ready sandbox for pod can be found. 
Dec 09 12:15:56 crc kubenswrapper[4970]: I1209 12:15:56.472274 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-64c6668cd-k7dhn"
Dec 09 12:15:56 crc kubenswrapper[4970]: I1209 12:15:56.472303 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-64c6668cd-k7dhn" event={"ID":"088add4e-94cf-435b-8190-75d69aa580ea","Type":"ContainerDied","Data":"35588acfeb3a7be892b68e9399726d5910e3bd58af734578c3ed888127fd6f2d"}
Dec 09 12:15:56 crc kubenswrapper[4970]: I1209 12:15:56.472365 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-64c6668cd-k7dhn" event={"ID":"088add4e-94cf-435b-8190-75d69aa580ea","Type":"ContainerDied","Data":"c703f0fe694b08bc79e4a3711c3c2d17769a7523d7963932ae900276f98a2577"}
Dec 09 12:15:56 crc kubenswrapper[4970]: I1209 12:15:56.472408 4970 scope.go:117] "RemoveContainer" containerID="35588acfeb3a7be892b68e9399726d5910e3bd58af734578c3ed888127fd6f2d"
Dec 09 12:15:56 crc kubenswrapper[4970]: I1209 12:15:56.504435 4970 scope.go:117] "RemoveContainer" containerID="35588acfeb3a7be892b68e9399726d5910e3bd58af734578c3ed888127fd6f2d"
Dec 09 12:15:56 crc kubenswrapper[4970]: E1209 12:15:56.504966 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35588acfeb3a7be892b68e9399726d5910e3bd58af734578c3ed888127fd6f2d\": container with ID starting with 35588acfeb3a7be892b68e9399726d5910e3bd58af734578c3ed888127fd6f2d not found: ID does not exist" containerID="35588acfeb3a7be892b68e9399726d5910e3bd58af734578c3ed888127fd6f2d"
Dec 09 12:15:56 crc kubenswrapper[4970]: I1209 12:15:56.505008 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35588acfeb3a7be892b68e9399726d5910e3bd58af734578c3ed888127fd6f2d"} err="failed to get container status \"35588acfeb3a7be892b68e9399726d5910e3bd58af734578c3ed888127fd6f2d\": rpc error: code = NotFound desc = could not find container \"35588acfeb3a7be892b68e9399726d5910e3bd58af734578c3ed888127fd6f2d\": container with ID starting with 35588acfeb3a7be892b68e9399726d5910e3bd58af734578c3ed888127fd6f2d not found: ID does not exist"
Dec 09 12:15:56 crc kubenswrapper[4970]: I1209 12:15:56.518019 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-64c6668cd-k7dhn"]
Dec 09 12:15:56 crc kubenswrapper[4970]: I1209 12:15:56.523665 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/metrics-server-64c6668cd-k7dhn"]
Dec 09 12:15:57 crc kubenswrapper[4970]: I1209 12:15:57.820549 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="088add4e-94cf-435b-8190-75d69aa580ea" path="/var/lib/kubelet/pods/088add4e-94cf-435b-8190-75d69aa580ea/volumes"
Dec 09 12:16:16 crc kubenswrapper[4970]: I1209 12:16:16.011098 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 12:16:16 crc kubenswrapper[4970]: I1209 12:16:16.011830 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
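The liveness probe failure above and the kill/restart that follows below share the same pod name, so the pairing can be pulled out mechanically. A rough sketch, assuming the message formats in this excerpt (liveness_kills and the patterns are illustrative, not part of any kubelet tooling):

```python
import re

PROBE_FAIL = re.compile(r'"Probe failed" probeType="Liveness" pod="([^"]+)"')
KILLING = re.compile(
    r'"Killing container with a grace period" pod="([^"]+)".*?gracePeriod=(\d+)')

def liveness_kills(lines):
    """Yield (pod, gracePeriod) when a failed liveness probe leads to a kill."""
    failed = set()
    for line in lines:
        if (m := PROBE_FAIL.search(line)):
            failed.add(m.group(1))
        elif (m := KILLING.search(line)) and m.group(1) in failed:
            yield m.group(1), int(m.group(2))

# Applied to this excerpt it yields a single hit:
# ("openshift-machine-config-operator/machine-config-daemon-rtdjh", 600).
# The ovnkube-node kills later in the log are excluded, since that pod was
# deleted via the API rather than failing a probe.
```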
Dec 09 12:16:16 crc kubenswrapper[4970]: I1209 12:16:16.011908 4970 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh"
Dec 09 12:16:16 crc kubenswrapper[4970]: I1209 12:16:16.012982 4970 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"414e7804f598b3aefedf34c5b0fdde4ed9406e7d4b0a5b8d5d6b44c8067d40b9"} pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 09 12:16:16 crc kubenswrapper[4970]: I1209 12:16:16.013105 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" containerID="cri-o://414e7804f598b3aefedf34c5b0fdde4ed9406e7d4b0a5b8d5d6b44c8067d40b9" gracePeriod=600
Dec 09 12:16:16 crc kubenswrapper[4970]: I1209 12:16:16.606178 4970 generic.go:334] "Generic (PLEG): container finished" podID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerID="414e7804f598b3aefedf34c5b0fdde4ed9406e7d4b0a5b8d5d6b44c8067d40b9" exitCode=0
Dec 09 12:16:16 crc kubenswrapper[4970]: I1209 12:16:16.606276 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerDied","Data":"414e7804f598b3aefedf34c5b0fdde4ed9406e7d4b0a5b8d5d6b44c8067d40b9"}
Dec 09 12:16:16 crc kubenswrapper[4970]: I1209 12:16:16.606775 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerStarted","Data":"8a43c19be68e1c39db2fa2acbdc1174785af9071889cf2a0715c26d8f86ac8be"}
Dec 09 12:16:16 crc kubenswrapper[4970]: I1209 12:16:16.606799 4970 scope.go:117] "RemoveContainer" containerID="ceaaeecb8c51f67adfcf9a5e80757db7376be95abcbbe814f7fcd0b58bf39bf2"
Dec 09 12:16:41 crc kubenswrapper[4970]: I1209 12:16:41.851878 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gg4rh"]
Dec 09 12:16:41 crc kubenswrapper[4970]: E1209 12:16:41.852852 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="088add4e-94cf-435b-8190-75d69aa580ea" containerName="metrics-server"
Dec 09 12:16:41 crc kubenswrapper[4970]: I1209 12:16:41.852872 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="088add4e-94cf-435b-8190-75d69aa580ea" containerName="metrics-server"
Dec 09 12:16:41 crc kubenswrapper[4970]: E1209 12:16:41.852911 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29ab3e44-e399-4021-8cef-f80b2a17b3d8" containerName="collect-profiles"
Dec 09 12:16:41 crc kubenswrapper[4970]: I1209 12:16:41.852923 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="29ab3e44-e399-4021-8cef-f80b2a17b3d8" containerName="collect-profiles"
Dec 09 12:16:41 crc kubenswrapper[4970]: I1209 12:16:41.853110 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="29ab3e44-e399-4021-8cef-f80b2a17b3d8" containerName="collect-profiles"
Dec 09 12:16:41 crc kubenswrapper[4970]: I1209 12:16:41.853132 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="088add4e-94cf-435b-8190-75d69aa580ea" containerName="metrics-server"
Dec 09 12:16:41 crc kubenswrapper[4970]: I1209
12:16:41.854651 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gg4rh" Dec 09 12:16:41 crc kubenswrapper[4970]: I1209 12:16:41.860204 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gg4rh"] Dec 09 12:16:41 crc kubenswrapper[4970]: I1209 12:16:41.882313 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce40a378-1c98-4f0e-9f2d-065ea5718fb6-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gg4rh\" (UID: \"ce40a378-1c98-4f0e-9f2d-065ea5718fb6\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gg4rh" Dec 09 12:16:41 crc kubenswrapper[4970]: I1209 12:16:41.882414 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fjfv\" (UniqueName: \"kubernetes.io/projected/ce40a378-1c98-4f0e-9f2d-065ea5718fb6-kube-api-access-5fjfv\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gg4rh\" (UID: \"ce40a378-1c98-4f0e-9f2d-065ea5718fb6\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gg4rh" Dec 09 12:16:41 crc kubenswrapper[4970]: I1209 12:16:41.882473 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce40a378-1c98-4f0e-9f2d-065ea5718fb6-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gg4rh\" (UID: \"ce40a378-1c98-4f0e-9f2d-065ea5718fb6\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gg4rh" Dec 09 12:16:41 crc kubenswrapper[4970]: I1209 12:16:41.892357 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 09 12:16:41 crc kubenswrapper[4970]: I1209 12:16:41.983614 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce40a378-1c98-4f0e-9f2d-065ea5718fb6-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gg4rh\" (UID: \"ce40a378-1c98-4f0e-9f2d-065ea5718fb6\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gg4rh" Dec 09 12:16:41 crc kubenswrapper[4970]: I1209 12:16:41.983667 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce40a378-1c98-4f0e-9f2d-065ea5718fb6-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gg4rh\" (UID: \"ce40a378-1c98-4f0e-9f2d-065ea5718fb6\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gg4rh" Dec 09 12:16:41 crc kubenswrapper[4970]: I1209 12:16:41.983773 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fjfv\" (UniqueName: \"kubernetes.io/projected/ce40a378-1c98-4f0e-9f2d-065ea5718fb6-kube-api-access-5fjfv\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gg4rh\" (UID: \"ce40a378-1c98-4f0e-9f2d-065ea5718fb6\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gg4rh" Dec 09 12:16:41 crc kubenswrapper[4970]: I1209 12:16:41.984143 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"util\" (UniqueName: \"kubernetes.io/empty-dir/ce40a378-1c98-4f0e-9f2d-065ea5718fb6-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gg4rh\" (UID: \"ce40a378-1c98-4f0e-9f2d-065ea5718fb6\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gg4rh" Dec 09 12:16:41 crc kubenswrapper[4970]: I1209 12:16:41.984292 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce40a378-1c98-4f0e-9f2d-065ea5718fb6-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gg4rh\" (UID: \"ce40a378-1c98-4f0e-9f2d-065ea5718fb6\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gg4rh" Dec 09 12:16:42 crc kubenswrapper[4970]: I1209 12:16:42.002352 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fjfv\" (UniqueName: \"kubernetes.io/projected/ce40a378-1c98-4f0e-9f2d-065ea5718fb6-kube-api-access-5fjfv\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gg4rh\" (UID: \"ce40a378-1c98-4f0e-9f2d-065ea5718fb6\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gg4rh" Dec 09 12:16:42 crc kubenswrapper[4970]: I1209 12:16:42.220652 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gg4rh" Dec 09 12:16:42 crc kubenswrapper[4970]: I1209 12:16:42.409939 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gg4rh"] Dec 09 12:16:42 crc kubenswrapper[4970]: I1209 12:16:42.787177 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gg4rh" event={"ID":"ce40a378-1c98-4f0e-9f2d-065ea5718fb6","Type":"ContainerStarted","Data":"4f30a5db5ba124326d4bc29c69b3c31983e7737f4af278ac63f64ac1092aa458"} Dec 09 12:16:42 crc kubenswrapper[4970]: I1209 12:16:42.787863 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gg4rh" event={"ID":"ce40a378-1c98-4f0e-9f2d-065ea5718fb6","Type":"ContainerStarted","Data":"5bbd4454bfda7605b6bf3256ecad6b3b8fe90cc5c5f719a0f648b8cc51d7e003"} Dec 09 12:16:43 crc kubenswrapper[4970]: I1209 12:16:43.795733 4970 generic.go:334] "Generic (PLEG): container finished" podID="ce40a378-1c98-4f0e-9f2d-065ea5718fb6" containerID="4f30a5db5ba124326d4bc29c69b3c31983e7737f4af278ac63f64ac1092aa458" exitCode=0 Dec 09 12:16:43 crc kubenswrapper[4970]: I1209 12:16:43.795793 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gg4rh" event={"ID":"ce40a378-1c98-4f0e-9f2d-065ea5718fb6","Type":"ContainerDied","Data":"4f30a5db5ba124326d4bc29c69b3c31983e7737f4af278ac63f64ac1092aa458"} Dec 09 12:16:43 crc kubenswrapper[4970]: I1209 12:16:43.798291 4970 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 12:16:47 crc kubenswrapper[4970]: I1209 12:16:47.825789 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gg4rh" event={"ID":"ce40a378-1c98-4f0e-9f2d-065ea5718fb6","Type":"ContainerStarted","Data":"750295eed164159db4bf38634f4481b1cef782ce04d245446b19deb8d603e79f"} Dec 09 12:16:48 
crc kubenswrapper[4970]: I1209 12:16:48.832610 4970 generic.go:334] "Generic (PLEG): container finished" podID="ce40a378-1c98-4f0e-9f2d-065ea5718fb6" containerID="750295eed164159db4bf38634f4481b1cef782ce04d245446b19deb8d603e79f" exitCode=0 Dec 09 12:16:48 crc kubenswrapper[4970]: I1209 12:16:48.832919 4970 generic.go:334] "Generic (PLEG): container finished" podID="ce40a378-1c98-4f0e-9f2d-065ea5718fb6" containerID="fab6ca77c18f5b3ce6b901e112bb62eaa3eb33d10fc387c5d2b04fc804346e2d" exitCode=0 Dec 09 12:16:48 crc kubenswrapper[4970]: I1209 12:16:48.832700 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gg4rh" event={"ID":"ce40a378-1c98-4f0e-9f2d-065ea5718fb6","Type":"ContainerDied","Data":"750295eed164159db4bf38634f4481b1cef782ce04d245446b19deb8d603e79f"} Dec 09 12:16:48 crc kubenswrapper[4970]: I1209 12:16:48.832953 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gg4rh" event={"ID":"ce40a378-1c98-4f0e-9f2d-065ea5718fb6","Type":"ContainerDied","Data":"fab6ca77c18f5b3ce6b901e112bb62eaa3eb33d10fc387c5d2b04fc804346e2d"} Dec 09 12:16:50 crc kubenswrapper[4970]: I1209 12:16:50.128337 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gg4rh" Dec 09 12:16:50 crc kubenswrapper[4970]: I1209 12:16:50.306167 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce40a378-1c98-4f0e-9f2d-065ea5718fb6-bundle\") pod \"ce40a378-1c98-4f0e-9f2d-065ea5718fb6\" (UID: \"ce40a378-1c98-4f0e-9f2d-065ea5718fb6\") " Dec 09 12:16:50 crc kubenswrapper[4970]: I1209 12:16:50.306298 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fjfv\" (UniqueName: \"kubernetes.io/projected/ce40a378-1c98-4f0e-9f2d-065ea5718fb6-kube-api-access-5fjfv\") pod \"ce40a378-1c98-4f0e-9f2d-065ea5718fb6\" (UID: \"ce40a378-1c98-4f0e-9f2d-065ea5718fb6\") " Dec 09 12:16:50 crc kubenswrapper[4970]: I1209 12:16:50.306395 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce40a378-1c98-4f0e-9f2d-065ea5718fb6-util\") pod \"ce40a378-1c98-4f0e-9f2d-065ea5718fb6\" (UID: \"ce40a378-1c98-4f0e-9f2d-065ea5718fb6\") " Dec 09 12:16:50 crc kubenswrapper[4970]: I1209 12:16:50.311486 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce40a378-1c98-4f0e-9f2d-065ea5718fb6-bundle" (OuterVolumeSpecName: "bundle") pod "ce40a378-1c98-4f0e-9f2d-065ea5718fb6" (UID: "ce40a378-1c98-4f0e-9f2d-065ea5718fb6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:16:50 crc kubenswrapper[4970]: I1209 12:16:50.315493 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce40a378-1c98-4f0e-9f2d-065ea5718fb6-kube-api-access-5fjfv" (OuterVolumeSpecName: "kube-api-access-5fjfv") pod "ce40a378-1c98-4f0e-9f2d-065ea5718fb6" (UID: "ce40a378-1c98-4f0e-9f2d-065ea5718fb6"). InnerVolumeSpecName "kube-api-access-5fjfv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:16:50 crc kubenswrapper[4970]: I1209 12:16:50.316805 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce40a378-1c98-4f0e-9f2d-065ea5718fb6-util" (OuterVolumeSpecName: "util") pod "ce40a378-1c98-4f0e-9f2d-065ea5718fb6" (UID: "ce40a378-1c98-4f0e-9f2d-065ea5718fb6"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:16:50 crc kubenswrapper[4970]: I1209 12:16:50.408899 4970 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ce40a378-1c98-4f0e-9f2d-065ea5718fb6-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:50 crc kubenswrapper[4970]: I1209 12:16:50.408963 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fjfv\" (UniqueName: \"kubernetes.io/projected/ce40a378-1c98-4f0e-9f2d-065ea5718fb6-kube-api-access-5fjfv\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:50 crc kubenswrapper[4970]: I1209 12:16:50.409004 4970 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ce40a378-1c98-4f0e-9f2d-065ea5718fb6-util\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:50 crc kubenswrapper[4970]: I1209 12:16:50.850614 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gg4rh" event={"ID":"ce40a378-1c98-4f0e-9f2d-065ea5718fb6","Type":"ContainerDied","Data":"5bbd4454bfda7605b6bf3256ecad6b3b8fe90cc5c5f719a0f648b8cc51d7e003"} Dec 09 12:16:50 crc kubenswrapper[4970]: I1209 12:16:50.850650 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bbd4454bfda7605b6bf3256ecad6b3b8fe90cc5c5f719a0f648b8cc51d7e003" Dec 09 12:16:50 crc kubenswrapper[4970]: I1209 12:16:50.850773 4970 util.go:48] "No ready sandbox for pod can be found. 
Dec 09 12:16:50 crc kubenswrapper[4970]: I1209 12:16:50.850773 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gg4rh"
Dec 09 12:16:52 crc kubenswrapper[4970]: I1209 12:16:52.794417 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-sxdvn"]
Dec 09 12:16:52 crc kubenswrapper[4970]: I1209 12:16:52.795146 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="ovn-controller" containerID="cri-o://eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45" gracePeriod=30
Dec 09 12:16:52 crc kubenswrapper[4970]: I1209 12:16:52.795211 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="nbdb" containerID="cri-o://ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb" gracePeriod=30
Dec 09 12:16:52 crc kubenswrapper[4970]: I1209 12:16:52.795231 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="sbdb" containerID="cri-o://bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457" gracePeriod=30
Dec 09 12:16:52 crc kubenswrapper[4970]: I1209 12:16:52.795262 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b" gracePeriod=30
Dec 09 12:16:52 crc kubenswrapper[4970]: I1209 12:16:52.795275 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="northd" containerID="cri-o://71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8" gracePeriod=30
Dec 09 12:16:52 crc kubenswrapper[4970]: I1209 12:16:52.795286 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="ovn-acl-logging" containerID="cri-o://d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47" gracePeriod=30
Dec 09 12:16:52 crc kubenswrapper[4970]: I1209 12:16:52.795328 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="kube-rbac-proxy-node" containerID="cri-o://ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8" gracePeriod=30
Dec 09 12:16:52 crc kubenswrapper[4970]: I1209 12:16:52.823420 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="ovnkube-controller" containerID="cri-o://6aa5d6628357fda1bcdb5cf923e740278d39c3a4e8eed248727e8db456cf0bc6" gracePeriod=30
Dec 09 12:16:54 crc kubenswrapper[4970]: E1209 12:16:54.130153 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures:
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod515fe67d_b7b7_4edb_a2b0_6f8794e8d802.slice/crio-conmon-eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod515fe67d_b7b7_4edb_a2b0_6f8794e8d802.slice/crio-conmon-bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod515fe67d_b7b7_4edb_a2b0_6f8794e8d802.slice/crio-conmon-7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod515fe67d_b7b7_4edb_a2b0_6f8794e8d802.slice/crio-conmon-6aa5d6628357fda1bcdb5cf923e740278d39c3a4e8eed248727e8db456cf0bc6.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod515fe67d_b7b7_4edb_a2b0_6f8794e8d802.slice/crio-conmon-ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod515fe67d_b7b7_4edb_a2b0_6f8794e8d802.slice/crio-conmon-ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8.scope\": RecentStats: unable to find data in memory cache]" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.578071 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sxdvn_515fe67d-b7b7-4edb-a2b0-6f8794e8d802/ovnkube-controller/3.log" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.579947 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sxdvn_515fe67d-b7b7-4edb-a2b0-6f8794e8d802/ovn-acl-logging/0.log" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.580526 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sxdvn_515fe67d-b7b7-4edb-a2b0-6f8794e8d802/ovn-controller/0.log" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.581070 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.664869 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vc8lk"] Dec 09 12:16:54 crc kubenswrapper[4970]: E1209 12:16:54.665114 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="nbdb" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.665134 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="nbdb" Dec 09 12:16:54 crc kubenswrapper[4970]: E1209 12:16:54.665147 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce40a378-1c98-4f0e-9f2d-065ea5718fb6" containerName="pull" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.665154 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce40a378-1c98-4f0e-9f2d-065ea5718fb6" containerName="pull" Dec 09 12:16:54 crc kubenswrapper[4970]: E1209 12:16:54.665162 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="ovnkube-controller" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.665170 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="ovnkube-controller" Dec 09 12:16:54 crc kubenswrapper[4970]: E1209 12:16:54.665192 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="ovnkube-controller" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.665198 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="ovnkube-controller" Dec 09 12:16:54 crc kubenswrapper[4970]: E1209 12:16:54.665205 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce40a378-1c98-4f0e-9f2d-065ea5718fb6" containerName="util" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.665211 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce40a378-1c98-4f0e-9f2d-065ea5718fb6" containerName="util" Dec 09 12:16:54 crc kubenswrapper[4970]: E1209 12:16:54.665218 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="ovn-controller" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.665223 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="ovn-controller" Dec 09 12:16:54 crc kubenswrapper[4970]: E1209 12:16:54.665236 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="kubecfg-setup" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.665262 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="kubecfg-setup" Dec 09 12:16:54 crc kubenswrapper[4970]: E1209 12:16:54.665275 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce40a378-1c98-4f0e-9f2d-065ea5718fb6" containerName="extract" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.665282 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce40a378-1c98-4f0e-9f2d-065ea5718fb6" containerName="extract" Dec 09 12:16:54 crc kubenswrapper[4970]: E1209 12:16:54.665292 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="ovnkube-controller" Dec 
09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.665298 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="ovnkube-controller" Dec 09 12:16:54 crc kubenswrapper[4970]: E1209 12:16:54.665308 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="ovn-acl-logging" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.665313 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="ovn-acl-logging" Dec 09 12:16:54 crc kubenswrapper[4970]: E1209 12:16:54.665326 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="ovnkube-controller" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.665333 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="ovnkube-controller" Dec 09 12:16:54 crc kubenswrapper[4970]: E1209 12:16:54.665343 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="kube-rbac-proxy-node" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.665350 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="kube-rbac-proxy-node" Dec 09 12:16:54 crc kubenswrapper[4970]: E1209 12:16:54.665360 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="kube-rbac-proxy-ovn-metrics" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.665368 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="kube-rbac-proxy-ovn-metrics" Dec 09 12:16:54 crc kubenswrapper[4970]: E1209 12:16:54.665381 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="sbdb" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.665388 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="sbdb" Dec 09 12:16:54 crc kubenswrapper[4970]: E1209 12:16:54.665398 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="northd" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.665406 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="northd" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.665533 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="sbdb" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.665552 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="kube-rbac-proxy-node" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.665560 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="nbdb" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.665568 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="northd" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.665577 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" 
containerName="ovn-controller" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.665588 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="ovnkube-controller" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.665598 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="ovnkube-controller" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.665606 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce40a378-1c98-4f0e-9f2d-065ea5718fb6" containerName="extract" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.665613 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="ovnkube-controller" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.665623 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="ovn-acl-logging" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.665630 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="kube-rbac-proxy-ovn-metrics" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.665640 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="ovnkube-controller" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.665649 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="ovnkube-controller" Dec 09 12:16:54 crc kubenswrapper[4970]: E1209 12:16:54.665758 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="ovnkube-controller" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.665767 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerName="ovnkube-controller" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.668017 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.769691 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-run-systemd\") pod \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.769750 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-kubelet\") pod \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.769776 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-etc-openvswitch\") pod \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.769804 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-slash\") pod \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.769836 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-run-ovn-kubernetes\") pod \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.769893 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdl75\" (UniqueName: \"kubernetes.io/projected/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-kube-api-access-xdl75\") pod \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.769936 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-ovnkube-script-lib\") pod \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.769969 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-ovnkube-config\") pod \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.770018 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-env-overrides\") pod \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.770044 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-run-ovn\") pod 
\"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.770079 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-var-lib-cni-networks-ovn-kubernetes\") pod \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.770097 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-log-socket\") pod \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.770120 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-var-lib-openvswitch\") pod \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.770140 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-cni-bin\") pod \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.770167 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-systemd-units\") pod \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.770194 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-run-netns\") pod \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.770222 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-cni-netd\") pod \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.770260 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-ovn-node-metrics-cert\") pod \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.770291 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-run-openvswitch\") pod \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.770313 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-node-log\") pod \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\" (UID: \"515fe67d-b7b7-4edb-a2b0-6f8794e8d802\") " Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.770469 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-host-cni-netd\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.770502 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-var-lib-openvswitch\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.770529 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/50b78386-c866-4133-bcee-858f47ff15c7-ovn-node-metrics-cert\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.770558 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-node-log\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.770601 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/50b78386-c866-4133-bcee-858f47ff15c7-ovnkube-script-lib\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.770621 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt79l\" (UniqueName: \"kubernetes.io/projected/50b78386-c866-4133-bcee-858f47ff15c7-kube-api-access-qt79l\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.770647 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-systemd-units\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.770673 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-host-slash\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.770694 4970 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/50b78386-c866-4133-bcee-858f47ff15c7-ovnkube-config\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.770719 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-etc-openvswitch\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.770740 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/50b78386-c866-4133-bcee-858f47ff15c7-env-overrides\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.770768 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-log-socket\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.770797 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-run-ovn\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.770820 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-run-systemd\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.770863 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-host-cni-bin\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.770882 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-run-openvswitch\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.770903 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-host-run-netns\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 
12:16:54.770923 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-host-kubelet\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.770946 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.770971 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-host-run-ovn-kubernetes\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.771826 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-node-log" (OuterVolumeSpecName: "node-log") pod "515fe67d-b7b7-4edb-a2b0-6f8794e8d802" (UID: "515fe67d-b7b7-4edb-a2b0-6f8794e8d802"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.771891 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "515fe67d-b7b7-4edb-a2b0-6f8794e8d802" (UID: "515fe67d-b7b7-4edb-a2b0-6f8794e8d802"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.771911 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "515fe67d-b7b7-4edb-a2b0-6f8794e8d802" (UID: "515fe67d-b7b7-4edb-a2b0-6f8794e8d802"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.771928 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-slash" (OuterVolumeSpecName: "host-slash") pod "515fe67d-b7b7-4edb-a2b0-6f8794e8d802" (UID: "515fe67d-b7b7-4edb-a2b0-6f8794e8d802"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.771945 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "515fe67d-b7b7-4edb-a2b0-6f8794e8d802" (UID: "515fe67d-b7b7-4edb-a2b0-6f8794e8d802"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.772187 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "515fe67d-b7b7-4edb-a2b0-6f8794e8d802" (UID: "515fe67d-b7b7-4edb-a2b0-6f8794e8d802"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.772230 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-log-socket" (OuterVolumeSpecName: "log-socket") pod "515fe67d-b7b7-4edb-a2b0-6f8794e8d802" (UID: "515fe67d-b7b7-4edb-a2b0-6f8794e8d802"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.772333 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "515fe67d-b7b7-4edb-a2b0-6f8794e8d802" (UID: "515fe67d-b7b7-4edb-a2b0-6f8794e8d802"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.772384 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "515fe67d-b7b7-4edb-a2b0-6f8794e8d802" (UID: "515fe67d-b7b7-4edb-a2b0-6f8794e8d802"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.772404 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "515fe67d-b7b7-4edb-a2b0-6f8794e8d802" (UID: "515fe67d-b7b7-4edb-a2b0-6f8794e8d802"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.772404 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "515fe67d-b7b7-4edb-a2b0-6f8794e8d802" (UID: "515fe67d-b7b7-4edb-a2b0-6f8794e8d802"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.772422 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "515fe67d-b7b7-4edb-a2b0-6f8794e8d802" (UID: "515fe67d-b7b7-4edb-a2b0-6f8794e8d802"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.772449 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "515fe67d-b7b7-4edb-a2b0-6f8794e8d802" (UID: "515fe67d-b7b7-4edb-a2b0-6f8794e8d802"). 
InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.772459 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "515fe67d-b7b7-4edb-a2b0-6f8794e8d802" (UID: "515fe67d-b7b7-4edb-a2b0-6f8794e8d802"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.772563 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "515fe67d-b7b7-4edb-a2b0-6f8794e8d802" (UID: "515fe67d-b7b7-4edb-a2b0-6f8794e8d802"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.772714 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "515fe67d-b7b7-4edb-a2b0-6f8794e8d802" (UID: "515fe67d-b7b7-4edb-a2b0-6f8794e8d802"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.772861 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "515fe67d-b7b7-4edb-a2b0-6f8794e8d802" (UID: "515fe67d-b7b7-4edb-a2b0-6f8794e8d802"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.786733 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-kube-api-access-xdl75" (OuterVolumeSpecName: "kube-api-access-xdl75") pod "515fe67d-b7b7-4edb-a2b0-6f8794e8d802" (UID: "515fe67d-b7b7-4edb-a2b0-6f8794e8d802"). InnerVolumeSpecName "kube-api-access-xdl75". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.787670 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "515fe67d-b7b7-4edb-a2b0-6f8794e8d802" (UID: "515fe67d-b7b7-4edb-a2b0-6f8794e8d802"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.790213 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "515fe67d-b7b7-4edb-a2b0-6f8794e8d802" (UID: "515fe67d-b7b7-4edb-a2b0-6f8794e8d802"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.871704 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-host-cni-netd\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.871748 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-var-lib-openvswitch\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.871774 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/50b78386-c866-4133-bcee-858f47ff15c7-ovn-node-metrics-cert\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.871799 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-node-log\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.871831 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-var-lib-openvswitch\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.871845 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/50b78386-c866-4133-bcee-858f47ff15c7-ovnkube-script-lib\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.871796 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-host-cni-netd\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.871868 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt79l\" (UniqueName: \"kubernetes.io/projected/50b78386-c866-4133-bcee-858f47ff15c7-kube-api-access-qt79l\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.871926 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-systemd-units\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc 
kubenswrapper[4970]: I1209 12:16:54.871938 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-node-log\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.871974 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-host-slash\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872000 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/50b78386-c866-4133-bcee-858f47ff15c7-ovnkube-config\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872023 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-systemd-units\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872035 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/50b78386-c866-4133-bcee-858f47ff15c7-env-overrides\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872121 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-etc-openvswitch\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872080 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-host-slash\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872157 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-log-socket\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872175 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-etc-openvswitch\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872183 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-run-ovn\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872202 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-run-systemd\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872230 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-host-cni-bin\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872261 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-host-run-netns\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872277 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-run-systemd\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872293 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-host-cni-bin\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872299 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-run-openvswitch\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872279 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-run-openvswitch\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872313 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-run-ovn\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872357 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-host-kubelet\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872323 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-host-run-netns\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872335 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-host-kubelet\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872267 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-log-socket\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872395 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872436 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-host-run-ovn-kubernetes\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872493 4970 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872507 4970 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872519 4970 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872532 4970 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872542 4970 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872552 4970 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-log-socket\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872565 4970 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872575 4970 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872586 4970 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872597 4970 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872608 4970 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872617 4970 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872627 4970 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872637 4970 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-node-log\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872647 4970 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872658 4970 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872669 4970 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872679 4970 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-slash\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872691 4970 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872704 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdl75\" (UniqueName: \"kubernetes.io/projected/515fe67d-b7b7-4edb-a2b0-6f8794e8d802-kube-api-access-xdl75\") on node \"crc\" DevicePath \"\"" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872733 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-host-run-ovn-kubernetes\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872737 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/50b78386-c866-4133-bcee-858f47ff15c7-ovnkube-config\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872765 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/50b78386-c866-4133-bcee-858f47ff15c7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872791 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/50b78386-c866-4133-bcee-858f47ff15c7-ovnkube-script-lib\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.872900 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/50b78386-c866-4133-bcee-858f47ff15c7-env-overrides\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.878863 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/50b78386-c866-4133-bcee-858f47ff15c7-ovn-node-metrics-cert\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.879930 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sgdqg_81da4c74-d93e-4a7a-848a-c3539268368b/kube-multus/2.log" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.880482 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sgdqg_81da4c74-d93e-4a7a-848a-c3539268368b/kube-multus/1.log" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.880541 4970 generic.go:334] "Generic (PLEG): container finished" podID="81da4c74-d93e-4a7a-848a-c3539268368b" containerID="102b9e9e6ec75dba5b2b5ece21c19f866f26af414d0c0543cb6313adbab44221" exitCode=2 Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.880616 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-sgdqg" event={"ID":"81da4c74-d93e-4a7a-848a-c3539268368b","Type":"ContainerDied","Data":"102b9e9e6ec75dba5b2b5ece21c19f866f26af414d0c0543cb6313adbab44221"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.880687 4970 scope.go:117] "RemoveContainer" containerID="e2ab60fae86ad3f0324f90bd9aec3bd2d65698da0aad755faa0db18178f08bee" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.881218 4970 scope.go:117] "RemoveContainer" containerID="102b9e9e6ec75dba5b2b5ece21c19f866f26af414d0c0543cb6313adbab44221" Dec 09 12:16:54 crc kubenswrapper[4970]: E1209 12:16:54.881474 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-sgdqg_openshift-multus(81da4c74-d93e-4a7a-848a-c3539268368b)\"" pod="openshift-multus/multus-sgdqg" podUID="81da4c74-d93e-4a7a-848a-c3539268368b" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.883111 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sxdvn_515fe67d-b7b7-4edb-a2b0-6f8794e8d802/ovnkube-controller/3.log" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885028 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sxdvn_515fe67d-b7b7-4edb-a2b0-6f8794e8d802/ovn-acl-logging/0.log" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885399 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-sxdvn_515fe67d-b7b7-4edb-a2b0-6f8794e8d802/ovn-controller/0.log" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885681 4970 generic.go:334] "Generic (PLEG): container finished" podID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerID="6aa5d6628357fda1bcdb5cf923e740278d39c3a4e8eed248727e8db456cf0bc6" exitCode=0 Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885702 4970 generic.go:334] "Generic (PLEG): container finished" podID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerID="bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457" exitCode=0 Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885709 4970 generic.go:334] "Generic (PLEG): container finished" podID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerID="ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb" exitCode=0 Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885718 4970 generic.go:334] "Generic (PLEG): container finished" podID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerID="71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8" exitCode=0 Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885725 4970 generic.go:334] "Generic (PLEG): container finished" podID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerID="7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b" exitCode=0 Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885732 4970 generic.go:334] "Generic (PLEG): container finished" podID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerID="ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8" exitCode=0 Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885739 4970 generic.go:334] "Generic (PLEG): container finished" podID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerID="d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47" exitCode=143 Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885746 4970 generic.go:334] "Generic (PLEG): container 
finished" podID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" containerID="eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45" exitCode=143 Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885763 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" event={"ID":"515fe67d-b7b7-4edb-a2b0-6f8794e8d802","Type":"ContainerDied","Data":"6aa5d6628357fda1bcdb5cf923e740278d39c3a4e8eed248727e8db456cf0bc6"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885786 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" event={"ID":"515fe67d-b7b7-4edb-a2b0-6f8794e8d802","Type":"ContainerDied","Data":"bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885797 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" event={"ID":"515fe67d-b7b7-4edb-a2b0-6f8794e8d802","Type":"ContainerDied","Data":"ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885805 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" event={"ID":"515fe67d-b7b7-4edb-a2b0-6f8794e8d802","Type":"ContainerDied","Data":"71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885815 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" event={"ID":"515fe67d-b7b7-4edb-a2b0-6f8794e8d802","Type":"ContainerDied","Data":"7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885825 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" event={"ID":"515fe67d-b7b7-4edb-a2b0-6f8794e8d802","Type":"ContainerDied","Data":"ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885834 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6aa5d6628357fda1bcdb5cf923e740278d39c3a4e8eed248727e8db456cf0bc6"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885845 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885851 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885857 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885862 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885867 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b"} Dec 09 
12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885872 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885879 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885883 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885888 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885894 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" event={"ID":"515fe67d-b7b7-4edb-a2b0-6f8794e8d802","Type":"ContainerDied","Data":"d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885902 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6aa5d6628357fda1bcdb5cf923e740278d39c3a4e8eed248727e8db456cf0bc6"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885908 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885913 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885918 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885922 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885928 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885933 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885938 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885942 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45"} Dec 09 
12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885947 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885953 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" event={"ID":"515fe67d-b7b7-4edb-a2b0-6f8794e8d802","Type":"ContainerDied","Data":"eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885960 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6aa5d6628357fda1bcdb5cf923e740278d39c3a4e8eed248727e8db456cf0bc6"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885966 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885971 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885976 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885980 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885985 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885990 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.885995 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.886001 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.886005 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.886012 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" event={"ID":"515fe67d-b7b7-4edb-a2b0-6f8794e8d802","Type":"ContainerDied","Data":"6d84da9642b11c6b8f28fb8bcffdd37f48498488a74c0d511a91a3f95a2b4aa3"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.886019 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"6aa5d6628357fda1bcdb5cf923e740278d39c3a4e8eed248727e8db456cf0bc6"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.886025 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.886031 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.886056 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.886065 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.886070 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.886074 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.886080 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.886085 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.886090 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58"} Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.886162 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sxdvn" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.904833 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt79l\" (UniqueName: \"kubernetes.io/projected/50b78386-c866-4133-bcee-858f47ff15c7-kube-api-access-qt79l\") pod \"ovnkube-node-vc8lk\" (UID: \"50b78386-c866-4133-bcee-858f47ff15c7\") " pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.968914 4970 scope.go:117] "RemoveContainer" containerID="6aa5d6628357fda1bcdb5cf923e740278d39c3a4e8eed248727e8db456cf0bc6" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.980241 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:16:54 crc kubenswrapper[4970]: I1209 12:16:54.989429 4970 scope.go:117] "RemoveContainer" containerID="444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.013363 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-sxdvn"] Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.015584 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-sxdvn"] Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.027426 4970 scope.go:117] "RemoveContainer" containerID="bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457" Dec 09 12:16:55 crc kubenswrapper[4970]: W1209 12:16:55.047417 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50b78386_c866_4133_bcee_858f47ff15c7.slice/crio-74ff4146f1075c722ce2e5740db8265ada07e09b0bbdadc91ea41fb85ed470a5 WatchSource:0}: Error finding container 74ff4146f1075c722ce2e5740db8265ada07e09b0bbdadc91ea41fb85ed470a5: Status 404 returned error can't find the container with id 74ff4146f1075c722ce2e5740db8265ada07e09b0bbdadc91ea41fb85ed470a5 Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.085487 4970 scope.go:117] "RemoveContainer" containerID="ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.118139 4970 scope.go:117] "RemoveContainer" containerID="71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.139413 4970 scope.go:117] "RemoveContainer" containerID="7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.163266 4970 scope.go:117] "RemoveContainer" containerID="ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.180670 4970 scope.go:117] "RemoveContainer" containerID="d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.204135 4970 scope.go:117] "RemoveContainer" containerID="eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.221412 4970 scope.go:117] "RemoveContainer" containerID="4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.242430 4970 scope.go:117] "RemoveContainer" containerID="6aa5d6628357fda1bcdb5cf923e740278d39c3a4e8eed248727e8db456cf0bc6" Dec 09 12:16:55 crc kubenswrapper[4970]: E1209 12:16:55.243498 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6aa5d6628357fda1bcdb5cf923e740278d39c3a4e8eed248727e8db456cf0bc6\": container with ID starting with 6aa5d6628357fda1bcdb5cf923e740278d39c3a4e8eed248727e8db456cf0bc6 not found: ID does not exist" containerID="6aa5d6628357fda1bcdb5cf923e740278d39c3a4e8eed248727e8db456cf0bc6" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.243536 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6aa5d6628357fda1bcdb5cf923e740278d39c3a4e8eed248727e8db456cf0bc6"} err="failed to get container status \"6aa5d6628357fda1bcdb5cf923e740278d39c3a4e8eed248727e8db456cf0bc6\": rpc error: 
code = NotFound desc = could not find container \"6aa5d6628357fda1bcdb5cf923e740278d39c3a4e8eed248727e8db456cf0bc6\": container with ID starting with 6aa5d6628357fda1bcdb5cf923e740278d39c3a4e8eed248727e8db456cf0bc6 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.243566 4970 scope.go:117] "RemoveContainer" containerID="444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b" Dec 09 12:16:55 crc kubenswrapper[4970]: E1209 12:16:55.243940 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b\": container with ID starting with 444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b not found: ID does not exist" containerID="444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.243974 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b"} err="failed to get container status \"444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b\": rpc error: code = NotFound desc = could not find container \"444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b\": container with ID starting with 444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.243993 4970 scope.go:117] "RemoveContainer" containerID="bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457" Dec 09 12:16:55 crc kubenswrapper[4970]: E1209 12:16:55.244227 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457\": container with ID starting with bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457 not found: ID does not exist" containerID="bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.244273 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457"} err="failed to get container status \"bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457\": rpc error: code = NotFound desc = could not find container \"bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457\": container with ID starting with bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.244292 4970 scope.go:117] "RemoveContainer" containerID="ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb" Dec 09 12:16:55 crc kubenswrapper[4970]: E1209 12:16:55.244487 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb\": container with ID starting with ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb not found: ID does not exist" containerID="ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.244520 4970 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb"} err="failed to get container status \"ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb\": rpc error: code = NotFound desc = could not find container \"ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb\": container with ID starting with ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.244536 4970 scope.go:117] "RemoveContainer" containerID="71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8" Dec 09 12:16:55 crc kubenswrapper[4970]: E1209 12:16:55.244745 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8\": container with ID starting with 71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8 not found: ID does not exist" containerID="71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.244774 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8"} err="failed to get container status \"71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8\": rpc error: code = NotFound desc = could not find container \"71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8\": container with ID starting with 71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.244791 4970 scope.go:117] "RemoveContainer" containerID="7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b" Dec 09 12:16:55 crc kubenswrapper[4970]: E1209 12:16:55.245007 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b\": container with ID starting with 7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b not found: ID does not exist" containerID="7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.245033 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b"} err="failed to get container status \"7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b\": rpc error: code = NotFound desc = could not find container \"7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b\": container with ID starting with 7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.245053 4970 scope.go:117] "RemoveContainer" containerID="ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8" Dec 09 12:16:55 crc kubenswrapper[4970]: E1209 12:16:55.245385 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8\": container with ID starting with ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8 not found: ID does not exist" 
containerID="ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.245414 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8"} err="failed to get container status \"ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8\": rpc error: code = NotFound desc = could not find container \"ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8\": container with ID starting with ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.245431 4970 scope.go:117] "RemoveContainer" containerID="d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47" Dec 09 12:16:55 crc kubenswrapper[4970]: E1209 12:16:55.245658 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47\": container with ID starting with d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47 not found: ID does not exist" containerID="d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.245685 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47"} err="failed to get container status \"d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47\": rpc error: code = NotFound desc = could not find container \"d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47\": container with ID starting with d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.245702 4970 scope.go:117] "RemoveContainer" containerID="eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45" Dec 09 12:16:55 crc kubenswrapper[4970]: E1209 12:16:55.245919 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45\": container with ID starting with eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45 not found: ID does not exist" containerID="eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.245958 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45"} err="failed to get container status \"eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45\": rpc error: code = NotFound desc = could not find container \"eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45\": container with ID starting with eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.245980 4970 scope.go:117] "RemoveContainer" containerID="4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58" Dec 09 12:16:55 crc kubenswrapper[4970]: E1209 12:16:55.246179 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\": container with ID starting with 4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58 not found: ID does not exist" containerID="4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.246207 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58"} err="failed to get container status \"4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\": rpc error: code = NotFound desc = could not find container \"4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\": container with ID starting with 4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.246225 4970 scope.go:117] "RemoveContainer" containerID="6aa5d6628357fda1bcdb5cf923e740278d39c3a4e8eed248727e8db456cf0bc6" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.246511 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6aa5d6628357fda1bcdb5cf923e740278d39c3a4e8eed248727e8db456cf0bc6"} err="failed to get container status \"6aa5d6628357fda1bcdb5cf923e740278d39c3a4e8eed248727e8db456cf0bc6\": rpc error: code = NotFound desc = could not find container \"6aa5d6628357fda1bcdb5cf923e740278d39c3a4e8eed248727e8db456cf0bc6\": container with ID starting with 6aa5d6628357fda1bcdb5cf923e740278d39c3a4e8eed248727e8db456cf0bc6 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.246535 4970 scope.go:117] "RemoveContainer" containerID="444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.246721 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b"} err="failed to get container status \"444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b\": rpc error: code = NotFound desc = could not find container \"444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b\": container with ID starting with 444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.246746 4970 scope.go:117] "RemoveContainer" containerID="bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.246965 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457"} err="failed to get container status \"bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457\": rpc error: code = NotFound desc = could not find container \"bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457\": container with ID starting with bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.246990 4970 scope.go:117] "RemoveContainer" containerID="ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.247184 4970 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb"} err="failed to get container status \"ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb\": rpc error: code = NotFound desc = could not find container \"ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb\": container with ID starting with ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.247207 4970 scope.go:117] "RemoveContainer" containerID="71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.247449 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8"} err="failed to get container status \"71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8\": rpc error: code = NotFound desc = could not find container \"71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8\": container with ID starting with 71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.247474 4970 scope.go:117] "RemoveContainer" containerID="7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.247848 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b"} err="failed to get container status \"7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b\": rpc error: code = NotFound desc = could not find container \"7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b\": container with ID starting with 7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.247869 4970 scope.go:117] "RemoveContainer" containerID="ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.248285 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8"} err="failed to get container status \"ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8\": rpc error: code = NotFound desc = could not find container \"ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8\": container with ID starting with ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.248309 4970 scope.go:117] "RemoveContainer" containerID="d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.248551 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47"} err="failed to get container status \"d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47\": rpc error: code = NotFound desc = could not find container \"d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47\": container with ID starting with d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47 not found: ID does not exist" Dec 
09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.248575 4970 scope.go:117] "RemoveContainer" containerID="eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.248927 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45"} err="failed to get container status \"eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45\": rpc error: code = NotFound desc = could not find container \"eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45\": container with ID starting with eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.248951 4970 scope.go:117] "RemoveContainer" containerID="4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.249384 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58"} err="failed to get container status \"4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\": rpc error: code = NotFound desc = could not find container \"4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\": container with ID starting with 4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.249445 4970 scope.go:117] "RemoveContainer" containerID="6aa5d6628357fda1bcdb5cf923e740278d39c3a4e8eed248727e8db456cf0bc6" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.249762 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6aa5d6628357fda1bcdb5cf923e740278d39c3a4e8eed248727e8db456cf0bc6"} err="failed to get container status \"6aa5d6628357fda1bcdb5cf923e740278d39c3a4e8eed248727e8db456cf0bc6\": rpc error: code = NotFound desc = could not find container \"6aa5d6628357fda1bcdb5cf923e740278d39c3a4e8eed248727e8db456cf0bc6\": container with ID starting with 6aa5d6628357fda1bcdb5cf923e740278d39c3a4e8eed248727e8db456cf0bc6 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.249790 4970 scope.go:117] "RemoveContainer" containerID="444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.250081 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b"} err="failed to get container status \"444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b\": rpc error: code = NotFound desc = could not find container \"444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b\": container with ID starting with 444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.250110 4970 scope.go:117] "RemoveContainer" containerID="bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.250438 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457"} err="failed to get container status 
\"bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457\": rpc error: code = NotFound desc = could not find container \"bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457\": container with ID starting with bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.250484 4970 scope.go:117] "RemoveContainer" containerID="ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.250803 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb"} err="failed to get container status \"ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb\": rpc error: code = NotFound desc = could not find container \"ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb\": container with ID starting with ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.250830 4970 scope.go:117] "RemoveContainer" containerID="71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.251124 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8"} err="failed to get container status \"71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8\": rpc error: code = NotFound desc = could not find container \"71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8\": container with ID starting with 71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.251175 4970 scope.go:117] "RemoveContainer" containerID="7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.251459 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b"} err="failed to get container status \"7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b\": rpc error: code = NotFound desc = could not find container \"7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b\": container with ID starting with 7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.251487 4970 scope.go:117] "RemoveContainer" containerID="ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.251714 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8"} err="failed to get container status \"ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8\": rpc error: code = NotFound desc = could not find container \"ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8\": container with ID starting with ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.251735 4970 scope.go:117] "RemoveContainer" 
containerID="d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.252021 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47"} err="failed to get container status \"d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47\": rpc error: code = NotFound desc = could not find container \"d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47\": container with ID starting with d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.252043 4970 scope.go:117] "RemoveContainer" containerID="eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.252287 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45"} err="failed to get container status \"eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45\": rpc error: code = NotFound desc = could not find container \"eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45\": container with ID starting with eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.252324 4970 scope.go:117] "RemoveContainer" containerID="4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.252536 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58"} err="failed to get container status \"4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\": rpc error: code = NotFound desc = could not find container \"4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\": container with ID starting with 4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.252608 4970 scope.go:117] "RemoveContainer" containerID="6aa5d6628357fda1bcdb5cf923e740278d39c3a4e8eed248727e8db456cf0bc6" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.252834 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6aa5d6628357fda1bcdb5cf923e740278d39c3a4e8eed248727e8db456cf0bc6"} err="failed to get container status \"6aa5d6628357fda1bcdb5cf923e740278d39c3a4e8eed248727e8db456cf0bc6\": rpc error: code = NotFound desc = could not find container \"6aa5d6628357fda1bcdb5cf923e740278d39c3a4e8eed248727e8db456cf0bc6\": container with ID starting with 6aa5d6628357fda1bcdb5cf923e740278d39c3a4e8eed248727e8db456cf0bc6 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.252858 4970 scope.go:117] "RemoveContainer" containerID="444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.253103 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b"} err="failed to get container status \"444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b\": rpc error: code = NotFound desc = could not find 
container \"444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b\": container with ID starting with 444b5f77be105a9b34a1f339e750641ecf51dc33674ae2f1f57b3d99301d720b not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.253137 4970 scope.go:117] "RemoveContainer" containerID="bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.253404 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457"} err="failed to get container status \"bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457\": rpc error: code = NotFound desc = could not find container \"bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457\": container with ID starting with bd173f6b2d6c5c818092db306f2c730cad177f1d5d3f7b57b187e0d96edc3457 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.253438 4970 scope.go:117] "RemoveContainer" containerID="ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.256476 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb"} err="failed to get container status \"ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb\": rpc error: code = NotFound desc = could not find container \"ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb\": container with ID starting with ee6d50dfda22477197071561a4703a001050c59f824dcd2a908e7b918eaef8bb not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.256525 4970 scope.go:117] "RemoveContainer" containerID="71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.257432 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8"} err="failed to get container status \"71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8\": rpc error: code = NotFound desc = could not find container \"71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8\": container with ID starting with 71805178c16d6d8b8b261b237ee6a6790a7fd974f67a7000ed77adf924d5f9e8 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.257476 4970 scope.go:117] "RemoveContainer" containerID="7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.257759 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b"} err="failed to get container status \"7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b\": rpc error: code = NotFound desc = could not find container \"7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b\": container with ID starting with 7025c5cbb905edc6daf306c4e617b0cb59658e8239bd66c496593ff44296450b not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.257783 4970 scope.go:117] "RemoveContainer" containerID="ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.258154 4970 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8"} err="failed to get container status \"ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8\": rpc error: code = NotFound desc = could not find container \"ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8\": container with ID starting with ea5012f691fe2642c13609c89675a96bb8a6abff3722bc856a589c6824e892c8 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.258182 4970 scope.go:117] "RemoveContainer" containerID="d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.258539 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47"} err="failed to get container status \"d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47\": rpc error: code = NotFound desc = could not find container \"d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47\": container with ID starting with d7c17bd314906528a179b27c77b2a9f8e7a7dcee766cba58e27fc44e6c302b47 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.258565 4970 scope.go:117] "RemoveContainer" containerID="eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.258787 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45"} err="failed to get container status \"eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45\": rpc error: code = NotFound desc = could not find container \"eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45\": container with ID starting with eb0064e6a7d7c4e76e9f10b6f88f5f4e173dfe08fe8a4e7cfa6ec64d32d56b45 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.258813 4970 scope.go:117] "RemoveContainer" containerID="4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.259039 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58"} err="failed to get container status \"4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\": rpc error: code = NotFound desc = could not find container \"4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58\": container with ID starting with 4e7ffc4c6133a66fe76d7be11d510b54b4d558647fe9dff52afc782e9a1b4a58 not found: ID does not exist" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.819176 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="515fe67d-b7b7-4edb-a2b0-6f8794e8d802" path="/var/lib/kubelet/pods/515fe67d-b7b7-4edb-a2b0-6f8794e8d802/volumes" Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.905454 4970 generic.go:334] "Generic (PLEG): container finished" podID="50b78386-c866-4133-bcee-858f47ff15c7" containerID="649cda9cac067c65f9f99608858508f682caa60160e848265b3677bfb2c6b5a5" exitCode=0 Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.905564 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" 
event={"ID":"50b78386-c866-4133-bcee-858f47ff15c7","Type":"ContainerDied","Data":"649cda9cac067c65f9f99608858508f682caa60160e848265b3677bfb2c6b5a5"} Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.905598 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" event={"ID":"50b78386-c866-4133-bcee-858f47ff15c7","Type":"ContainerStarted","Data":"74ff4146f1075c722ce2e5740db8265ada07e09b0bbdadc91ea41fb85ed470a5"} Dec 09 12:16:55 crc kubenswrapper[4970]: I1209 12:16:55.922882 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sgdqg_81da4c74-d93e-4a7a-848a-c3539268368b/kube-multus/2.log" Dec 09 12:16:56 crc kubenswrapper[4970]: I1209 12:16:56.936870 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" event={"ID":"50b78386-c866-4133-bcee-858f47ff15c7","Type":"ContainerStarted","Data":"d5afe7dd708b3f0117788315502a22addc9c516c6b0e2138ec3ee5ee30910f08"} Dec 09 12:16:56 crc kubenswrapper[4970]: I1209 12:16:56.937364 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" event={"ID":"50b78386-c866-4133-bcee-858f47ff15c7","Type":"ContainerStarted","Data":"54e7ef2c68fc6588d292080d40126501f61f743b7e18dae9a3868c425f83db8c"} Dec 09 12:16:56 crc kubenswrapper[4970]: I1209 12:16:56.937374 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" event={"ID":"50b78386-c866-4133-bcee-858f47ff15c7","Type":"ContainerStarted","Data":"f52ce12d56bd33f529fe94f87e9e7420fda69b58e290cbdaaf1993110c4bbcfa"} Dec 09 12:16:56 crc kubenswrapper[4970]: I1209 12:16:56.937382 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" event={"ID":"50b78386-c866-4133-bcee-858f47ff15c7","Type":"ContainerStarted","Data":"2902896ee4b5f9435efd70d1a0d76517421c1cd0e92646593ab728cdf639fe24"} Dec 09 12:16:56 crc kubenswrapper[4970]: I1209 12:16:56.937390 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" event={"ID":"50b78386-c866-4133-bcee-858f47ff15c7","Type":"ContainerStarted","Data":"ab24b9302fd6201f29d027daae93725624d8c3870b127310f0f5f5fdc5464316"} Dec 09 12:16:56 crc kubenswrapper[4970]: I1209 12:16:56.937398 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" event={"ID":"50b78386-c866-4133-bcee-858f47ff15c7","Type":"ContainerStarted","Data":"f7c995bed5481d0d0e0fda546a1ccc49fdcc7f4e9cc145f8406404642f6d7ea0"} Dec 09 12:16:59 crc kubenswrapper[4970]: I1209 12:16:59.724963 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-p56cg"] Dec 09 12:16:59 crc kubenswrapper[4970]: I1209 12:16:59.725899 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p56cg" Dec 09 12:16:59 crc kubenswrapper[4970]: I1209 12:16:59.730329 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Dec 09 12:16:59 crc kubenswrapper[4970]: I1209 12:16:59.730532 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-24gd5" Dec 09 12:16:59 crc kubenswrapper[4970]: I1209 12:16:59.732229 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Dec 09 12:16:59 crc kubenswrapper[4970]: I1209 12:16:59.743144 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rkpn\" (UniqueName: \"kubernetes.io/projected/0ccba78f-ff44-4497-85b4-9c66a9289dc8-kube-api-access-7rkpn\") pod \"obo-prometheus-operator-668cf9dfbb-p56cg\" (UID: \"0ccba78f-ff44-4497-85b4-9c66a9289dc8\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p56cg" Dec 09 12:16:59 crc kubenswrapper[4970]: I1209 12:16:59.844210 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rkpn\" (UniqueName: \"kubernetes.io/projected/0ccba78f-ff44-4497-85b4-9c66a9289dc8-kube-api-access-7rkpn\") pod \"obo-prometheus-operator-668cf9dfbb-p56cg\" (UID: \"0ccba78f-ff44-4497-85b4-9c66a9289dc8\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p56cg" Dec 09 12:16:59 crc kubenswrapper[4970]: I1209 12:16:59.864859 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rkpn\" (UniqueName: \"kubernetes.io/projected/0ccba78f-ff44-4497-85b4-9c66a9289dc8-kube-api-access-7rkpn\") pod \"obo-prometheus-operator-668cf9dfbb-p56cg\" (UID: \"0ccba78f-ff44-4497-85b4-9c66a9289dc8\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p56cg" Dec 09 12:16:59 crc kubenswrapper[4970]: I1209 12:16:59.883069 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949"] Dec 09 12:16:59 crc kubenswrapper[4970]: I1209 12:16:59.883930 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949" Dec 09 12:16:59 crc kubenswrapper[4970]: I1209 12:16:59.894506 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-rkt7d" Dec 09 12:16:59 crc kubenswrapper[4970]: I1209 12:16:59.894917 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x"] Dec 09 12:16:59 crc kubenswrapper[4970]: I1209 12:16:59.895671 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x" Dec 09 12:16:59 crc kubenswrapper[4970]: I1209 12:16:59.896655 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Dec 09 12:16:59 crc kubenswrapper[4970]: I1209 12:16:59.945282 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d19808d9-8361-4275-a8ec-9ca8e6d7e806-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x\" (UID: \"d19808d9-8361-4275-a8ec-9ca8e6d7e806\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x" Dec 09 12:16:59 crc kubenswrapper[4970]: I1209 12:16:59.945348 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/afd71d3b-2f43-4061-951e-5fe3f5480a0d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949\" (UID: \"afd71d3b-2f43-4061-951e-5fe3f5480a0d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949" Dec 09 12:16:59 crc kubenswrapper[4970]: I1209 12:16:59.945371 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d19808d9-8361-4275-a8ec-9ca8e6d7e806-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x\" (UID: \"d19808d9-8361-4275-a8ec-9ca8e6d7e806\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x" Dec 09 12:16:59 crc kubenswrapper[4970]: I1209 12:16:59.945401 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/afd71d3b-2f43-4061-951e-5fe3f5480a0d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949\" (UID: \"afd71d3b-2f43-4061-951e-5fe3f5480a0d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949" Dec 09 12:16:59 crc kubenswrapper[4970]: I1209 12:16:59.962483 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" event={"ID":"50b78386-c866-4133-bcee-858f47ff15c7","Type":"ContainerStarted","Data":"2588eb359a100907cde2a9d584a2ad830eace9c2a5089a379fa97dc4e2d5c3ef"} Dec 09 12:17:00 crc kubenswrapper[4970]: I1209 12:17:00.040479 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p56cg" Dec 09 12:17:00 crc kubenswrapper[4970]: I1209 12:17:00.046291 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d19808d9-8361-4275-a8ec-9ca8e6d7e806-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x\" (UID: \"d19808d9-8361-4275-a8ec-9ca8e6d7e806\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x" Dec 09 12:17:00 crc kubenswrapper[4970]: I1209 12:17:00.046376 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/afd71d3b-2f43-4061-951e-5fe3f5480a0d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949\" (UID: \"afd71d3b-2f43-4061-951e-5fe3f5480a0d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949" Dec 09 12:17:00 crc kubenswrapper[4970]: I1209 12:17:00.046411 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d19808d9-8361-4275-a8ec-9ca8e6d7e806-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x\" (UID: \"d19808d9-8361-4275-a8ec-9ca8e6d7e806\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x" Dec 09 12:17:00 crc kubenswrapper[4970]: I1209 12:17:00.046458 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/afd71d3b-2f43-4061-951e-5fe3f5480a0d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949\" (UID: \"afd71d3b-2f43-4061-951e-5fe3f5480a0d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949" Dec 09 12:17:00 crc kubenswrapper[4970]: I1209 12:17:00.047053 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-xmr67"] Dec 09 12:17:00 crc kubenswrapper[4970]: I1209 12:17:00.048006 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-xmr67" Dec 09 12:17:00 crc kubenswrapper[4970]: I1209 12:17:00.049823 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d19808d9-8361-4275-a8ec-9ca8e6d7e806-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x\" (UID: \"d19808d9-8361-4275-a8ec-9ca8e6d7e806\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x" Dec 09 12:17:00 crc kubenswrapper[4970]: I1209 12:17:00.052623 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d19808d9-8361-4275-a8ec-9ca8e6d7e806-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x\" (UID: \"d19808d9-8361-4275-a8ec-9ca8e6d7e806\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x" Dec 09 12:17:00 crc kubenswrapper[4970]: I1209 12:17:00.053728 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Dec 09 12:17:00 crc kubenswrapper[4970]: I1209 12:17:00.053799 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-2p6d9" Dec 09 12:17:00 crc kubenswrapper[4970]: I1209 12:17:00.054061 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/afd71d3b-2f43-4061-951e-5fe3f5480a0d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949\" (UID: \"afd71d3b-2f43-4061-951e-5fe3f5480a0d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949" Dec 09 12:17:00 crc kubenswrapper[4970]: I1209 12:17:00.054159 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/afd71d3b-2f43-4061-951e-5fe3f5480a0d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949\" (UID: \"afd71d3b-2f43-4061-951e-5fe3f5480a0d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949" Dec 09 12:17:00 crc kubenswrapper[4970]: E1209 12:17:00.087584 4970 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-p56cg_openshift-operators_0ccba78f-ff44-4497-85b4-9c66a9289dc8_0(21503852fed7acbc6721960951b51bd2704ccbd61d487c98389d90b015586b19): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 09 12:17:00 crc kubenswrapper[4970]: E1209 12:17:00.087666 4970 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-p56cg_openshift-operators_0ccba78f-ff44-4497-85b4-9c66a9289dc8_0(21503852fed7acbc6721960951b51bd2704ccbd61d487c98389d90b015586b19): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p56cg" Dec 09 12:17:00 crc kubenswrapper[4970]: E1209 12:17:00.087688 4970 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-p56cg_openshift-operators_0ccba78f-ff44-4497-85b4-9c66a9289dc8_0(21503852fed7acbc6721960951b51bd2704ccbd61d487c98389d90b015586b19): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p56cg" Dec 09 12:17:00 crc kubenswrapper[4970]: E1209 12:17:00.087732 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-p56cg_openshift-operators(0ccba78f-ff44-4497-85b4-9c66a9289dc8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-p56cg_openshift-operators(0ccba78f-ff44-4497-85b4-9c66a9289dc8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-p56cg_openshift-operators_0ccba78f-ff44-4497-85b4-9c66a9289dc8_0(21503852fed7acbc6721960951b51bd2704ccbd61d487c98389d90b015586b19): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p56cg" podUID="0ccba78f-ff44-4497-85b4-9c66a9289dc8" Dec 09 12:17:00 crc kubenswrapper[4970]: I1209 12:17:00.148100 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7eb47195-0ebb-47e8-8685-0523cff07cc4-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-xmr67\" (UID: \"7eb47195-0ebb-47e8-8685-0523cff07cc4\") " pod="openshift-operators/observability-operator-d8bb48f5d-xmr67" Dec 09 12:17:00 crc kubenswrapper[4970]: I1209 12:17:00.148173 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhmx6\" (UniqueName: \"kubernetes.io/projected/7eb47195-0ebb-47e8-8685-0523cff07cc4-kube-api-access-lhmx6\") pod \"observability-operator-d8bb48f5d-xmr67\" (UID: \"7eb47195-0ebb-47e8-8685-0523cff07cc4\") " pod="openshift-operators/observability-operator-d8bb48f5d-xmr67" Dec 09 12:17:00 crc kubenswrapper[4970]: I1209 12:17:00.154124 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-f6dfs"] Dec 09 12:17:00 crc kubenswrapper[4970]: I1209 12:17:00.155195 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-f6dfs" Dec 09 12:17:00 crc kubenswrapper[4970]: I1209 12:17:00.158785 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-x2zlg" Dec 09 12:17:00 crc kubenswrapper[4970]: I1209 12:17:00.198945 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949" Dec 09 12:17:00 crc kubenswrapper[4970]: I1209 12:17:00.213275 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x" Dec 09 12:17:00 crc kubenswrapper[4970]: E1209 12:17:00.229695 4970 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949_openshift-operators_afd71d3b-2f43-4061-951e-5fe3f5480a0d_0(8a604e814d758d14e1339b792bb01d9bd5f7b933689df7112abc512b7e9e925e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 09 12:17:00 crc kubenswrapper[4970]: E1209 12:17:00.229906 4970 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949_openshift-operators_afd71d3b-2f43-4061-951e-5fe3f5480a0d_0(8a604e814d758d14e1339b792bb01d9bd5f7b933689df7112abc512b7e9e925e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949" Dec 09 12:17:00 crc kubenswrapper[4970]: E1209 12:17:00.230015 4970 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949_openshift-operators_afd71d3b-2f43-4061-951e-5fe3f5480a0d_0(8a604e814d758d14e1339b792bb01d9bd5f7b933689df7112abc512b7e9e925e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949" Dec 09 12:17:00 crc kubenswrapper[4970]: E1209 12:17:00.230295 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949_openshift-operators(afd71d3b-2f43-4061-951e-5fe3f5480a0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949_openshift-operators(afd71d3b-2f43-4061-951e-5fe3f5480a0d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949_openshift-operators_afd71d3b-2f43-4061-951e-5fe3f5480a0d_0(8a604e814d758d14e1339b792bb01d9bd5f7b933689df7112abc512b7e9e925e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949" podUID="afd71d3b-2f43-4061-951e-5fe3f5480a0d" Dec 09 12:17:00 crc kubenswrapper[4970]: E1209 12:17:00.242774 4970 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x_openshift-operators_d19808d9-8361-4275-a8ec-9ca8e6d7e806_0(2e1cd2360bb7d65a9ecb03a18e55753cab74870728eeb5c10fd1e6ccca54ce37): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Dec 09 12:17:00 crc kubenswrapper[4970]: E1209 12:17:00.242910 4970 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x_openshift-operators_d19808d9-8361-4275-a8ec-9ca8e6d7e806_0(2e1cd2360bb7d65a9ecb03a18e55753cab74870728eeb5c10fd1e6ccca54ce37): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x" Dec 09 12:17:00 crc kubenswrapper[4970]: E1209 12:17:00.242949 4970 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x_openshift-operators_d19808d9-8361-4275-a8ec-9ca8e6d7e806_0(2e1cd2360bb7d65a9ecb03a18e55753cab74870728eeb5c10fd1e6ccca54ce37): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x" Dec 09 12:17:00 crc kubenswrapper[4970]: E1209 12:17:00.243007 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x_openshift-operators(d19808d9-8361-4275-a8ec-9ca8e6d7e806)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x_openshift-operators(d19808d9-8361-4275-a8ec-9ca8e6d7e806)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x_openshift-operators_d19808d9-8361-4275-a8ec-9ca8e6d7e806_0(2e1cd2360bb7d65a9ecb03a18e55753cab74870728eeb5c10fd1e6ccca54ce37): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x" podUID="d19808d9-8361-4275-a8ec-9ca8e6d7e806" Dec 09 12:17:00 crc kubenswrapper[4970]: I1209 12:17:00.249142 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7eb47195-0ebb-47e8-8685-0523cff07cc4-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-xmr67\" (UID: \"7eb47195-0ebb-47e8-8685-0523cff07cc4\") " pod="openshift-operators/observability-operator-d8bb48f5d-xmr67" Dec 09 12:17:00 crc kubenswrapper[4970]: I1209 12:17:00.249217 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/203ce897-84d1-41da-8b0e-7b6ef66698a6-openshift-service-ca\") pod \"perses-operator-5446b9c989-f6dfs\" (UID: \"203ce897-84d1-41da-8b0e-7b6ef66698a6\") " pod="openshift-operators/perses-operator-5446b9c989-f6dfs" Dec 09 12:17:00 crc kubenswrapper[4970]: I1209 12:17:00.249236 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd2n9\" (UniqueName: \"kubernetes.io/projected/203ce897-84d1-41da-8b0e-7b6ef66698a6-kube-api-access-wd2n9\") pod \"perses-operator-5446b9c989-f6dfs\" (UID: \"203ce897-84d1-41da-8b0e-7b6ef66698a6\") " pod="openshift-operators/perses-operator-5446b9c989-f6dfs" Dec 09 12:17:00 crc kubenswrapper[4970]: I1209 12:17:00.249281 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhmx6\" (UniqueName: \"kubernetes.io/projected/7eb47195-0ebb-47e8-8685-0523cff07cc4-kube-api-access-lhmx6\") pod \"observability-operator-d8bb48f5d-xmr67\" (UID: \"7eb47195-0ebb-47e8-8685-0523cff07cc4\") " pod="openshift-operators/observability-operator-d8bb48f5d-xmr67" Dec 09 12:17:00 crc kubenswrapper[4970]: I1209 12:17:00.253527 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/7eb47195-0ebb-47e8-8685-0523cff07cc4-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-xmr67\" (UID: \"7eb47195-0ebb-47e8-8685-0523cff07cc4\") " pod="openshift-operators/observability-operator-d8bb48f5d-xmr67" Dec 09 12:17:00 crc kubenswrapper[4970]: I1209 12:17:00.269986 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhmx6\" (UniqueName: \"kubernetes.io/projected/7eb47195-0ebb-47e8-8685-0523cff07cc4-kube-api-access-lhmx6\") pod \"observability-operator-d8bb48f5d-xmr67\" (UID: \"7eb47195-0ebb-47e8-8685-0523cff07cc4\") " pod="openshift-operators/observability-operator-d8bb48f5d-xmr67" Dec 09 12:17:00 crc kubenswrapper[4970]: I1209 12:17:00.350100 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/203ce897-84d1-41da-8b0e-7b6ef66698a6-openshift-service-ca\") pod \"perses-operator-5446b9c989-f6dfs\" (UID: \"203ce897-84d1-41da-8b0e-7b6ef66698a6\") " pod="openshift-operators/perses-operator-5446b9c989-f6dfs" Dec 09 12:17:00 crc kubenswrapper[4970]: I1209 12:17:00.350148 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd2n9\" (UniqueName: \"kubernetes.io/projected/203ce897-84d1-41da-8b0e-7b6ef66698a6-kube-api-access-wd2n9\") pod \"perses-operator-5446b9c989-f6dfs\" (UID: 
\"203ce897-84d1-41da-8b0e-7b6ef66698a6\") " pod="openshift-operators/perses-operator-5446b9c989-f6dfs" Dec 09 12:17:00 crc kubenswrapper[4970]: I1209 12:17:00.351274 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/203ce897-84d1-41da-8b0e-7b6ef66698a6-openshift-service-ca\") pod \"perses-operator-5446b9c989-f6dfs\" (UID: \"203ce897-84d1-41da-8b0e-7b6ef66698a6\") " pod="openshift-operators/perses-operator-5446b9c989-f6dfs" Dec 09 12:17:00 crc kubenswrapper[4970]: I1209 12:17:00.368374 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd2n9\" (UniqueName: \"kubernetes.io/projected/203ce897-84d1-41da-8b0e-7b6ef66698a6-kube-api-access-wd2n9\") pod \"perses-operator-5446b9c989-f6dfs\" (UID: \"203ce897-84d1-41da-8b0e-7b6ef66698a6\") " pod="openshift-operators/perses-operator-5446b9c989-f6dfs" Dec 09 12:17:00 crc kubenswrapper[4970]: I1209 12:17:00.421301 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-xmr67" Dec 09 12:17:00 crc kubenswrapper[4970]: E1209 12:17:00.446032 4970 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-xmr67_openshift-operators_7eb47195-0ebb-47e8-8685-0523cff07cc4_0(0d7e0814c346c4ea43c23af42783b647195383ce9227190d1351a0335a54ea6b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 09 12:17:00 crc kubenswrapper[4970]: E1209 12:17:00.446096 4970 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-xmr67_openshift-operators_7eb47195-0ebb-47e8-8685-0523cff07cc4_0(0d7e0814c346c4ea43c23af42783b647195383ce9227190d1351a0335a54ea6b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-xmr67" Dec 09 12:17:00 crc kubenswrapper[4970]: E1209 12:17:00.446115 4970 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-xmr67_openshift-operators_7eb47195-0ebb-47e8-8685-0523cff07cc4_0(0d7e0814c346c4ea43c23af42783b647195383ce9227190d1351a0335a54ea6b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-xmr67" Dec 09 12:17:00 crc kubenswrapper[4970]: E1209 12:17:00.446160 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-xmr67_openshift-operators(7eb47195-0ebb-47e8-8685-0523cff07cc4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-xmr67_openshift-operators(7eb47195-0ebb-47e8-8685-0523cff07cc4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-xmr67_openshift-operators_7eb47195-0ebb-47e8-8685-0523cff07cc4_0(0d7e0814c346c4ea43c23af42783b647195383ce9227190d1351a0335a54ea6b): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-xmr67" podUID="7eb47195-0ebb-47e8-8685-0523cff07cc4" Dec 09 12:17:00 crc kubenswrapper[4970]: I1209 12:17:00.472437 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-f6dfs" Dec 09 12:17:00 crc kubenswrapper[4970]: E1209 12:17:00.495087 4970 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-f6dfs_openshift-operators_203ce897-84d1-41da-8b0e-7b6ef66698a6_0(68ba44d1cd851883168a16427ee15ff0931a6ae7e0b0417cf7a221114c4734cd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 09 12:17:00 crc kubenswrapper[4970]: E1209 12:17:00.495167 4970 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-f6dfs_openshift-operators_203ce897-84d1-41da-8b0e-7b6ef66698a6_0(68ba44d1cd851883168a16427ee15ff0931a6ae7e0b0417cf7a221114c4734cd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-f6dfs" Dec 09 12:17:00 crc kubenswrapper[4970]: E1209 12:17:00.495191 4970 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-f6dfs_openshift-operators_203ce897-84d1-41da-8b0e-7b6ef66698a6_0(68ba44d1cd851883168a16427ee15ff0931a6ae7e0b0417cf7a221114c4734cd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-f6dfs" Dec 09 12:17:00 crc kubenswrapper[4970]: E1209 12:17:00.495234 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-f6dfs_openshift-operators(203ce897-84d1-41da-8b0e-7b6ef66698a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-f6dfs_openshift-operators(203ce897-84d1-41da-8b0e-7b6ef66698a6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-f6dfs_openshift-operators_203ce897-84d1-41da-8b0e-7b6ef66698a6_0(68ba44d1cd851883168a16427ee15ff0931a6ae7e0b0417cf7a221114c4734cd): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-f6dfs" podUID="203ce897-84d1-41da-8b0e-7b6ef66698a6" Dec 09 12:17:01 crc kubenswrapper[4970]: I1209 12:17:01.978590 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" event={"ID":"50b78386-c866-4133-bcee-858f47ff15c7","Type":"ContainerStarted","Data":"29cb2d3aab33f512971a1c0c09a1e06e738563e638765ceff266071358d56584"} Dec 09 12:17:01 crc kubenswrapper[4970]: I1209 12:17:01.979037 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:17:01 crc kubenswrapper[4970]: I1209 12:17:01.979129 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:17:01 crc kubenswrapper[4970]: I1209 12:17:01.979209 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:17:02 crc kubenswrapper[4970]: I1209 12:17:02.010686 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:17:02 crc kubenswrapper[4970]: I1209 12:17:02.010759 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:17:02 crc kubenswrapper[4970]: I1209 12:17:02.016811 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" podStartSLOduration=8.016792367 podStartE2EDuration="8.016792367s" podCreationTimestamp="2025-12-09 12:16:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:17:02.011992733 +0000 UTC m=+634.572473794" watchObservedRunningTime="2025-12-09 12:17:02.016792367 +0000 UTC m=+634.577273418" Dec 09 12:17:02 crc kubenswrapper[4970]: I1209 12:17:02.195375 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x"] Dec 09 12:17:02 crc kubenswrapper[4970]: I1209 12:17:02.195476 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x" Dec 09 12:17:02 crc kubenswrapper[4970]: I1209 12:17:02.195883 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x" Dec 09 12:17:02 crc kubenswrapper[4970]: I1209 12:17:02.211227 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949"] Dec 09 12:17:02 crc kubenswrapper[4970]: I1209 12:17:02.211592 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949" Dec 09 12:17:02 crc kubenswrapper[4970]: I1209 12:17:02.212083 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949" Dec 09 12:17:02 crc kubenswrapper[4970]: I1209 12:17:02.218496 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-p56cg"] Dec 09 12:17:02 crc kubenswrapper[4970]: I1209 12:17:02.218575 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p56cg" Dec 09 12:17:02 crc kubenswrapper[4970]: I1209 12:17:02.218975 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p56cg" Dec 09 12:17:02 crc kubenswrapper[4970]: I1209 12:17:02.222650 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-f6dfs"] Dec 09 12:17:02 crc kubenswrapper[4970]: I1209 12:17:02.222710 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-f6dfs" Dec 09 12:17:02 crc kubenswrapper[4970]: I1209 12:17:02.222954 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-f6dfs" Dec 09 12:17:02 crc kubenswrapper[4970]: I1209 12:17:02.276783 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-xmr67"] Dec 09 12:17:02 crc kubenswrapper[4970]: I1209 12:17:02.276874 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-xmr67" Dec 09 12:17:02 crc kubenswrapper[4970]: I1209 12:17:02.277354 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-xmr67" Dec 09 12:17:02 crc kubenswrapper[4970]: E1209 12:17:02.298417 4970 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x_openshift-operators_d19808d9-8361-4275-a8ec-9ca8e6d7e806_0(3ff2a00e5f2277fbdcc701bfcfc3bb2a2641abd3034e5f7dd0ee612e4df594ba): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 09 12:17:02 crc kubenswrapper[4970]: E1209 12:17:02.298496 4970 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x_openshift-operators_d19808d9-8361-4275-a8ec-9ca8e6d7e806_0(3ff2a00e5f2277fbdcc701bfcfc3bb2a2641abd3034e5f7dd0ee612e4df594ba): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x" Dec 09 12:17:02 crc kubenswrapper[4970]: E1209 12:17:02.298518 4970 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x_openshift-operators_d19808d9-8361-4275-a8ec-9ca8e6d7e806_0(3ff2a00e5f2277fbdcc701bfcfc3bb2a2641abd3034e5f7dd0ee612e4df594ba): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x" Dec 09 12:17:02 crc kubenswrapper[4970]: E1209 12:17:02.298561 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x_openshift-operators(d19808d9-8361-4275-a8ec-9ca8e6d7e806)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x_openshift-operators(d19808d9-8361-4275-a8ec-9ca8e6d7e806)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x_openshift-operators_d19808d9-8361-4275-a8ec-9ca8e6d7e806_0(3ff2a00e5f2277fbdcc701bfcfc3bb2a2641abd3034e5f7dd0ee612e4df594ba): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x" podUID="d19808d9-8361-4275-a8ec-9ca8e6d7e806" Dec 09 12:17:02 crc kubenswrapper[4970]: E1209 12:17:02.364380 4970 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949_openshift-operators_afd71d3b-2f43-4061-951e-5fe3f5480a0d_0(9f0602f3bda802af849772db77fc84410ee6ddd42c21a31b141905f377d6dcee): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 09 12:17:02 crc kubenswrapper[4970]: E1209 12:17:02.364442 4970 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949_openshift-operators_afd71d3b-2f43-4061-951e-5fe3f5480a0d_0(9f0602f3bda802af849772db77fc84410ee6ddd42c21a31b141905f377d6dcee): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949" Dec 09 12:17:02 crc kubenswrapper[4970]: E1209 12:17:02.364463 4970 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949_openshift-operators_afd71d3b-2f43-4061-951e-5fe3f5480a0d_0(9f0602f3bda802af849772db77fc84410ee6ddd42c21a31b141905f377d6dcee): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949" Dec 09 12:17:02 crc kubenswrapper[4970]: E1209 12:17:02.364502 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949_openshift-operators(afd71d3b-2f43-4061-951e-5fe3f5480a0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949_openshift-operators(afd71d3b-2f43-4061-951e-5fe3f5480a0d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949_openshift-operators_afd71d3b-2f43-4061-951e-5fe3f5480a0d_0(9f0602f3bda802af849772db77fc84410ee6ddd42c21a31b141905f377d6dcee): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949" podUID="afd71d3b-2f43-4061-951e-5fe3f5480a0d" Dec 09 12:17:02 crc kubenswrapper[4970]: E1209 12:17:02.383868 4970 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-p56cg_openshift-operators_0ccba78f-ff44-4497-85b4-9c66a9289dc8_0(af8b2e5b702de7c00c45e7ae37837a2c238b9fd1c465419b90220353cff9ccb8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 09 12:17:02 crc kubenswrapper[4970]: E1209 12:17:02.383944 4970 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-p56cg_openshift-operators_0ccba78f-ff44-4497-85b4-9c66a9289dc8_0(af8b2e5b702de7c00c45e7ae37837a2c238b9fd1c465419b90220353cff9ccb8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p56cg" Dec 09 12:17:02 crc kubenswrapper[4970]: E1209 12:17:02.383964 4970 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-p56cg_openshift-operators_0ccba78f-ff44-4497-85b4-9c66a9289dc8_0(af8b2e5b702de7c00c45e7ae37837a2c238b9fd1c465419b90220353cff9ccb8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p56cg" Dec 09 12:17:02 crc kubenswrapper[4970]: E1209 12:17:02.384017 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-p56cg_openshift-operators(0ccba78f-ff44-4497-85b4-9c66a9289dc8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-p56cg_openshift-operators(0ccba78f-ff44-4497-85b4-9c66a9289dc8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-p56cg_openshift-operators_0ccba78f-ff44-4497-85b4-9c66a9289dc8_0(af8b2e5b702de7c00c45e7ae37837a2c238b9fd1c465419b90220353cff9ccb8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p56cg" podUID="0ccba78f-ff44-4497-85b4-9c66a9289dc8" Dec 09 12:17:02 crc kubenswrapper[4970]: E1209 12:17:02.390216 4970 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-xmr67_openshift-operators_7eb47195-0ebb-47e8-8685-0523cff07cc4_0(5e31281228bbf81c9347c5a7f13043b758e9dfd7dd33ba69365153b919eefc3f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 09 12:17:02 crc kubenswrapper[4970]: E1209 12:17:02.390303 4970 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-xmr67_openshift-operators_7eb47195-0ebb-47e8-8685-0523cff07cc4_0(5e31281228bbf81c9347c5a7f13043b758e9dfd7dd33ba69365153b919eefc3f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-d8bb48f5d-xmr67" Dec 09 12:17:02 crc kubenswrapper[4970]: E1209 12:17:02.390341 4970 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-xmr67_openshift-operators_7eb47195-0ebb-47e8-8685-0523cff07cc4_0(5e31281228bbf81c9347c5a7f13043b758e9dfd7dd33ba69365153b919eefc3f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-xmr67" Dec 09 12:17:02 crc kubenswrapper[4970]: E1209 12:17:02.390383 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-xmr67_openshift-operators(7eb47195-0ebb-47e8-8685-0523cff07cc4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-xmr67_openshift-operators(7eb47195-0ebb-47e8-8685-0523cff07cc4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-xmr67_openshift-operators_7eb47195-0ebb-47e8-8685-0523cff07cc4_0(5e31281228bbf81c9347c5a7f13043b758e9dfd7dd33ba69365153b919eefc3f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-xmr67" podUID="7eb47195-0ebb-47e8-8685-0523cff07cc4" Dec 09 12:17:02 crc kubenswrapper[4970]: E1209 12:17:02.401483 4970 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-f6dfs_openshift-operators_203ce897-84d1-41da-8b0e-7b6ef66698a6_0(b6c9c5fb22d546bd5d5b5c9e9188ec69f355bfe04187aa12a033a43505242ffc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 09 12:17:02 crc kubenswrapper[4970]: E1209 12:17:02.401561 4970 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-f6dfs_openshift-operators_203ce897-84d1-41da-8b0e-7b6ef66698a6_0(b6c9c5fb22d546bd5d5b5c9e9188ec69f355bfe04187aa12a033a43505242ffc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-f6dfs" Dec 09 12:17:02 crc kubenswrapper[4970]: E1209 12:17:02.401598 4970 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-f6dfs_openshift-operators_203ce897-84d1-41da-8b0e-7b6ef66698a6_0(b6c9c5fb22d546bd5d5b5c9e9188ec69f355bfe04187aa12a033a43505242ffc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5446b9c989-f6dfs" Dec 09 12:17:02 crc kubenswrapper[4970]: E1209 12:17:02.401639 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-f6dfs_openshift-operators(203ce897-84d1-41da-8b0e-7b6ef66698a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-f6dfs_openshift-operators(203ce897-84d1-41da-8b0e-7b6ef66698a6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-f6dfs_openshift-operators_203ce897-84d1-41da-8b0e-7b6ef66698a6_0(b6c9c5fb22d546bd5d5b5c9e9188ec69f355bfe04187aa12a033a43505242ffc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-f6dfs" podUID="203ce897-84d1-41da-8b0e-7b6ef66698a6" Dec 09 12:17:09 crc kubenswrapper[4970]: I1209 12:17:09.812301 4970 scope.go:117] "RemoveContainer" containerID="102b9e9e6ec75dba5b2b5ece21c19f866f26af414d0c0543cb6313adbab44221" Dec 09 12:17:09 crc kubenswrapper[4970]: E1209 12:17:09.813014 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-sgdqg_openshift-multus(81da4c74-d93e-4a7a-848a-c3539268368b)\"" pod="openshift-multus/multus-sgdqg" podUID="81da4c74-d93e-4a7a-848a-c3539268368b" Dec 09 12:17:13 crc kubenswrapper[4970]: I1209 12:17:13.811574 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p56cg" Dec 09 12:17:13 crc kubenswrapper[4970]: I1209 12:17:13.812329 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p56cg" Dec 09 12:17:13 crc kubenswrapper[4970]: E1209 12:17:13.844181 4970 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-p56cg_openshift-operators_0ccba78f-ff44-4497-85b4-9c66a9289dc8_0(574c0203abe7ccec45be1d93ab829b5750fb8fa58e228569802fc02797059e64): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 09 12:17:13 crc kubenswrapper[4970]: E1209 12:17:13.844607 4970 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-p56cg_openshift-operators_0ccba78f-ff44-4497-85b4-9c66a9289dc8_0(574c0203abe7ccec45be1d93ab829b5750fb8fa58e228569802fc02797059e64): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p56cg" Dec 09 12:17:13 crc kubenswrapper[4970]: E1209 12:17:13.844686 4970 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-p56cg_openshift-operators_0ccba78f-ff44-4497-85b4-9c66a9289dc8_0(574c0203abe7ccec45be1d93ab829b5750fb8fa58e228569802fc02797059e64): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p56cg" Dec 09 12:17:13 crc kubenswrapper[4970]: E1209 12:17:13.844788 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-p56cg_openshift-operators(0ccba78f-ff44-4497-85b4-9c66a9289dc8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-p56cg_openshift-operators(0ccba78f-ff44-4497-85b4-9c66a9289dc8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-p56cg_openshift-operators_0ccba78f-ff44-4497-85b4-9c66a9289dc8_0(574c0203abe7ccec45be1d93ab829b5750fb8fa58e228569802fc02797059e64): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p56cg" podUID="0ccba78f-ff44-4497-85b4-9c66a9289dc8" Dec 09 12:17:14 crc kubenswrapper[4970]: I1209 12:17:14.811786 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-xmr67" Dec 09 12:17:14 crc kubenswrapper[4970]: I1209 12:17:14.812822 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-xmr67" Dec 09 12:17:14 crc kubenswrapper[4970]: I1209 12:17:14.813215 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x" Dec 09 12:17:14 crc kubenswrapper[4970]: I1209 12:17:14.813530 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x" Dec 09 12:17:14 crc kubenswrapper[4970]: E1209 12:17:14.854322 4970 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-xmr67_openshift-operators_7eb47195-0ebb-47e8-8685-0523cff07cc4_0(7dd862b30dc849a7e413dd5463280d57ad1dd9dc5805652186101c332862d330): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 09 12:17:14 crc kubenswrapper[4970]: E1209 12:17:14.854421 4970 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-xmr67_openshift-operators_7eb47195-0ebb-47e8-8685-0523cff07cc4_0(7dd862b30dc849a7e413dd5463280d57ad1dd9dc5805652186101c332862d330): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-xmr67" Dec 09 12:17:14 crc kubenswrapper[4970]: E1209 12:17:14.854472 4970 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-xmr67_openshift-operators_7eb47195-0ebb-47e8-8685-0523cff07cc4_0(7dd862b30dc849a7e413dd5463280d57ad1dd9dc5805652186101c332862d330): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-d8bb48f5d-xmr67" Dec 09 12:17:14 crc kubenswrapper[4970]: E1209 12:17:14.854514 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-xmr67_openshift-operators(7eb47195-0ebb-47e8-8685-0523cff07cc4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-xmr67_openshift-operators(7eb47195-0ebb-47e8-8685-0523cff07cc4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-xmr67_openshift-operators_7eb47195-0ebb-47e8-8685-0523cff07cc4_0(7dd862b30dc849a7e413dd5463280d57ad1dd9dc5805652186101c332862d330): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-xmr67" podUID="7eb47195-0ebb-47e8-8685-0523cff07cc4" Dec 09 12:17:14 crc kubenswrapper[4970]: E1209 12:17:14.863981 4970 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x_openshift-operators_d19808d9-8361-4275-a8ec-9ca8e6d7e806_0(d608351f646aad9c3b3086e51a2e1122776608623c9f548c1aa80dff7e81c52b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 09 12:17:14 crc kubenswrapper[4970]: E1209 12:17:14.864075 4970 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x_openshift-operators_d19808d9-8361-4275-a8ec-9ca8e6d7e806_0(d608351f646aad9c3b3086e51a2e1122776608623c9f548c1aa80dff7e81c52b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x" Dec 09 12:17:14 crc kubenswrapper[4970]: E1209 12:17:14.864104 4970 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x_openshift-operators_d19808d9-8361-4275-a8ec-9ca8e6d7e806_0(d608351f646aad9c3b3086e51a2e1122776608623c9f548c1aa80dff7e81c52b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x" Dec 09 12:17:14 crc kubenswrapper[4970]: E1209 12:17:14.864172 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x_openshift-operators(d19808d9-8361-4275-a8ec-9ca8e6d7e806)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x_openshift-operators(d19808d9-8361-4275-a8ec-9ca8e6d7e806)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x_openshift-operators_d19808d9-8361-4275-a8ec-9ca8e6d7e806_0(d608351f646aad9c3b3086e51a2e1122776608623c9f548c1aa80dff7e81c52b): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x" podUID="d19808d9-8361-4275-a8ec-9ca8e6d7e806" Dec 09 12:17:16 crc kubenswrapper[4970]: I1209 12:17:16.812021 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-f6dfs" Dec 09 12:17:16 crc kubenswrapper[4970]: I1209 12:17:16.812893 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-f6dfs" Dec 09 12:17:16 crc kubenswrapper[4970]: E1209 12:17:16.835619 4970 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-f6dfs_openshift-operators_203ce897-84d1-41da-8b0e-7b6ef66698a6_0(52852e3199f62d6ff583a4d539c56a2744db35b6ef9016685aeb04af38deb51b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 09 12:17:16 crc kubenswrapper[4970]: E1209 12:17:16.835698 4970 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-f6dfs_openshift-operators_203ce897-84d1-41da-8b0e-7b6ef66698a6_0(52852e3199f62d6ff583a4d539c56a2744db35b6ef9016685aeb04af38deb51b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-f6dfs" Dec 09 12:17:16 crc kubenswrapper[4970]: E1209 12:17:16.835738 4970 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-f6dfs_openshift-operators_203ce897-84d1-41da-8b0e-7b6ef66698a6_0(52852e3199f62d6ff583a4d539c56a2744db35b6ef9016685aeb04af38deb51b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-f6dfs" Dec 09 12:17:16 crc kubenswrapper[4970]: E1209 12:17:16.835788 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-f6dfs_openshift-operators(203ce897-84d1-41da-8b0e-7b6ef66698a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-f6dfs_openshift-operators(203ce897-84d1-41da-8b0e-7b6ef66698a6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-f6dfs_openshift-operators_203ce897-84d1-41da-8b0e-7b6ef66698a6_0(52852e3199f62d6ff583a4d539c56a2744db35b6ef9016685aeb04af38deb51b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-f6dfs" podUID="203ce897-84d1-41da-8b0e-7b6ef66698a6" Dec 09 12:17:17 crc kubenswrapper[4970]: I1209 12:17:17.814760 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949" Dec 09 12:17:17 crc kubenswrapper[4970]: I1209 12:17:17.815600 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949" Dec 09 12:17:17 crc kubenswrapper[4970]: E1209 12:17:17.838893 4970 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949_openshift-operators_afd71d3b-2f43-4061-951e-5fe3f5480a0d_0(f532be7eafdab906de38fe13ce2fdc3ac48dc323268ee1b69b0d2933a38725bd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 09 12:17:17 crc kubenswrapper[4970]: E1209 12:17:17.838955 4970 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949_openshift-operators_afd71d3b-2f43-4061-951e-5fe3f5480a0d_0(f532be7eafdab906de38fe13ce2fdc3ac48dc323268ee1b69b0d2933a38725bd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949" Dec 09 12:17:17 crc kubenswrapper[4970]: E1209 12:17:17.838976 4970 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949_openshift-operators_afd71d3b-2f43-4061-951e-5fe3f5480a0d_0(f532be7eafdab906de38fe13ce2fdc3ac48dc323268ee1b69b0d2933a38725bd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949" Dec 09 12:17:17 crc kubenswrapper[4970]: E1209 12:17:17.839021 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949_openshift-operators(afd71d3b-2f43-4061-951e-5fe3f5480a0d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949_openshift-operators(afd71d3b-2f43-4061-951e-5fe3f5480a0d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949_openshift-operators_afd71d3b-2f43-4061-951e-5fe3f5480a0d_0(f532be7eafdab906de38fe13ce2fdc3ac48dc323268ee1b69b0d2933a38725bd): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949" podUID="afd71d3b-2f43-4061-951e-5fe3f5480a0d" Dec 09 12:17:22 crc kubenswrapper[4970]: I1209 12:17:22.812805 4970 scope.go:117] "RemoveContainer" containerID="102b9e9e6ec75dba5b2b5ece21c19f866f26af414d0c0543cb6313adbab44221" Dec 09 12:17:23 crc kubenswrapper[4970]: I1209 12:17:23.131946 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-sgdqg_81da4c74-d93e-4a7a-848a-c3539268368b/kube-multus/2.log" Dec 09 12:17:23 crc kubenswrapper[4970]: I1209 12:17:23.132377 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-sgdqg" event={"ID":"81da4c74-d93e-4a7a-848a-c3539268368b","Type":"ContainerStarted","Data":"a397728bfcc89eccc3317dc48f44a7f23d3c9935205a8d4efdffc5fcee17bf23"} Dec 09 12:17:24 crc kubenswrapper[4970]: I1209 12:17:24.812643 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p56cg" Dec 09 12:17:24 crc kubenswrapper[4970]: I1209 12:17:24.813850 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p56cg" Dec 09 12:17:25 crc kubenswrapper[4970]: I1209 12:17:25.007846 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vc8lk" Dec 09 12:17:25 crc kubenswrapper[4970]: I1209 12:17:25.232144 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-p56cg"] Dec 09 12:17:25 crc kubenswrapper[4970]: W1209 12:17:25.241857 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ccba78f_ff44_4497_85b4_9c66a9289dc8.slice/crio-8bd90b7cd0c18d1cb8177f4fc2fd70d9b98d8be1114a3e0d98d4e7b57cab7ef0 WatchSource:0}: Error finding container 8bd90b7cd0c18d1cb8177f4fc2fd70d9b98d8be1114a3e0d98d4e7b57cab7ef0: Status 404 returned error can't find the container with id 8bd90b7cd0c18d1cb8177f4fc2fd70d9b98d8be1114a3e0d98d4e7b57cab7ef0 Dec 09 12:17:26 crc kubenswrapper[4970]: I1209 12:17:26.150857 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p56cg" event={"ID":"0ccba78f-ff44-4497-85b4-9c66a9289dc8","Type":"ContainerStarted","Data":"8bd90b7cd0c18d1cb8177f4fc2fd70d9b98d8be1114a3e0d98d4e7b57cab7ef0"} Dec 09 12:17:27 crc kubenswrapper[4970]: I1209 12:17:27.819153 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x" Dec 09 12:17:27 crc kubenswrapper[4970]: I1209 12:17:27.825407 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x" Dec 09 12:17:28 crc kubenswrapper[4970]: I1209 12:17:28.811762 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-f6dfs" Dec 09 12:17:28 crc kubenswrapper[4970]: I1209 12:17:28.812451 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-f6dfs" Dec 09 12:17:29 crc kubenswrapper[4970]: I1209 12:17:29.811953 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-xmr67" Dec 09 12:17:29 crc kubenswrapper[4970]: I1209 12:17:29.812476 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-xmr67" Dec 09 12:17:30 crc kubenswrapper[4970]: I1209 12:17:30.812390 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949" Dec 09 12:17:30 crc kubenswrapper[4970]: I1209 12:17:30.813034 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949" Dec 09 12:17:32 crc kubenswrapper[4970]: I1209 12:17:32.694312 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-xmr67"] Dec 09 12:17:32 crc kubenswrapper[4970]: I1209 12:17:32.806323 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x"] Dec 09 12:17:32 crc kubenswrapper[4970]: W1209 12:17:32.807638 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd19808d9_8361_4275_a8ec_9ca8e6d7e806.slice/crio-fcc99d75f02446e77486f6e4252180bf46de5d23a3bb02c0bbad4daeb34292e8 WatchSource:0}: Error finding container fcc99d75f02446e77486f6e4252180bf46de5d23a3bb02c0bbad4daeb34292e8: Status 404 returned error can't find the container with id fcc99d75f02446e77486f6e4252180bf46de5d23a3bb02c0bbad4daeb34292e8 Dec 09 12:17:32 crc kubenswrapper[4970]: I1209 12:17:32.812107 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-f6dfs"] Dec 09 12:17:32 crc kubenswrapper[4970]: W1209 12:17:32.812381 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod203ce897_84d1_41da_8b0e_7b6ef66698a6.slice/crio-49c23e4f535427f972e8ae6382e449ebdd4f43636c71829e8d7bb8419b6c81d2 WatchSource:0}: Error finding container 49c23e4f535427f972e8ae6382e449ebdd4f43636c71829e8d7bb8419b6c81d2: Status 404 returned error can't find the container with id 49c23e4f535427f972e8ae6382e449ebdd4f43636c71829e8d7bb8419b6c81d2 Dec 09 12:17:32 crc kubenswrapper[4970]: I1209 12:17:32.817767 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949"] Dec 09 12:17:32 crc kubenswrapper[4970]: W1209 12:17:32.823582 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafd71d3b_2f43_4061_951e_5fe3f5480a0d.slice/crio-09f0060812ff6298d76699dd710a48c52942c1650ac6afc517b3f4b66805a726 WatchSource:0}: Error finding container 09f0060812ff6298d76699dd710a48c52942c1650ac6afc517b3f4b66805a726: Status 404 returned error can't find the container with id 09f0060812ff6298d76699dd710a48c52942c1650ac6afc517b3f4b66805a726 Dec 09 12:17:33 crc kubenswrapper[4970]: I1209 12:17:33.200847 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-f6dfs" event={"ID":"203ce897-84d1-41da-8b0e-7b6ef66698a6","Type":"ContainerStarted","Data":"49c23e4f535427f972e8ae6382e449ebdd4f43636c71829e8d7bb8419b6c81d2"} Dec 09 12:17:33 crc kubenswrapper[4970]: I1209 12:17:33.202283 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949" event={"ID":"afd71d3b-2f43-4061-951e-5fe3f5480a0d","Type":"ContainerStarted","Data":"09f0060812ff6298d76699dd710a48c52942c1650ac6afc517b3f4b66805a726"} Dec 09 12:17:33 crc kubenswrapper[4970]: I1209 12:17:33.203555 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-xmr67" event={"ID":"7eb47195-0ebb-47e8-8685-0523cff07cc4","Type":"ContainerStarted","Data":"2244b4a6c42b6c2d1006daea2e44843a16ab163ff3a3e133bcd2a536a7e673c9"} Dec 09 12:17:33 crc 
kubenswrapper[4970]: I1209 12:17:33.205350 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p56cg" event={"ID":"0ccba78f-ff44-4497-85b4-9c66a9289dc8","Type":"ContainerStarted","Data":"78cf9d71fa3580816573049809383773ba1b07d7bb7f186937da763129a0c070"} Dec 09 12:17:33 crc kubenswrapper[4970]: I1209 12:17:33.206801 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x" event={"ID":"d19808d9-8361-4275-a8ec-9ca8e6d7e806","Type":"ContainerStarted","Data":"fcc99d75f02446e77486f6e4252180bf46de5d23a3bb02c0bbad4daeb34292e8"} Dec 09 12:17:33 crc kubenswrapper[4970]: I1209 12:17:33.224173 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-p56cg" podStartSLOduration=27.101079121 podStartE2EDuration="34.224152967s" podCreationTimestamp="2025-12-09 12:16:59 +0000 UTC" firstStartedPulling="2025-12-09 12:17:25.244153109 +0000 UTC m=+657.804634160" lastFinishedPulling="2025-12-09 12:17:32.367226955 +0000 UTC m=+664.927708006" observedRunningTime="2025-12-09 12:17:33.222528561 +0000 UTC m=+665.783009612" watchObservedRunningTime="2025-12-09 12:17:33.224152967 +0000 UTC m=+665.784634018" Dec 09 12:17:38 crc kubenswrapper[4970]: I1209 12:17:38.239631 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-f6dfs" event={"ID":"203ce897-84d1-41da-8b0e-7b6ef66698a6","Type":"ContainerStarted","Data":"c4a362fff7b2ff87f9f5c6517eb03ad4d319217926889e6b856b2efe376d61ec"} Dec 09 12:17:38 crc kubenswrapper[4970]: I1209 12:17:38.240306 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-f6dfs" Dec 09 12:17:38 crc kubenswrapper[4970]: I1209 12:17:38.240768 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949" event={"ID":"afd71d3b-2f43-4061-951e-5fe3f5480a0d","Type":"ContainerStarted","Data":"30b039bd719598995d737faeb34e76787e6842abcb333d9036221dc4ddf3211f"} Dec 09 12:17:38 crc kubenswrapper[4970]: I1209 12:17:38.243187 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-xmr67" event={"ID":"7eb47195-0ebb-47e8-8685-0523cff07cc4","Type":"ContainerStarted","Data":"59436c49838c01b2924eae0a36045d8633ea55d847ee89ed0d2db3c683d6f37d"} Dec 09 12:17:38 crc kubenswrapper[4970]: I1209 12:17:38.243899 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-xmr67" Dec 09 12:17:38 crc kubenswrapper[4970]: I1209 12:17:38.244801 4970 patch_prober.go:28] interesting pod/observability-operator-d8bb48f5d-xmr67 container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.28:8081/healthz\": dial tcp 10.217.0.28:8081: connect: connection refused" start-of-body= Dec 09 12:17:38 crc kubenswrapper[4970]: I1209 12:17:38.244845 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-d8bb48f5d-xmr67" podUID="7eb47195-0ebb-47e8-8685-0523cff07cc4" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.28:8081/healthz\": dial tcp 10.217.0.28:8081: connect: connection refused" Dec 09 12:17:38 crc kubenswrapper[4970]: I1209 12:17:38.246356 4970 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x" event={"ID":"d19808d9-8361-4275-a8ec-9ca8e6d7e806","Type":"ContainerStarted","Data":"16036c8f26253b17354ff6118139205170337b48f5ffb3d395457f120a9587dc"} Dec 09 12:17:38 crc kubenswrapper[4970]: I1209 12:17:38.260643 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-f6dfs" podStartSLOduration=33.325849849 podStartE2EDuration="38.260617423s" podCreationTimestamp="2025-12-09 12:17:00 +0000 UTC" firstStartedPulling="2025-12-09 12:17:32.815736286 +0000 UTC m=+665.376217337" lastFinishedPulling="2025-12-09 12:17:37.75050386 +0000 UTC m=+670.310984911" observedRunningTime="2025-12-09 12:17:38.252819385 +0000 UTC m=+670.813300426" watchObservedRunningTime="2025-12-09 12:17:38.260617423 +0000 UTC m=+670.821098474" Dec 09 12:17:38 crc kubenswrapper[4970]: I1209 12:17:38.274030 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x" podStartSLOduration=34.333928364 podStartE2EDuration="39.274014177s" podCreationTimestamp="2025-12-09 12:16:59 +0000 UTC" firstStartedPulling="2025-12-09 12:17:32.809573463 +0000 UTC m=+665.370054514" lastFinishedPulling="2025-12-09 12:17:37.749659286 +0000 UTC m=+670.310140327" observedRunningTime="2025-12-09 12:17:38.269720177 +0000 UTC m=+670.830201228" watchObservedRunningTime="2025-12-09 12:17:38.274014177 +0000 UTC m=+670.834495228" Dec 09 12:17:38 crc kubenswrapper[4970]: I1209 12:17:38.305010 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-xmr67" podStartSLOduration=33.213669324 podStartE2EDuration="38.304989742s" podCreationTimestamp="2025-12-09 12:17:00 +0000 UTC" firstStartedPulling="2025-12-09 12:17:32.700914038 +0000 UTC m=+665.261395089" lastFinishedPulling="2025-12-09 12:17:37.792234456 +0000 UTC m=+670.352715507" observedRunningTime="2025-12-09 12:17:38.302496423 +0000 UTC m=+670.862977484" watchObservedRunningTime="2025-12-09 12:17:38.304989742 +0000 UTC m=+670.865470793" Dec 09 12:17:38 crc kubenswrapper[4970]: I1209 12:17:38.332798 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949" podStartSLOduration=34.41145857 podStartE2EDuration="39.332778709s" podCreationTimestamp="2025-12-09 12:16:59 +0000 UTC" firstStartedPulling="2025-12-09 12:17:32.826171977 +0000 UTC m=+665.386653028" lastFinishedPulling="2025-12-09 12:17:37.747492116 +0000 UTC m=+670.307973167" observedRunningTime="2025-12-09 12:17:38.330536216 +0000 UTC m=+670.891017277" watchObservedRunningTime="2025-12-09 12:17:38.332778709 +0000 UTC m=+670.893259760" Dec 09 12:17:39 crc kubenswrapper[4970]: I1209 12:17:39.294857 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-xmr67" Dec 09 12:17:45 crc kubenswrapper[4970]: I1209 12:17:45.651229 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-rtzqq"] Dec 09 12:17:45 crc kubenswrapper[4970]: I1209 12:17:45.655155 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-rtzqq" Dec 09 12:17:45 crc kubenswrapper[4970]: I1209 12:17:45.658653 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 09 12:17:45 crc kubenswrapper[4970]: I1209 12:17:45.659498 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 09 12:17:45 crc kubenswrapper[4970]: I1209 12:17:45.661409 4970 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-xl9q6" Dec 09 12:17:45 crc kubenswrapper[4970]: I1209 12:17:45.667890 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-rtzqq"] Dec 09 12:17:45 crc kubenswrapper[4970]: I1209 12:17:45.678845 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-8rjw8"] Dec 09 12:17:45 crc kubenswrapper[4970]: I1209 12:17:45.686101 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-8rjw8" Dec 09 12:17:45 crc kubenswrapper[4970]: I1209 12:17:45.689404 4970 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-h8cmp" Dec 09 12:17:45 crc kubenswrapper[4970]: I1209 12:17:45.707349 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-8rjw8"] Dec 09 12:17:45 crc kubenswrapper[4970]: I1209 12:17:45.719603 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-2jdph"] Dec 09 12:17:45 crc kubenswrapper[4970]: I1209 12:17:45.720435 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-2jdph" Dec 09 12:17:45 crc kubenswrapper[4970]: I1209 12:17:45.723582 4970 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-h8cs7" Dec 09 12:17:45 crc kubenswrapper[4970]: I1209 12:17:45.730310 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-2jdph"] Dec 09 12:17:45 crc kubenswrapper[4970]: I1209 12:17:45.756584 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tqqg\" (UniqueName: \"kubernetes.io/projected/afafc1e2-ad95-4e4b-878d-62163ffa77cf-kube-api-access-9tqqg\") pod \"cert-manager-5b446d88c5-8rjw8\" (UID: \"afafc1e2-ad95-4e4b-878d-62163ffa77cf\") " pod="cert-manager/cert-manager-5b446d88c5-8rjw8" Dec 09 12:17:45 crc kubenswrapper[4970]: I1209 12:17:45.756656 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6ld5\" (UniqueName: \"kubernetes.io/projected/fc26ad95-72b7-4cfa-9e37-e9aa0a80bc55-kube-api-access-d6ld5\") pod \"cert-manager-cainjector-7f985d654d-rtzqq\" (UID: \"fc26ad95-72b7-4cfa-9e37-e9aa0a80bc55\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-rtzqq" Dec 09 12:17:45 crc kubenswrapper[4970]: I1209 12:17:45.857467 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwl8p\" (UniqueName: \"kubernetes.io/projected/ceaa531e-a688-48ed-a405-74b9cb483ae2-kube-api-access-fwl8p\") pod \"cert-manager-webhook-5655c58dd6-2jdph\" (UID: \"ceaa531e-a688-48ed-a405-74b9cb483ae2\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-2jdph" Dec 09 12:17:45 
crc kubenswrapper[4970]: I1209 12:17:45.857532 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tqqg\" (UniqueName: \"kubernetes.io/projected/afafc1e2-ad95-4e4b-878d-62163ffa77cf-kube-api-access-9tqqg\") pod \"cert-manager-5b446d88c5-8rjw8\" (UID: \"afafc1e2-ad95-4e4b-878d-62163ffa77cf\") " pod="cert-manager/cert-manager-5b446d88c5-8rjw8" Dec 09 12:17:45 crc kubenswrapper[4970]: I1209 12:17:45.857618 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6ld5\" (UniqueName: \"kubernetes.io/projected/fc26ad95-72b7-4cfa-9e37-e9aa0a80bc55-kube-api-access-d6ld5\") pod \"cert-manager-cainjector-7f985d654d-rtzqq\" (UID: \"fc26ad95-72b7-4cfa-9e37-e9aa0a80bc55\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-rtzqq" Dec 09 12:17:45 crc kubenswrapper[4970]: I1209 12:17:45.885859 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6ld5\" (UniqueName: \"kubernetes.io/projected/fc26ad95-72b7-4cfa-9e37-e9aa0a80bc55-kube-api-access-d6ld5\") pod \"cert-manager-cainjector-7f985d654d-rtzqq\" (UID: \"fc26ad95-72b7-4cfa-9e37-e9aa0a80bc55\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-rtzqq" Dec 09 12:17:45 crc kubenswrapper[4970]: I1209 12:17:45.887276 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tqqg\" (UniqueName: \"kubernetes.io/projected/afafc1e2-ad95-4e4b-878d-62163ffa77cf-kube-api-access-9tqqg\") pod \"cert-manager-5b446d88c5-8rjw8\" (UID: \"afafc1e2-ad95-4e4b-878d-62163ffa77cf\") " pod="cert-manager/cert-manager-5b446d88c5-8rjw8" Dec 09 12:17:45 crc kubenswrapper[4970]: I1209 12:17:45.959172 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwl8p\" (UniqueName: \"kubernetes.io/projected/ceaa531e-a688-48ed-a405-74b9cb483ae2-kube-api-access-fwl8p\") pod \"cert-manager-webhook-5655c58dd6-2jdph\" (UID: \"ceaa531e-a688-48ed-a405-74b9cb483ae2\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-2jdph" Dec 09 12:17:45 crc kubenswrapper[4970]: I1209 12:17:45.978668 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwl8p\" (UniqueName: \"kubernetes.io/projected/ceaa531e-a688-48ed-a405-74b9cb483ae2-kube-api-access-fwl8p\") pod \"cert-manager-webhook-5655c58dd6-2jdph\" (UID: \"ceaa531e-a688-48ed-a405-74b9cb483ae2\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-2jdph" Dec 09 12:17:45 crc kubenswrapper[4970]: I1209 12:17:45.989955 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-rtzqq" Dec 09 12:17:46 crc kubenswrapper[4970]: I1209 12:17:46.019604 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-8rjw8" Dec 09 12:17:46 crc kubenswrapper[4970]: I1209 12:17:46.048956 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-2jdph" Dec 09 12:17:46 crc kubenswrapper[4970]: I1209 12:17:46.257648 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-8rjw8"] Dec 09 12:17:46 crc kubenswrapper[4970]: W1209 12:17:46.259645 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafafc1e2_ad95_4e4b_878d_62163ffa77cf.slice/crio-60ce5af321538b22fc4304ddc124673d6d3390654797948336a2bfccaadff35c WatchSource:0}: Error finding container 60ce5af321538b22fc4304ddc124673d6d3390654797948336a2bfccaadff35c: Status 404 returned error can't find the container with id 60ce5af321538b22fc4304ddc124673d6d3390654797948336a2bfccaadff35c Dec 09 12:17:46 crc kubenswrapper[4970]: I1209 12:17:46.302368 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-2jdph"] Dec 09 12:17:46 crc kubenswrapper[4970]: I1209 12:17:46.333481 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-8rjw8" event={"ID":"afafc1e2-ad95-4e4b-878d-62163ffa77cf","Type":"ContainerStarted","Data":"60ce5af321538b22fc4304ddc124673d6d3390654797948336a2bfccaadff35c"} Dec 09 12:17:46 crc kubenswrapper[4970]: I1209 12:17:46.334359 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-2jdph" event={"ID":"ceaa531e-a688-48ed-a405-74b9cb483ae2","Type":"ContainerStarted","Data":"cce21e474f828671395da7bb0f35c9251b3522eeaa64e8f3fc321d1476af71ed"} Dec 09 12:17:46 crc kubenswrapper[4970]: I1209 12:17:46.428799 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-rtzqq"] Dec 09 12:17:46 crc kubenswrapper[4970]: W1209 12:17:46.431422 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc26ad95_72b7_4cfa_9e37_e9aa0a80bc55.slice/crio-496303ead10d9c2e6e80b1e01f51318a5c3fa4f36afbf67fc7a2b4f7b9e5f81a WatchSource:0}: Error finding container 496303ead10d9c2e6e80b1e01f51318a5c3fa4f36afbf67fc7a2b4f7b9e5f81a: Status 404 returned error can't find the container with id 496303ead10d9c2e6e80b1e01f51318a5c3fa4f36afbf67fc7a2b4f7b9e5f81a Dec 09 12:17:47 crc kubenswrapper[4970]: I1209 12:17:47.343818 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-rtzqq" event={"ID":"fc26ad95-72b7-4cfa-9e37-e9aa0a80bc55","Type":"ContainerStarted","Data":"496303ead10d9c2e6e80b1e01f51318a5c3fa4f36afbf67fc7a2b4f7b9e5f81a"} Dec 09 12:17:50 crc kubenswrapper[4970]: I1209 12:17:50.370585 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-rtzqq" event={"ID":"fc26ad95-72b7-4cfa-9e37-e9aa0a80bc55","Type":"ContainerStarted","Data":"32451dd9973d58c6ff465613d30a4aa61abe9b2281e31344211979f34c2cced3"} Dec 09 12:17:50 crc kubenswrapper[4970]: I1209 12:17:50.373225 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-8rjw8" event={"ID":"afafc1e2-ad95-4e4b-878d-62163ffa77cf","Type":"ContainerStarted","Data":"6953e348b661353516ac83e661b2dfaad3aeaccb9286813de4ee88b9a50c5aa4"} Dec 09 12:17:50 crc kubenswrapper[4970]: I1209 12:17:50.375217 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-2jdph" 
event={"ID":"ceaa531e-a688-48ed-a405-74b9cb483ae2","Type":"ContainerStarted","Data":"b8ba903d270cc89131d46ac96a2284356eb9eb4ec76a13992f6a94612c820934"} Dec 09 12:17:50 crc kubenswrapper[4970]: I1209 12:17:50.375435 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-2jdph" Dec 09 12:17:50 crc kubenswrapper[4970]: I1209 12:17:50.388983 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-rtzqq" podStartSLOduration=1.638714443 podStartE2EDuration="5.388960333s" podCreationTimestamp="2025-12-09 12:17:45 +0000 UTC" firstStartedPulling="2025-12-09 12:17:46.434563409 +0000 UTC m=+678.995044470" lastFinishedPulling="2025-12-09 12:17:50.184809269 +0000 UTC m=+682.745290360" observedRunningTime="2025-12-09 12:17:50.38491297 +0000 UTC m=+682.945394031" watchObservedRunningTime="2025-12-09 12:17:50.388960333 +0000 UTC m=+682.949441394" Dec 09 12:17:50 crc kubenswrapper[4970]: I1209 12:17:50.401631 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-2jdph" podStartSLOduration=1.575540527 podStartE2EDuration="5.401616266s" podCreationTimestamp="2025-12-09 12:17:45 +0000 UTC" firstStartedPulling="2025-12-09 12:17:46.303186398 +0000 UTC m=+678.863667449" lastFinishedPulling="2025-12-09 12:17:50.129262137 +0000 UTC m=+682.689743188" observedRunningTime="2025-12-09 12:17:50.399825436 +0000 UTC m=+682.960306497" watchObservedRunningTime="2025-12-09 12:17:50.401616266 +0000 UTC m=+682.962097307" Dec 09 12:17:50 crc kubenswrapper[4970]: I1209 12:17:50.475967 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-f6dfs" Dec 09 12:17:50 crc kubenswrapper[4970]: I1209 12:17:50.493450 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-8rjw8" podStartSLOduration=1.6282751819999999 podStartE2EDuration="5.493426692s" podCreationTimestamp="2025-12-09 12:17:45 +0000 UTC" firstStartedPulling="2025-12-09 12:17:46.261655768 +0000 UTC m=+678.822136819" lastFinishedPulling="2025-12-09 12:17:50.126807278 +0000 UTC m=+682.687288329" observedRunningTime="2025-12-09 12:17:50.419711722 +0000 UTC m=+682.980192773" watchObservedRunningTime="2025-12-09 12:17:50.493426692 +0000 UTC m=+683.053907743" Dec 09 12:17:56 crc kubenswrapper[4970]: I1209 12:17:56.055444 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-2jdph" Dec 09 12:18:16 crc kubenswrapper[4970]: I1209 12:18:16.010647 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:18:16 crc kubenswrapper[4970]: I1209 12:18:16.011450 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:18:21 crc kubenswrapper[4970]: I1209 12:18:21.869684 4970 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fcz8fp"] Dec 09 12:18:21 crc kubenswrapper[4970]: I1209 12:18:21.871599 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fcz8fp" Dec 09 12:18:21 crc kubenswrapper[4970]: I1209 12:18:21.876963 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 09 12:18:21 crc kubenswrapper[4970]: I1209 12:18:21.885931 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fcz8fp"] Dec 09 12:18:22 crc kubenswrapper[4970]: I1209 12:18:22.045756 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lrxx\" (UniqueName: \"kubernetes.io/projected/7137ec19-d7d4-44d9-b9ac-6c30c1ec095c-kube-api-access-4lrxx\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fcz8fp\" (UID: \"7137ec19-d7d4-44d9-b9ac-6c30c1ec095c\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fcz8fp" Dec 09 12:18:22 crc kubenswrapper[4970]: I1209 12:18:22.045810 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7137ec19-d7d4-44d9-b9ac-6c30c1ec095c-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fcz8fp\" (UID: \"7137ec19-d7d4-44d9-b9ac-6c30c1ec095c\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fcz8fp" Dec 09 12:18:22 crc kubenswrapper[4970]: I1209 12:18:22.045973 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7137ec19-d7d4-44d9-b9ac-6c30c1ec095c-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fcz8fp\" (UID: \"7137ec19-d7d4-44d9-b9ac-6c30c1ec095c\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fcz8fp" Dec 09 12:18:22 crc kubenswrapper[4970]: I1209 12:18:22.088724 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8n45x4"] Dec 09 12:18:22 crc kubenswrapper[4970]: I1209 12:18:22.090155 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8n45x4"
Dec 09 12:18:22 crc kubenswrapper[4970]: I1209 12:18:22.106072 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8n45x4"]
Dec 09 12:18:22 crc kubenswrapper[4970]: I1209 12:18:22.147815 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh9b7\" (UniqueName: \"kubernetes.io/projected/064f1a68-bece-4a7c-b759-f73831fe100b-kube-api-access-nh9b7\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8n45x4\" (UID: \"064f1a68-bece-4a7c-b759-f73831fe100b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8n45x4"
Dec 09 12:18:22 crc kubenswrapper[4970]: I1209 12:18:22.147946 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lrxx\" (UniqueName: \"kubernetes.io/projected/7137ec19-d7d4-44d9-b9ac-6c30c1ec095c-kube-api-access-4lrxx\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fcz8fp\" (UID: \"7137ec19-d7d4-44d9-b9ac-6c30c1ec095c\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fcz8fp"
Dec 09 12:18:22 crc kubenswrapper[4970]: I1209 12:18:22.148393 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7137ec19-d7d4-44d9-b9ac-6c30c1ec095c-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fcz8fp\" (UID: \"7137ec19-d7d4-44d9-b9ac-6c30c1ec095c\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fcz8fp"
Dec 09 12:18:22 crc kubenswrapper[4970]: I1209 12:18:22.148920 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7137ec19-d7d4-44d9-b9ac-6c30c1ec095c-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fcz8fp\" (UID: \"7137ec19-d7d4-44d9-b9ac-6c30c1ec095c\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fcz8fp"
Dec 09 12:18:22 crc kubenswrapper[4970]: I1209 12:18:22.148994 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/064f1a68-bece-4a7c-b759-f73831fe100b-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8n45x4\" (UID: \"064f1a68-bece-4a7c-b759-f73831fe100b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8n45x4"
Dec 09 12:18:22 crc kubenswrapper[4970]: I1209 12:18:22.149059 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7137ec19-d7d4-44d9-b9ac-6c30c1ec095c-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fcz8fp\" (UID: \"7137ec19-d7d4-44d9-b9ac-6c30c1ec095c\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fcz8fp"
Dec 09 12:18:22 crc kubenswrapper[4970]: I1209 12:18:22.149111 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/064f1a68-bece-4a7c-b759-f73831fe100b-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8n45x4\" (UID: \"064f1a68-bece-4a7c-b759-f73831fe100b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8n45x4"
Dec 09 12:18:22 crc kubenswrapper[4970]: I1209 12:18:22.149516 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7137ec19-d7d4-44d9-b9ac-6c30c1ec095c-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fcz8fp\" (UID: \"7137ec19-d7d4-44d9-b9ac-6c30c1ec095c\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fcz8fp"
Dec 09 12:18:22 crc kubenswrapper[4970]: I1209 12:18:22.167361 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lrxx\" (UniqueName: \"kubernetes.io/projected/7137ec19-d7d4-44d9-b9ac-6c30c1ec095c-kube-api-access-4lrxx\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fcz8fp\" (UID: \"7137ec19-d7d4-44d9-b9ac-6c30c1ec095c\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fcz8fp"
Dec 09 12:18:22 crc kubenswrapper[4970]: I1209 12:18:22.187716 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fcz8fp"
Dec 09 12:18:22 crc kubenswrapper[4970]: I1209 12:18:22.250428 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/064f1a68-bece-4a7c-b759-f73831fe100b-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8n45x4\" (UID: \"064f1a68-bece-4a7c-b759-f73831fe100b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8n45x4"
Dec 09 12:18:22 crc kubenswrapper[4970]: I1209 12:18:22.251194 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/064f1a68-bece-4a7c-b759-f73831fe100b-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8n45x4\" (UID: \"064f1a68-bece-4a7c-b759-f73831fe100b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8n45x4"
Dec 09 12:18:22 crc kubenswrapper[4970]: I1209 12:18:22.251386 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nh9b7\" (UniqueName: \"kubernetes.io/projected/064f1a68-bece-4a7c-b759-f73831fe100b-kube-api-access-nh9b7\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8n45x4\" (UID: \"064f1a68-bece-4a7c-b759-f73831fe100b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8n45x4"
Dec 09 12:18:22 crc kubenswrapper[4970]: I1209 12:18:22.251506 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/064f1a68-bece-4a7c-b759-f73831fe100b-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8n45x4\" (UID: \"064f1a68-bece-4a7c-b759-f73831fe100b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8n45x4"
Dec 09 12:18:22 crc kubenswrapper[4970]: I1209 12:18:22.251839 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/064f1a68-bece-4a7c-b759-f73831fe100b-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8n45x4\" (UID: \"064f1a68-bece-4a7c-b759-f73831fe100b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8n45x4"
Dec 09 12:18:22 crc kubenswrapper[4970]: I1209 12:18:22.294313 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nh9b7\" (UniqueName: \"kubernetes.io/projected/064f1a68-bece-4a7c-b759-f73831fe100b-kube-api-access-nh9b7\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8n45x4\" (UID: \"064f1a68-bece-4a7c-b759-f73831fe100b\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8n45x4"
Dec 09 12:18:22 crc kubenswrapper[4970]: I1209 12:18:22.403318 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8n45x4"
Dec 09 12:18:22 crc kubenswrapper[4970]: I1209 12:18:22.413873 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fcz8fp"]
Dec 09 12:18:22 crc kubenswrapper[4970]: I1209 12:18:22.592154 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fcz8fp" event={"ID":"7137ec19-d7d4-44d9-b9ac-6c30c1ec095c","Type":"ContainerStarted","Data":"ea94f5c6b7428e7f28d515ff9936524b8c88d773bfd610e1af919bb001dbe2fa"}
Dec 09 12:18:22 crc kubenswrapper[4970]: I1209 12:18:22.824091 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8n45x4"]
Dec 09 12:18:23 crc kubenswrapper[4970]: I1209 12:18:23.600966 4970 generic.go:334] "Generic (PLEG): container finished" podID="7137ec19-d7d4-44d9-b9ac-6c30c1ec095c" containerID="9a67ffbe76f2e5ce6f9bb7d03b479ebe1640c15c8f319a0f8f4a30cb4d392562" exitCode=0
Dec 09 12:18:23 crc kubenswrapper[4970]: I1209 12:18:23.601039 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fcz8fp" event={"ID":"7137ec19-d7d4-44d9-b9ac-6c30c1ec095c","Type":"ContainerDied","Data":"9a67ffbe76f2e5ce6f9bb7d03b479ebe1640c15c8f319a0f8f4a30cb4d392562"}
Dec 09 12:18:23 crc kubenswrapper[4970]: I1209 12:18:23.603745 4970 generic.go:334] "Generic (PLEG): container finished" podID="064f1a68-bece-4a7c-b759-f73831fe100b" containerID="60d9c117bb28441a4907d3832d96d6d3009829d05fbebe5082187b111e975cff" exitCode=0
Dec 09 12:18:23 crc kubenswrapper[4970]: I1209 12:18:23.603774 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8n45x4" event={"ID":"064f1a68-bece-4a7c-b759-f73831fe100b","Type":"ContainerDied","Data":"60d9c117bb28441a4907d3832d96d6d3009829d05fbebe5082187b111e975cff"}
Dec 09 12:18:23 crc kubenswrapper[4970]: I1209 12:18:23.603790 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8n45x4" event={"ID":"064f1a68-bece-4a7c-b759-f73831fe100b","Type":"ContainerStarted","Data":"8e0c3d4ae3cbafa91b0e25d2f2a166eb07cb077163947a6d2d2c9eeeff0db772"}
Dec 09 12:18:24 crc kubenswrapper[4970]: I1209 12:18:24.612166 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fcz8fp" event={"ID":"7137ec19-d7d4-44d9-b9ac-6c30c1ec095c","Type":"ContainerStarted","Data":"582055356d385a2b7eaa2eac162d44a17444bf60f23e70f394132c9d19ccbc36"}
Dec 09 12:18:24 crc kubenswrapper[4970]: E1209 12:18:24.959106 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod064f1a68_bece_4a7c_b759_f73831fe100b.slice/crio-e463f0202003f8989b8ca0e8528ae9409d1d729c5ec9b2dc08e52fc1efbe7cd4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7137ec19_d7d4_44d9_b9ac_6c30c1ec095c.slice/crio-4e3efb827b9e638f4abad834617513c0d4d948e12a0beb57d37ad0b084e18978.scope\": RecentStats: unable to find data in memory cache]"
Dec 09 12:18:25 crc kubenswrapper[4970]: I1209 12:18:25.621497 4970 generic.go:334] "Generic (PLEG): container finished" podID="7137ec19-d7d4-44d9-b9ac-6c30c1ec095c" containerID="582055356d385a2b7eaa2eac162d44a17444bf60f23e70f394132c9d19ccbc36" exitCode=0
Dec 09 12:18:25 crc kubenswrapper[4970]: I1209 12:18:25.621550 4970 generic.go:334] "Generic (PLEG): container finished" podID="7137ec19-d7d4-44d9-b9ac-6c30c1ec095c" containerID="4e3efb827b9e638f4abad834617513c0d4d948e12a0beb57d37ad0b084e18978" exitCode=0
Dec 09 12:18:25 crc kubenswrapper[4970]: I1209 12:18:25.621589 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fcz8fp" event={"ID":"7137ec19-d7d4-44d9-b9ac-6c30c1ec095c","Type":"ContainerDied","Data":"582055356d385a2b7eaa2eac162d44a17444bf60f23e70f394132c9d19ccbc36"}
Dec 09 12:18:25 crc kubenswrapper[4970]: I1209 12:18:25.621645 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fcz8fp" event={"ID":"7137ec19-d7d4-44d9-b9ac-6c30c1ec095c","Type":"ContainerDied","Data":"4e3efb827b9e638f4abad834617513c0d4d948e12a0beb57d37ad0b084e18978"}
Dec 09 12:18:25 crc kubenswrapper[4970]: I1209 12:18:25.623452 4970 generic.go:334] "Generic (PLEG): container finished" podID="064f1a68-bece-4a7c-b759-f73831fe100b" containerID="e463f0202003f8989b8ca0e8528ae9409d1d729c5ec9b2dc08e52fc1efbe7cd4" exitCode=0
Dec 09 12:18:25 crc kubenswrapper[4970]: I1209 12:18:25.623488 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8n45x4" event={"ID":"064f1a68-bece-4a7c-b759-f73831fe100b","Type":"ContainerDied","Data":"e463f0202003f8989b8ca0e8528ae9409d1d729c5ec9b2dc08e52fc1efbe7cd4"}
Dec 09 12:18:26 crc kubenswrapper[4970]: I1209 12:18:26.632755 4970 generic.go:334] "Generic (PLEG): container finished" podID="064f1a68-bece-4a7c-b759-f73831fe100b" containerID="c535288d4dc48858c7336321780809b2d74b08351f85e30126c6d02a1395d17d" exitCode=0
Dec 09 12:18:26 crc kubenswrapper[4970]: I1209 12:18:26.632879 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8n45x4" event={"ID":"064f1a68-bece-4a7c-b759-f73831fe100b","Type":"ContainerDied","Data":"c535288d4dc48858c7336321780809b2d74b08351f85e30126c6d02a1395d17d"}
Dec 09 12:18:26 crc kubenswrapper[4970]: I1209 12:18:26.884371 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fcz8fp"
Dec 09 12:18:27 crc kubenswrapper[4970]: I1209 12:18:27.023992 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7137ec19-d7d4-44d9-b9ac-6c30c1ec095c-util\") pod \"7137ec19-d7d4-44d9-b9ac-6c30c1ec095c\" (UID: \"7137ec19-d7d4-44d9-b9ac-6c30c1ec095c\") "
Dec 09 12:18:27 crc kubenswrapper[4970]: I1209 12:18:27.024092 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lrxx\" (UniqueName: \"kubernetes.io/projected/7137ec19-d7d4-44d9-b9ac-6c30c1ec095c-kube-api-access-4lrxx\") pod \"7137ec19-d7d4-44d9-b9ac-6c30c1ec095c\" (UID: \"7137ec19-d7d4-44d9-b9ac-6c30c1ec095c\") "
Dec 09 12:18:27 crc kubenswrapper[4970]: I1209 12:18:27.024126 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7137ec19-d7d4-44d9-b9ac-6c30c1ec095c-bundle\") pod \"7137ec19-d7d4-44d9-b9ac-6c30c1ec095c\" (UID: \"7137ec19-d7d4-44d9-b9ac-6c30c1ec095c\") "
Dec 09 12:18:27 crc kubenswrapper[4970]: I1209 12:18:27.025348 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7137ec19-d7d4-44d9-b9ac-6c30c1ec095c-bundle" (OuterVolumeSpecName: "bundle") pod "7137ec19-d7d4-44d9-b9ac-6c30c1ec095c" (UID: "7137ec19-d7d4-44d9-b9ac-6c30c1ec095c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:18:27 crc kubenswrapper[4970]: I1209 12:18:27.033190 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7137ec19-d7d4-44d9-b9ac-6c30c1ec095c-kube-api-access-4lrxx" (OuterVolumeSpecName: "kube-api-access-4lrxx") pod "7137ec19-d7d4-44d9-b9ac-6c30c1ec095c" (UID: "7137ec19-d7d4-44d9-b9ac-6c30c1ec095c"). InnerVolumeSpecName "kube-api-access-4lrxx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:18:27 crc kubenswrapper[4970]: I1209 12:18:27.048583 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7137ec19-d7d4-44d9-b9ac-6c30c1ec095c-util" (OuterVolumeSpecName: "util") pod "7137ec19-d7d4-44d9-b9ac-6c30c1ec095c" (UID: "7137ec19-d7d4-44d9-b9ac-6c30c1ec095c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:18:27 crc kubenswrapper[4970]: I1209 12:18:27.125703 4970 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7137ec19-d7d4-44d9-b9ac-6c30c1ec095c-util\") on node \"crc\" DevicePath \"\""
Dec 09 12:18:27 crc kubenswrapper[4970]: I1209 12:18:27.125737 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lrxx\" (UniqueName: \"kubernetes.io/projected/7137ec19-d7d4-44d9-b9ac-6c30c1ec095c-kube-api-access-4lrxx\") on node \"crc\" DevicePath \"\""
Dec 09 12:18:27 crc kubenswrapper[4970]: I1209 12:18:27.125751 4970 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7137ec19-d7d4-44d9-b9ac-6c30c1ec095c-bundle\") on node \"crc\" DevicePath \"\""
Dec 09 12:18:27 crc kubenswrapper[4970]: I1209 12:18:27.640268 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fcz8fp" event={"ID":"7137ec19-d7d4-44d9-b9ac-6c30c1ec095c","Type":"ContainerDied","Data":"ea94f5c6b7428e7f28d515ff9936524b8c88d773bfd610e1af919bb001dbe2fa"}
Dec 09 12:18:27 crc kubenswrapper[4970]: I1209 12:18:27.640329 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea94f5c6b7428e7f28d515ff9936524b8c88d773bfd610e1af919bb001dbe2fa"
Dec 09 12:18:27 crc kubenswrapper[4970]: I1209 12:18:27.640298 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fcz8fp"
Dec 09 12:18:27 crc kubenswrapper[4970]: I1209 12:18:27.879201 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8n45x4"
Dec 09 12:18:28 crc kubenswrapper[4970]: I1209 12:18:28.035955 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/064f1a68-bece-4a7c-b759-f73831fe100b-util\") pod \"064f1a68-bece-4a7c-b759-f73831fe100b\" (UID: \"064f1a68-bece-4a7c-b759-f73831fe100b\") "
Dec 09 12:18:28 crc kubenswrapper[4970]: I1209 12:18:28.036075 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nh9b7\" (UniqueName: \"kubernetes.io/projected/064f1a68-bece-4a7c-b759-f73831fe100b-kube-api-access-nh9b7\") pod \"064f1a68-bece-4a7c-b759-f73831fe100b\" (UID: \"064f1a68-bece-4a7c-b759-f73831fe100b\") "
Dec 09 12:18:28 crc kubenswrapper[4970]: I1209 12:18:28.036115 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/064f1a68-bece-4a7c-b759-f73831fe100b-bundle\") pod \"064f1a68-bece-4a7c-b759-f73831fe100b\" (UID: \"064f1a68-bece-4a7c-b759-f73831fe100b\") "
Dec 09 12:18:28 crc kubenswrapper[4970]: I1209 12:18:28.036938 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/064f1a68-bece-4a7c-b759-f73831fe100b-bundle" (OuterVolumeSpecName: "bundle") pod "064f1a68-bece-4a7c-b759-f73831fe100b" (UID: "064f1a68-bece-4a7c-b759-f73831fe100b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:18:28 crc kubenswrapper[4970]: I1209 12:18:28.040767 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/064f1a68-bece-4a7c-b759-f73831fe100b-kube-api-access-nh9b7" (OuterVolumeSpecName: "kube-api-access-nh9b7") pod "064f1a68-bece-4a7c-b759-f73831fe100b" (UID: "064f1a68-bece-4a7c-b759-f73831fe100b"). InnerVolumeSpecName "kube-api-access-nh9b7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:18:28 crc kubenswrapper[4970]: I1209 12:18:28.057214 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/064f1a68-bece-4a7c-b759-f73831fe100b-util" (OuterVolumeSpecName: "util") pod "064f1a68-bece-4a7c-b759-f73831fe100b" (UID: "064f1a68-bece-4a7c-b759-f73831fe100b"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:18:28 crc kubenswrapper[4970]: I1209 12:18:28.137477 4970 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/064f1a68-bece-4a7c-b759-f73831fe100b-bundle\") on node \"crc\" DevicePath \"\""
Dec 09 12:18:28 crc kubenswrapper[4970]: I1209 12:18:28.137504 4970 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/064f1a68-bece-4a7c-b759-f73831fe100b-util\") on node \"crc\" DevicePath \"\""
Dec 09 12:18:28 crc kubenswrapper[4970]: I1209 12:18:28.137515 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nh9b7\" (UniqueName: \"kubernetes.io/projected/064f1a68-bece-4a7c-b759-f73831fe100b-kube-api-access-nh9b7\") on node \"crc\" DevicePath \"\""
Dec 09 12:18:28 crc kubenswrapper[4970]: I1209 12:18:28.648890 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8n45x4" event={"ID":"064f1a68-bece-4a7c-b759-f73831fe100b","Type":"ContainerDied","Data":"8e0c3d4ae3cbafa91b0e25d2f2a166eb07cb077163947a6d2d2c9eeeff0db772"}
Dec 09 12:18:28 crc kubenswrapper[4970]: I1209 12:18:28.649267 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e0c3d4ae3cbafa91b0e25d2f2a166eb07cb077163947a6d2d2c9eeeff0db772"
Dec 09 12:18:28 crc kubenswrapper[4970]: I1209 12:18:28.648940 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8n45x4"
Dec 09 12:18:39 crc kubenswrapper[4970]: I1209 12:18:39.548096 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-695fd4cd57-pk8wf"]
Dec 09 12:18:39 crc kubenswrapper[4970]: E1209 12:18:39.548917 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7137ec19-d7d4-44d9-b9ac-6c30c1ec095c" containerName="pull"
Dec 09 12:18:39 crc kubenswrapper[4970]: I1209 12:18:39.548933 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="7137ec19-d7d4-44d9-b9ac-6c30c1ec095c" containerName="pull"
Dec 09 12:18:39 crc kubenswrapper[4970]: E1209 12:18:39.548957 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7137ec19-d7d4-44d9-b9ac-6c30c1ec095c" containerName="util"
Dec 09 12:18:39 crc kubenswrapper[4970]: I1209 12:18:39.548964 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="7137ec19-d7d4-44d9-b9ac-6c30c1ec095c" containerName="util"
Dec 09 12:18:39 crc kubenswrapper[4970]: E1209 12:18:39.548977 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="064f1a68-bece-4a7c-b759-f73831fe100b" containerName="pull"
Dec 09 12:18:39 crc kubenswrapper[4970]: I1209 12:18:39.548984 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="064f1a68-bece-4a7c-b759-f73831fe100b" containerName="pull"
Dec 09 12:18:39 crc kubenswrapper[4970]: E1209 12:18:39.548996 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7137ec19-d7d4-44d9-b9ac-6c30c1ec095c" containerName="extract"
Dec 09 12:18:39 crc kubenswrapper[4970]: I1209 12:18:39.549005 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="7137ec19-d7d4-44d9-b9ac-6c30c1ec095c" containerName="extract"
Dec 09 12:18:39 crc kubenswrapper[4970]: E1209 12:18:39.549014 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="064f1a68-bece-4a7c-b759-f73831fe100b" containerName="extract"
Dec 09 12:18:39 crc kubenswrapper[4970]: I1209 12:18:39.549021 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="064f1a68-bece-4a7c-b759-f73831fe100b" containerName="extract"
Dec 09 12:18:39 crc kubenswrapper[4970]: E1209 12:18:39.549035 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="064f1a68-bece-4a7c-b759-f73831fe100b" containerName="util"
Dec 09 12:18:39 crc kubenswrapper[4970]: I1209 12:18:39.549042 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="064f1a68-bece-4a7c-b759-f73831fe100b" containerName="util"
Dec 09 12:18:39 crc kubenswrapper[4970]: I1209 12:18:39.549217 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="064f1a68-bece-4a7c-b759-f73831fe100b" containerName="extract"
Dec 09 12:18:39 crc kubenswrapper[4970]: I1209 12:18:39.549236 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="7137ec19-d7d4-44d9-b9ac-6c30c1ec095c" containerName="extract"
Dec 09 12:18:39 crc kubenswrapper[4970]: I1209 12:18:39.549953 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-695fd4cd57-pk8wf"
Dec 09 12:18:39 crc kubenswrapper[4970]: I1209 12:18:39.555807 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt"
Dec 09 12:18:39 crc kubenswrapper[4970]: I1209 12:18:39.556270 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert"
Dec 09 12:18:39 crc kubenswrapper[4970]: I1209 12:18:39.557089 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config"
Dec 09 12:18:39 crc kubenswrapper[4970]: I1209 12:18:39.557348 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-6h5zp"
Dec 09 12:18:39 crc kubenswrapper[4970]: I1209 12:18:39.557399 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics"
Dec 09 12:18:39 crc kubenswrapper[4970]: I1209 12:18:39.570157 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt"
Dec 09 12:18:39 crc kubenswrapper[4970]: I1209 12:18:39.584376 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-695fd4cd57-pk8wf"]
Dec 09 12:18:39 crc kubenswrapper[4970]: I1209 12:18:39.691900 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/b71fdf7a-ee27-4791-9b67-326b387accae-manager-config\") pod \"loki-operator-controller-manager-695fd4cd57-pk8wf\" (UID: \"b71fdf7a-ee27-4791-9b67-326b387accae\") " pod="openshift-operators-redhat/loki-operator-controller-manager-695fd4cd57-pk8wf"
Dec 09 12:18:39 crc kubenswrapper[4970]: I1209 12:18:39.691944 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqkks\" (UniqueName: \"kubernetes.io/projected/b71fdf7a-ee27-4791-9b67-326b387accae-kube-api-access-lqkks\") pod \"loki-operator-controller-manager-695fd4cd57-pk8wf\" (UID: \"b71fdf7a-ee27-4791-9b67-326b387accae\") " pod="openshift-operators-redhat/loki-operator-controller-manager-695fd4cd57-pk8wf"
Dec 09 12:18:39 crc kubenswrapper[4970]: I1209 12:18:39.692076 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b71fdf7a-ee27-4791-9b67-326b387accae-apiservice-cert\") pod \"loki-operator-controller-manager-695fd4cd57-pk8wf\" (UID: \"b71fdf7a-ee27-4791-9b67-326b387accae\") " pod="openshift-operators-redhat/loki-operator-controller-manager-695fd4cd57-pk8wf"
Dec 09 12:18:39 crc kubenswrapper[4970]: I1209 12:18:39.692169 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b71fdf7a-ee27-4791-9b67-326b387accae-webhook-cert\") pod \"loki-operator-controller-manager-695fd4cd57-pk8wf\" (UID: \"b71fdf7a-ee27-4791-9b67-326b387accae\") " pod="openshift-operators-redhat/loki-operator-controller-manager-695fd4cd57-pk8wf"
Dec 09 12:18:39 crc kubenswrapper[4970]: I1209 12:18:39.692278 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b71fdf7a-ee27-4791-9b67-326b387accae-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-695fd4cd57-pk8wf\" (UID: \"b71fdf7a-ee27-4791-9b67-326b387accae\") " pod="openshift-operators-redhat/loki-operator-controller-manager-695fd4cd57-pk8wf"
Dec 09 12:18:39 crc kubenswrapper[4970]: I1209 12:18:39.793126 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/b71fdf7a-ee27-4791-9b67-326b387accae-manager-config\") pod \"loki-operator-controller-manager-695fd4cd57-pk8wf\" (UID: \"b71fdf7a-ee27-4791-9b67-326b387accae\") " pod="openshift-operators-redhat/loki-operator-controller-manager-695fd4cd57-pk8wf"
Dec 09 12:18:39 crc kubenswrapper[4970]: I1209 12:18:39.793180 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqkks\" (UniqueName: \"kubernetes.io/projected/b71fdf7a-ee27-4791-9b67-326b387accae-kube-api-access-lqkks\") pod \"loki-operator-controller-manager-695fd4cd57-pk8wf\" (UID: \"b71fdf7a-ee27-4791-9b67-326b387accae\") " pod="openshift-operators-redhat/loki-operator-controller-manager-695fd4cd57-pk8wf"
Dec 09 12:18:39 crc kubenswrapper[4970]: I1209 12:18:39.793210 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b71fdf7a-ee27-4791-9b67-326b387accae-apiservice-cert\") pod \"loki-operator-controller-manager-695fd4cd57-pk8wf\" (UID: \"b71fdf7a-ee27-4791-9b67-326b387accae\") " pod="openshift-operators-redhat/loki-operator-controller-manager-695fd4cd57-pk8wf"
Dec 09 12:18:39 crc kubenswrapper[4970]: I1209 12:18:39.793262 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b71fdf7a-ee27-4791-9b67-326b387accae-webhook-cert\") pod \"loki-operator-controller-manager-695fd4cd57-pk8wf\" (UID: \"b71fdf7a-ee27-4791-9b67-326b387accae\") " pod="openshift-operators-redhat/loki-operator-controller-manager-695fd4cd57-pk8wf"
Dec 09 12:18:39 crc kubenswrapper[4970]: I1209 12:18:39.793363 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b71fdf7a-ee27-4791-9b67-326b387accae-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-695fd4cd57-pk8wf\" (UID: \"b71fdf7a-ee27-4791-9b67-326b387accae\") " pod="openshift-operators-redhat/loki-operator-controller-manager-695fd4cd57-pk8wf"
Dec 09 12:18:39 crc kubenswrapper[4970]: I1209 12:18:39.794098 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/b71fdf7a-ee27-4791-9b67-326b387accae-manager-config\") pod \"loki-operator-controller-manager-695fd4cd57-pk8wf\" (UID: \"b71fdf7a-ee27-4791-9b67-326b387accae\") " pod="openshift-operators-redhat/loki-operator-controller-manager-695fd4cd57-pk8wf"
Dec 09 12:18:39 crc kubenswrapper[4970]: I1209 12:18:39.799454 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b71fdf7a-ee27-4791-9b67-326b387accae-apiservice-cert\") pod \"loki-operator-controller-manager-695fd4cd57-pk8wf\" (UID: \"b71fdf7a-ee27-4791-9b67-326b387accae\") " pod="openshift-operators-redhat/loki-operator-controller-manager-695fd4cd57-pk8wf"
Dec 09 12:18:39 crc kubenswrapper[4970]: I1209 12:18:39.799750 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b71fdf7a-ee27-4791-9b67-326b387accae-webhook-cert\") pod \"loki-operator-controller-manager-695fd4cd57-pk8wf\" (UID: \"b71fdf7a-ee27-4791-9b67-326b387accae\") " pod="openshift-operators-redhat/loki-operator-controller-manager-695fd4cd57-pk8wf"
Dec 09 12:18:39 crc kubenswrapper[4970]: I1209 12:18:39.800850 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b71fdf7a-ee27-4791-9b67-326b387accae-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-695fd4cd57-pk8wf\" (UID: \"b71fdf7a-ee27-4791-9b67-326b387accae\") " pod="openshift-operators-redhat/loki-operator-controller-manager-695fd4cd57-pk8wf"
Dec 09 12:18:39 crc kubenswrapper[4970]: I1209 12:18:39.813154 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqkks\" (UniqueName: \"kubernetes.io/projected/b71fdf7a-ee27-4791-9b67-326b387accae-kube-api-access-lqkks\") pod \"loki-operator-controller-manager-695fd4cd57-pk8wf\" (UID: \"b71fdf7a-ee27-4791-9b67-326b387accae\") " pod="openshift-operators-redhat/loki-operator-controller-manager-695fd4cd57-pk8wf"
Dec 09 12:18:39 crc kubenswrapper[4970]: I1209 12:18:39.872153 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-695fd4cd57-pk8wf"
Dec 09 12:18:40 crc kubenswrapper[4970]: I1209 12:18:40.359020 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-695fd4cd57-pk8wf"]
Dec 09 12:18:40 crc kubenswrapper[4970]: I1209 12:18:40.735282 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-695fd4cd57-pk8wf" event={"ID":"b71fdf7a-ee27-4791-9b67-326b387accae","Type":"ContainerStarted","Data":"26ea701249dabee1fa12256a96841d7361011c30d76852d5b4580454552b62ab"}
Dec 09 12:18:42 crc kubenswrapper[4970]: I1209 12:18:42.858382 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-4tr6l"]
Dec 09 12:18:42 crc kubenswrapper[4970]: I1209 12:18:42.866382 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-4tr6l"]
Dec 09 12:18:42 crc kubenswrapper[4970]: I1209 12:18:42.866491 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-ff9846bd-4tr6l"
Dec 09 12:18:42 crc kubenswrapper[4970]: I1209 12:18:42.868847 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt"
Dec 09 12:18:42 crc kubenswrapper[4970]: I1209 12:18:42.871467 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt"
Dec 09 12:18:42 crc kubenswrapper[4970]: I1209 12:18:42.872633 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-57x69"
Dec 09 12:18:42 crc kubenswrapper[4970]: I1209 12:18:42.935218 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmt6t\" (UniqueName: \"kubernetes.io/projected/f4784e64-4938-4478-808e-f17b945fcd60-kube-api-access-kmt6t\") pod \"cluster-logging-operator-ff9846bd-4tr6l\" (UID: \"f4784e64-4938-4478-808e-f17b945fcd60\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-4tr6l"
Dec 09 12:18:43 crc kubenswrapper[4970]: I1209 12:18:43.036912 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmt6t\" (UniqueName: \"kubernetes.io/projected/f4784e64-4938-4478-808e-f17b945fcd60-kube-api-access-kmt6t\") pod \"cluster-logging-operator-ff9846bd-4tr6l\" (UID: \"f4784e64-4938-4478-808e-f17b945fcd60\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-4tr6l"
Dec 09 12:18:43 crc kubenswrapper[4970]: I1209 12:18:43.055345 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmt6t\" (UniqueName: \"kubernetes.io/projected/f4784e64-4938-4478-808e-f17b945fcd60-kube-api-access-kmt6t\") pod \"cluster-logging-operator-ff9846bd-4tr6l\" (UID: \"f4784e64-4938-4478-808e-f17b945fcd60\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-4tr6l"
Dec 09 12:18:43 crc kubenswrapper[4970]: I1209 12:18:43.194117 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-ff9846bd-4tr6l"
Dec 09 12:18:43 crc kubenswrapper[4970]: I1209 12:18:43.691392 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-4tr6l"]
Dec 09 12:18:43 crc kubenswrapper[4970]: I1209 12:18:43.755725 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-ff9846bd-4tr6l" event={"ID":"f4784e64-4938-4478-808e-f17b945fcd60","Type":"ContainerStarted","Data":"b8dca6211db9ee23727e2058af69801084b6a5c5a0112f849a66471aedd180c1"}
Dec 09 12:18:46 crc kubenswrapper[4970]: I1209 12:18:46.011146 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 12:18:46 crc kubenswrapper[4970]: I1209 12:18:46.011556 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 12:18:50 crc kubenswrapper[4970]: I1209 12:18:50.820558 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-695fd4cd57-pk8wf" event={"ID":"b71fdf7a-ee27-4791-9b67-326b387accae","Type":"ContainerStarted","Data":"e642d131c9d3078e3ea99be3df744ae9598656bbecc12972a85ad1a35488f21f"}
Dec 09 12:18:58 crc kubenswrapper[4970]: I1209 12:18:58.884315 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-695fd4cd57-pk8wf" event={"ID":"b71fdf7a-ee27-4791-9b67-326b387accae","Type":"ContainerStarted","Data":"1f7021b31ac3544edb0cc1a184e7ff3e9b85f8675488ace548eb45f219b8b704"}
Dec 09 12:18:58 crc kubenswrapper[4970]: I1209 12:18:58.884868 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-695fd4cd57-pk8wf"
Dec 09 12:18:58 crc kubenswrapper[4970]: I1209 12:18:58.886290 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-ff9846bd-4tr6l" event={"ID":"f4784e64-4938-4478-808e-f17b945fcd60","Type":"ContainerStarted","Data":"629f400f34411989b942d5ba03a3a1187af07c8d4127758e0fdd592e5ead3962"}
Dec 09 12:18:58 crc kubenswrapper[4970]: I1209 12:18:58.887198 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-695fd4cd57-pk8wf"
Dec 09 12:18:58 crc kubenswrapper[4970]: I1209 12:18:58.913443 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-695fd4cd57-pk8wf" podStartSLOduration=2.344638735 podStartE2EDuration="19.913411074s" podCreationTimestamp="2025-12-09 12:18:39 +0000 UTC" firstStartedPulling="2025-12-09 12:18:40.370148746 +0000 UTC m=+732.930629797" lastFinishedPulling="2025-12-09 12:18:57.938921085 +0000 UTC m=+750.499402136" observedRunningTime="2025-12-09 12:18:58.901546716 +0000 UTC m=+751.462027817" watchObservedRunningTime="2025-12-09 12:18:58.913411074 +0000 UTC m=+751.473892165"
Dec 09 12:18:58 crc kubenswrapper[4970]: I1209 12:18:58.960224 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-ff9846bd-4tr6l" podStartSLOduration=2.848210293 podStartE2EDuration="16.96020596s" podCreationTimestamp="2025-12-09 12:18:42 +0000 UTC" firstStartedPulling="2025-12-09 12:18:43.685670339 +0000 UTC m=+736.246151400" lastFinishedPulling="2025-12-09 12:18:57.797666016 +0000 UTC m=+750.358147067" observedRunningTime="2025-12-09 12:18:58.958321449 +0000 UTC m=+751.518802590" watchObservedRunningTime="2025-12-09 12:18:58.96020596 +0000 UTC m=+751.520687011"
Dec 09 12:18:59 crc kubenswrapper[4970]: I1209 12:18:59.028473 4970 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 09 12:19:03 crc kubenswrapper[4970]: I1209 12:19:03.540800 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"]
Dec 09 12:19:03 crc kubenswrapper[4970]: I1209 12:19:03.542431 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio"
Dec 09 12:19:03 crc kubenswrapper[4970]: I1209 12:19:03.544659 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt"
Dec 09 12:19:03 crc kubenswrapper[4970]: I1209 12:19:03.545134 4970 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-5xxmh"
Dec 09 12:19:03 crc kubenswrapper[4970]: I1209 12:19:03.545336 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt"
Dec 09 12:19:03 crc kubenswrapper[4970]: I1209 12:19:03.549576 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"]
Dec 09 12:19:03 crc kubenswrapper[4970]: I1209 12:19:03.663904 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-532f8cd1-a8ef-4143-90e2-685924f9e538\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-532f8cd1-a8ef-4143-90e2-685924f9e538\") pod \"minio\" (UID: \"cf4ff4d5-33b4-4be5-9efe-3e16f39c4817\") " pod="minio-dev/minio"
Dec 09 12:19:03 crc kubenswrapper[4970]: I1209 12:19:03.663977 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzzh9\" (UniqueName: \"kubernetes.io/projected/cf4ff4d5-33b4-4be5-9efe-3e16f39c4817-kube-api-access-jzzh9\") pod \"minio\" (UID: \"cf4ff4d5-33b4-4be5-9efe-3e16f39c4817\") " pod="minio-dev/minio"
Dec 09 12:19:03 crc kubenswrapper[4970]: I1209 12:19:03.766119 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-532f8cd1-a8ef-4143-90e2-685924f9e538\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-532f8cd1-a8ef-4143-90e2-685924f9e538\") pod \"minio\" (UID: \"cf4ff4d5-33b4-4be5-9efe-3e16f39c4817\") " pod="minio-dev/minio"
Dec 09 12:19:03 crc kubenswrapper[4970]: I1209 12:19:03.766170 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzzh9\" (UniqueName: \"kubernetes.io/projected/cf4ff4d5-33b4-4be5-9efe-3e16f39c4817-kube-api-access-jzzh9\") pod \"minio\" (UID: \"cf4ff4d5-33b4-4be5-9efe-3e16f39c4817\") " pod="minio-dev/minio"
Dec 09 12:19:03 crc kubenswrapper[4970]: I1209 12:19:03.769841 4970 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 09 12:19:03 crc kubenswrapper[4970]: I1209 12:19:03.769884 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-532f8cd1-a8ef-4143-90e2-685924f9e538\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-532f8cd1-a8ef-4143-90e2-685924f9e538\") pod \"minio\" (UID: \"cf4ff4d5-33b4-4be5-9efe-3e16f39c4817\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/03c903584984c49610b8afe49ddcac9341bbbc4a801fb5fde0c5802e8da76311/globalmount\"" pod="minio-dev/minio"
Dec 09 12:19:03 crc kubenswrapper[4970]: I1209 12:19:03.798874 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzzh9\" (UniqueName: \"kubernetes.io/projected/cf4ff4d5-33b4-4be5-9efe-3e16f39c4817-kube-api-access-jzzh9\") pod \"minio\" (UID: \"cf4ff4d5-33b4-4be5-9efe-3e16f39c4817\") " pod="minio-dev/minio"
Dec 09 12:19:03 crc kubenswrapper[4970]: I1209 12:19:03.814101 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-532f8cd1-a8ef-4143-90e2-685924f9e538\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-532f8cd1-a8ef-4143-90e2-685924f9e538\") pod \"minio\" (UID: \"cf4ff4d5-33b4-4be5-9efe-3e16f39c4817\") " pod="minio-dev/minio"
Dec 09 12:19:03 crc kubenswrapper[4970]: I1209 12:19:03.859481 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio"
Dec 09 12:19:04 crc kubenswrapper[4970]: I1209 12:19:04.257956 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"]
Dec 09 12:19:04 crc kubenswrapper[4970]: I1209 12:19:04.921997 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"cf4ff4d5-33b4-4be5-9efe-3e16f39c4817","Type":"ContainerStarted","Data":"e313e1e996b61747b17283afe0549d1c01a40cc62cd3003d2216470fa98f97b9"}
Dec 09 12:19:08 crc kubenswrapper[4970]: I1209 12:19:08.946269 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"cf4ff4d5-33b4-4be5-9efe-3e16f39c4817","Type":"ContainerStarted","Data":"1069a9911bcdba57653ba7ee2dbc354fe0f2d6dbe4fdfb39873de63795b618c1"}
Dec 09 12:19:08 crc kubenswrapper[4970]: I1209 12:19:08.960933 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=5.358489041 podStartE2EDuration="8.960914849s" podCreationTimestamp="2025-12-09 12:19:00 +0000 UTC" firstStartedPulling="2025-12-09 12:19:04.269607314 +0000 UTC m=+756.830088365" lastFinishedPulling="2025-12-09 12:19:07.872033112 +0000 UTC m=+760.432514173" observedRunningTime="2025-12-09 12:19:08.958616988 +0000 UTC m=+761.519098049" watchObservedRunningTime="2025-12-09 12:19:08.960914849 +0000 UTC m=+761.521395900"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.439269 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-kh68l"]
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.442859 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-76cc67bf56-kh68l"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.445881 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.445881 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.446270 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.446447 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.458629 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-9dn5t"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.464839 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-kh68l"]
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.583063 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-pj2ck"]
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.583918 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-5895d59bb8-pj2ck"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.586156 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.586943 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.587402 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.588426 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/8f98b088-8ae1-4d5a-9917-36d3c95bf08f-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-kh68l\" (UID: \"8f98b088-8ae1-4d5a-9917-36d3c95bf08f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-kh68l"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.588470 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f98b088-8ae1-4d5a-9917-36d3c95bf08f-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-kh68l\" (UID: \"8f98b088-8ae1-4d5a-9917-36d3c95bf08f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-kh68l"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.588593 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f98b088-8ae1-4d5a-9917-36d3c95bf08f-config\") pod \"logging-loki-distributor-76cc67bf56-kh68l\" (UID: \"8f98b088-8ae1-4d5a-9917-36d3c95bf08f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-kh68l"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.588681 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/8f98b088-8ae1-4d5a-9917-36d3c95bf08f-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-kh68l\" (UID: \"8f98b088-8ae1-4d5a-9917-36d3c95bf08f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-kh68l"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.588739 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdjp8\" (UniqueName: \"kubernetes.io/projected/8f98b088-8ae1-4d5a-9917-36d3c95bf08f-kube-api-access-zdjp8\") pod \"logging-loki-distributor-76cc67bf56-kh68l\" (UID: \"8f98b088-8ae1-4d5a-9917-36d3c95bf08f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-kh68l"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.608893 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-pj2ck"]
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.671373 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-zmggt"]
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.672338 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zmggt"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.677796 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.683479 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.689317 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-zmggt"]
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.689851 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/8f98b088-8ae1-4d5a-9917-36d3c95bf08f-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-kh68l\" (UID: \"8f98b088-8ae1-4d5a-9917-36d3c95bf08f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-kh68l"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.689914 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdjp8\" (UniqueName: \"kubernetes.io/projected/8f98b088-8ae1-4d5a-9917-36d3c95bf08f-kube-api-access-zdjp8\") pod \"logging-loki-distributor-76cc67bf56-kh68l\" (UID: \"8f98b088-8ae1-4d5a-9917-36d3c95bf08f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-kh68l"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.689944 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/5a4976c0-636d-4915-97ae-f8fe8cfebb95-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-pj2ck\" (UID: \"5a4976c0-636d-4915-97ae-f8fe8cfebb95\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-pj2ck"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.690010 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a4976c0-636d-4915-97ae-f8fe8cfebb95-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-pj2ck\" (UID: \"5a4976c0-636d-4915-97ae-f8fe8cfebb95\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-pj2ck"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.690050 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/8f98b088-8ae1-4d5a-9917-36d3c95bf08f-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-kh68l\" (UID: \"8f98b088-8ae1-4d5a-9917-36d3c95bf08f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-kh68l"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.690073 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f98b088-8ae1-4d5a-9917-36d3c95bf08f-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-kh68l\" (UID: \"8f98b088-8ae1-4d5a-9917-36d3c95bf08f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-kh68l"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.690093 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/5a4976c0-636d-4915-97ae-f8fe8cfebb95-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-pj2ck\" (UID: \"5a4976c0-636d-4915-97ae-f8fe8cfebb95\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-pj2ck"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.690150 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a4976c0-636d-4915-97ae-f8fe8cfebb95-config\") pod \"logging-loki-querier-5895d59bb8-pj2ck\" (UID: \"5a4976c0-636d-4915-97ae-f8fe8cfebb95\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-pj2ck"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.690178 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl96l\" (UniqueName: \"kubernetes.io/projected/5a4976c0-636d-4915-97ae-f8fe8cfebb95-kube-api-access-fl96l\") pod \"logging-loki-querier-5895d59bb8-pj2ck\" (UID: \"5a4976c0-636d-4915-97ae-f8fe8cfebb95\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-pj2ck"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.690199 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/5a4976c0-636d-4915-97ae-f8fe8cfebb95-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-pj2ck\" (UID: \"5a4976c0-636d-4915-97ae-f8fe8cfebb95\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-pj2ck"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.690226 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f98b088-8ae1-4d5a-9917-36d3c95bf08f-config\") pod \"logging-loki-distributor-76cc67bf56-kh68l\" (UID: \"8f98b088-8ae1-4d5a-9917-36d3c95bf08f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-kh68l"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.691953 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f98b088-8ae1-4d5a-9917-36d3c95bf08f-config\") pod \"logging-loki-distributor-76cc67bf56-kh68l\" (UID: \"8f98b088-8ae1-4d5a-9917-36d3c95bf08f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-kh68l"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.695730 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f98b088-8ae1-4d5a-9917-36d3c95bf08f-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-kh68l\" (UID: \"8f98b088-8ae1-4d5a-9917-36d3c95bf08f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-kh68l"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.703083 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/8f98b088-8ae1-4d5a-9917-36d3c95bf08f-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-kh68l\" (UID: \"8f98b088-8ae1-4d5a-9917-36d3c95bf08f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-kh68l"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.708872 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/8f98b088-8ae1-4d5a-9917-36d3c95bf08f-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-kh68l\" (UID: \"8f98b088-8ae1-4d5a-9917-36d3c95bf08f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-kh68l"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.716972 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdjp8\" (UniqueName: \"kubernetes.io/projected/8f98b088-8ae1-4d5a-9917-36d3c95bf08f-kube-api-access-zdjp8\") pod \"logging-loki-distributor-76cc67bf56-kh68l\" (UID: \"8f98b088-8ae1-4d5a-9917-36d3c95bf08f\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-kh68l"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.778294 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-76cc67bf56-kh68l"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.791989 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/5a4976c0-636d-4915-97ae-f8fe8cfebb95-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-pj2ck\" (UID: \"5a4976c0-636d-4915-97ae-f8fe8cfebb95\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-pj2ck"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.792060 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c30caa0-938a-4ffc-b8e2-0c418d04e6f7-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-zmggt\" (UID: \"6c30caa0-938a-4ffc-b8e2-0c418d04e6f7\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zmggt"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.792104 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a4976c0-636d-4915-97ae-f8fe8cfebb95-config\") pod \"logging-loki-querier-5895d59bb8-pj2ck\" (UID: \"5a4976c0-636d-4915-97ae-f8fe8cfebb95\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-pj2ck"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.792126 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl96l\" (UniqueName: \"kubernetes.io/projected/5a4976c0-636d-4915-97ae-f8fe8cfebb95-kube-api-access-fl96l\") pod \"logging-loki-querier-5895d59bb8-pj2ck\" (UID: \"5a4976c0-636d-4915-97ae-f8fe8cfebb95\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-pj2ck"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.792147 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/5a4976c0-636d-4915-97ae-f8fe8cfebb95-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-pj2ck\" (UID: \"5a4976c0-636d-4915-97ae-f8fe8cfebb95\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-pj2ck"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.792192 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/5a4976c0-636d-4915-97ae-f8fe8cfebb95-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-pj2ck\" (UID: \"5a4976c0-636d-4915-97ae-f8fe8cfebb95\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-pj2ck"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.792219 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/6c30caa0-938a-4ffc-b8e2-0c418d04e6f7-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-zmggt\" (UID: \"6c30caa0-938a-4ffc-b8e2-0c418d04e6f7\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zmggt"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.792314 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9brjr\" (UniqueName: \"kubernetes.io/projected/6c30caa0-938a-4ffc-b8e2-0c418d04e6f7-kube-api-access-9brjr\") pod \"logging-loki-query-frontend-84558f7c9f-zmggt\" (UID: \"6c30caa0-938a-4ffc-b8e2-0c418d04e6f7\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zmggt"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.792366 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/6c30caa0-938a-4ffc-b8e2-0c418d04e6f7-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-zmggt\" (UID: \"6c30caa0-938a-4ffc-b8e2-0c418d04e6f7\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zmggt"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.792397 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a4976c0-636d-4915-97ae-f8fe8cfebb95-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-pj2ck\" (UID: \"5a4976c0-636d-4915-97ae-f8fe8cfebb95\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-pj2ck"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.792431 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c30caa0-938a-4ffc-b8e2-0c418d04e6f7-config\") pod \"logging-loki-query-frontend-84558f7c9f-zmggt\" (UID: \"6c30caa0-938a-4ffc-b8e2-0c418d04e6f7\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zmggt"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.793127 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a4976c0-636d-4915-97ae-f8fe8cfebb95-config\") pod \"logging-loki-querier-5895d59bb8-pj2ck\" (UID: \"5a4976c0-636d-4915-97ae-f8fe8cfebb95\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-pj2ck"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.797035 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/5a4976c0-636d-4915-97ae-f8fe8cfebb95-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-pj2ck\" (UID: \"5a4976c0-636d-4915-97ae-f8fe8cfebb95\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-pj2ck"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.797591 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a4976c0-636d-4915-97ae-f8fe8cfebb95-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-pj2ck\" (UID: \"5a4976c0-636d-4915-97ae-f8fe8cfebb95\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-pj2ck"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.802028 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/5a4976c0-636d-4915-97ae-f8fe8cfebb95-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-pj2ck\" (UID: \"5a4976c0-636d-4915-97ae-f8fe8cfebb95\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-pj2ck"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.814354 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-7c9c75f6cc-v4gvv"]
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.815627 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-v4gvv"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.824137 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/5a4976c0-636d-4915-97ae-f8fe8cfebb95-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-pj2ck\" (UID: \"5a4976c0-636d-4915-97ae-f8fe8cfebb95\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-pj2ck"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.824797 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.825458 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-n9fnl"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.827157 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.838564 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-7c9c75f6cc-rxvv4"]
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.839949 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-rxvv4"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.841738 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.841872 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.841937 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.874286 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl96l\" (UniqueName: \"kubernetes.io/projected/5a4976c0-636d-4915-97ae-f8fe8cfebb95-kube-api-access-fl96l\") pod \"logging-loki-querier-5895d59bb8-pj2ck\" (UID: \"5a4976c0-636d-4915-97ae-f8fe8cfebb95\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-pj2ck"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.893877 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-7c9c75f6cc-rxvv4"]
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.897047 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/e564b02f-7c51-411f-9c17-6a8e9aa357d0-tls-secret\") pod \"logging-loki-gateway-7c9c75f6cc-v4gvv\" (UID: \"e564b02f-7c51-411f-9c17-6a8e9aa357d0\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-v4gvv"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.897115 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e564b02f-7c51-411f-9c17-6a8e9aa357d0-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-7c9c75f6cc-v4gvv\" (UID: \"e564b02f-7c51-411f-9c17-6a8e9aa357d0\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-v4gvv"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.897149 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/e564b02f-7c51-411f-9c17-6a8e9aa357d0-rbac\") pod \"logging-loki-gateway-7c9c75f6cc-v4gvv\" (UID: \"e564b02f-7c51-411f-9c17-6a8e9aa357d0\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-v4gvv"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.897196 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/6c30caa0-938a-4ffc-b8e2-0c418d04e6f7-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-zmggt\" (UID: \"6c30caa0-938a-4ffc-b8e2-0c418d04e6f7\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zmggt"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.897227 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9brjr\" (UniqueName: \"kubernetes.io/projected/6c30caa0-938a-4ffc-b8e2-0c418d04e6f7-kube-api-access-9brjr\") pod \"logging-loki-query-frontend-84558f7c9f-zmggt\" (UID: \"6c30caa0-938a-4ffc-b8e2-0c418d04e6f7\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zmggt"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.897288 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/6c30caa0-938a-4ffc-b8e2-0c418d04e6f7-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-zmggt\" (UID: \"6c30caa0-938a-4ffc-b8e2-0c418d04e6f7\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zmggt"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.897333 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/e564b02f-7c51-411f-9c17-6a8e9aa357d0-lokistack-gateway\") pod \"logging-loki-gateway-7c9c75f6cc-v4gvv\" (UID: \"e564b02f-7c51-411f-9c17-6a8e9aa357d0\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-v4gvv"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.897420 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbx24\" (UniqueName: \"kubernetes.io/projected/e564b02f-7c51-411f-9c17-6a8e9aa357d0-kube-api-access-xbx24\") pod \"logging-loki-gateway-7c9c75f6cc-v4gvv\" (UID: \"e564b02f-7c51-411f-9c17-6a8e9aa357d0\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-v4gvv"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.897507 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/e564b02f-7c51-411f-9c17-6a8e9aa357d0-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-7c9c75f6cc-v4gvv\" (UID: \"e564b02f-7c51-411f-9c17-6a8e9aa357d0\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-v4gvv"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.897553 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c30caa0-938a-4ffc-b8e2-0c418d04e6f7-config\") pod \"logging-loki-query-frontend-84558f7c9f-zmggt\" (UID: \"6c30caa0-938a-4ffc-b8e2-0c418d04e6f7\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zmggt"
Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.897580 4970
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/e564b02f-7c51-411f-9c17-6a8e9aa357d0-tenants\") pod \"logging-loki-gateway-7c9c75f6cc-v4gvv\" (UID: \"e564b02f-7c51-411f-9c17-6a8e9aa357d0\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-v4gvv" Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.897669 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c30caa0-938a-4ffc-b8e2-0c418d04e6f7-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-zmggt\" (UID: \"6c30caa0-938a-4ffc-b8e2-0c418d04e6f7\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zmggt" Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.897758 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e564b02f-7c51-411f-9c17-6a8e9aa357d0-logging-loki-ca-bundle\") pod \"logging-loki-gateway-7c9c75f6cc-v4gvv\" (UID: \"e564b02f-7c51-411f-9c17-6a8e9aa357d0\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-v4gvv" Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.899228 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c30caa0-938a-4ffc-b8e2-0c418d04e6f7-config\") pod \"logging-loki-query-frontend-84558f7c9f-zmggt\" (UID: \"6c30caa0-938a-4ffc-b8e2-0c418d04e6f7\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zmggt" Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.900048 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c30caa0-938a-4ffc-b8e2-0c418d04e6f7-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-zmggt\" (UID: \"6c30caa0-938a-4ffc-b8e2-0c418d04e6f7\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zmggt" Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.903158 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/6c30caa0-938a-4ffc-b8e2-0c418d04e6f7-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-zmggt\" (UID: \"6c30caa0-938a-4ffc-b8e2-0c418d04e6f7\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zmggt" Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.904982 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/6c30caa0-938a-4ffc-b8e2-0c418d04e6f7-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-zmggt\" (UID: \"6c30caa0-938a-4ffc-b8e2-0c418d04e6f7\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zmggt" Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.906930 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-5895d59bb8-pj2ck" Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.936465 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9brjr\" (UniqueName: \"kubernetes.io/projected/6c30caa0-938a-4ffc-b8e2-0c418d04e6f7-kube-api-access-9brjr\") pod \"logging-loki-query-frontend-84558f7c9f-zmggt\" (UID: \"6c30caa0-938a-4ffc-b8e2-0c418d04e6f7\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zmggt" Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.958780 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-7c9c75f6cc-v4gvv"] Dec 09 12:19:12 crc kubenswrapper[4970]: I1209 12:19:12.996637 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zmggt" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.004364 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/570b42ae-35db-456f-933f-728031536759-tls-secret\") pod \"logging-loki-gateway-7c9c75f6cc-rxvv4\" (UID: \"570b42ae-35db-456f-933f-728031536759\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-rxvv4" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.004420 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/e564b02f-7c51-411f-9c17-6a8e9aa357d0-lokistack-gateway\") pod \"logging-loki-gateway-7c9c75f6cc-v4gvv\" (UID: \"e564b02f-7c51-411f-9c17-6a8e9aa357d0\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-v4gvv" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.004446 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbx24\" (UniqueName: \"kubernetes.io/projected/e564b02f-7c51-411f-9c17-6a8e9aa357d0-kube-api-access-xbx24\") pod \"logging-loki-gateway-7c9c75f6cc-v4gvv\" (UID: \"e564b02f-7c51-411f-9c17-6a8e9aa357d0\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-v4gvv" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.004471 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/e564b02f-7c51-411f-9c17-6a8e9aa357d0-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-7c9c75f6cc-v4gvv\" (UID: \"e564b02f-7c51-411f-9c17-6a8e9aa357d0\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-v4gvv" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.004501 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/570b42ae-35db-456f-933f-728031536759-tenants\") pod \"logging-loki-gateway-7c9c75f6cc-rxvv4\" (UID: \"570b42ae-35db-456f-933f-728031536759\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-rxvv4" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.004525 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/e564b02f-7c51-411f-9c17-6a8e9aa357d0-tenants\") pod \"logging-loki-gateway-7c9c75f6cc-v4gvv\" (UID: \"e564b02f-7c51-411f-9c17-6a8e9aa357d0\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-v4gvv" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.004564 4970 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/570b42ae-35db-456f-933f-728031536759-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-7c9c75f6cc-rxvv4\" (UID: \"570b42ae-35db-456f-933f-728031536759\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-rxvv4" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.004592 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/570b42ae-35db-456f-933f-728031536759-logging-loki-ca-bundle\") pod \"logging-loki-gateway-7c9c75f6cc-rxvv4\" (UID: \"570b42ae-35db-456f-933f-728031536759\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-rxvv4" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.004643 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e564b02f-7c51-411f-9c17-6a8e9aa357d0-logging-loki-ca-bundle\") pod \"logging-loki-gateway-7c9c75f6cc-v4gvv\" (UID: \"e564b02f-7c51-411f-9c17-6a8e9aa357d0\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-v4gvv" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.004678 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/e564b02f-7c51-411f-9c17-6a8e9aa357d0-tls-secret\") pod \"logging-loki-gateway-7c9c75f6cc-v4gvv\" (UID: \"e564b02f-7c51-411f-9c17-6a8e9aa357d0\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-v4gvv" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.004706 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e564b02f-7c51-411f-9c17-6a8e9aa357d0-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-7c9c75f6cc-v4gvv\" (UID: \"e564b02f-7c51-411f-9c17-6a8e9aa357d0\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-v4gvv" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.004737 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/e564b02f-7c51-411f-9c17-6a8e9aa357d0-rbac\") pod \"logging-loki-gateway-7c9c75f6cc-v4gvv\" (UID: \"e564b02f-7c51-411f-9c17-6a8e9aa357d0\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-v4gvv" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.004768 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/570b42ae-35db-456f-933f-728031536759-lokistack-gateway\") pod \"logging-loki-gateway-7c9c75f6cc-rxvv4\" (UID: \"570b42ae-35db-456f-933f-728031536759\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-rxvv4" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.004792 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcxqv\" (UniqueName: \"kubernetes.io/projected/570b42ae-35db-456f-933f-728031536759-kube-api-access-bcxqv\") pod \"logging-loki-gateway-7c9c75f6cc-rxvv4\" (UID: \"570b42ae-35db-456f-933f-728031536759\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-rxvv4" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.004832 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/570b42ae-35db-456f-933f-728031536759-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-7c9c75f6cc-rxvv4\" (UID: \"570b42ae-35db-456f-933f-728031536759\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-rxvv4" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.004859 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/570b42ae-35db-456f-933f-728031536759-rbac\") pod \"logging-loki-gateway-7c9c75f6cc-rxvv4\" (UID: \"570b42ae-35db-456f-933f-728031536759\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-rxvv4" Dec 09 12:19:13 crc kubenswrapper[4970]: E1209 12:19:13.005613 4970 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Dec 09 12:19:13 crc kubenswrapper[4970]: E1209 12:19:13.005692 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e564b02f-7c51-411f-9c17-6a8e9aa357d0-tls-secret podName:e564b02f-7c51-411f-9c17-6a8e9aa357d0 nodeName:}" failed. No retries permitted until 2025-12-09 12:19:13.505671302 +0000 UTC m=+766.066152353 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/e564b02f-7c51-411f-9c17-6a8e9aa357d0-tls-secret") pod "logging-loki-gateway-7c9c75f6cc-v4gvv" (UID: "e564b02f-7c51-411f-9c17-6a8e9aa357d0") : secret "logging-loki-gateway-http" not found Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.006091 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/e564b02f-7c51-411f-9c17-6a8e9aa357d0-lokistack-gateway\") pod \"logging-loki-gateway-7c9c75f6cc-v4gvv\" (UID: \"e564b02f-7c51-411f-9c17-6a8e9aa357d0\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-v4gvv" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.006656 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e564b02f-7c51-411f-9c17-6a8e9aa357d0-logging-loki-ca-bundle\") pod \"logging-loki-gateway-7c9c75f6cc-v4gvv\" (UID: \"e564b02f-7c51-411f-9c17-6a8e9aa357d0\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-v4gvv" Dec 09 12:19:13 crc kubenswrapper[4970]: E1209 12:19:13.006746 4970 configmap.go:193] Couldn't get configMap openshift-logging/logging-loki-gateway-ca-bundle: configmap "logging-loki-gateway-ca-bundle" not found Dec 09 12:19:13 crc kubenswrapper[4970]: E1209 12:19:13.006782 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e564b02f-7c51-411f-9c17-6a8e9aa357d0-logging-loki-gateway-ca-bundle podName:e564b02f-7c51-411f-9c17-6a8e9aa357d0 nodeName:}" failed. No retries permitted until 2025-12-09 12:19:13.506769232 +0000 UTC m=+766.067250283 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "logging-loki-gateway-ca-bundle" (UniqueName: "kubernetes.io/configmap/e564b02f-7c51-411f-9c17-6a8e9aa357d0-logging-loki-gateway-ca-bundle") pod "logging-loki-gateway-7c9c75f6cc-v4gvv" (UID: "e564b02f-7c51-411f-9c17-6a8e9aa357d0") : configmap "logging-loki-gateway-ca-bundle" not found Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.007346 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/e564b02f-7c51-411f-9c17-6a8e9aa357d0-rbac\") pod \"logging-loki-gateway-7c9c75f6cc-v4gvv\" (UID: \"e564b02f-7c51-411f-9c17-6a8e9aa357d0\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-v4gvv" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.013301 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/e564b02f-7c51-411f-9c17-6a8e9aa357d0-tenants\") pod \"logging-loki-gateway-7c9c75f6cc-v4gvv\" (UID: \"e564b02f-7c51-411f-9c17-6a8e9aa357d0\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-v4gvv" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.014066 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/e564b02f-7c51-411f-9c17-6a8e9aa357d0-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-7c9c75f6cc-v4gvv\" (UID: \"e564b02f-7c51-411f-9c17-6a8e9aa357d0\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-v4gvv" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.041290 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbx24\" (UniqueName: \"kubernetes.io/projected/e564b02f-7c51-411f-9c17-6a8e9aa357d0-kube-api-access-xbx24\") pod \"logging-loki-gateway-7c9c75f6cc-v4gvv\" (UID: \"e564b02f-7c51-411f-9c17-6a8e9aa357d0\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-v4gvv" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.106783 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/570b42ae-35db-456f-933f-728031536759-logging-loki-ca-bundle\") pod \"logging-loki-gateway-7c9c75f6cc-rxvv4\" (UID: \"570b42ae-35db-456f-933f-728031536759\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-rxvv4" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.106909 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/570b42ae-35db-456f-933f-728031536759-lokistack-gateway\") pod \"logging-loki-gateway-7c9c75f6cc-rxvv4\" (UID: \"570b42ae-35db-456f-933f-728031536759\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-rxvv4" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.106934 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcxqv\" (UniqueName: \"kubernetes.io/projected/570b42ae-35db-456f-933f-728031536759-kube-api-access-bcxqv\") pod \"logging-loki-gateway-7c9c75f6cc-rxvv4\" (UID: \"570b42ae-35db-456f-933f-728031536759\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-rxvv4" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.106972 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/570b42ae-35db-456f-933f-728031536759-logging-loki-gateway-ca-bundle\") pod 
\"logging-loki-gateway-7c9c75f6cc-rxvv4\" (UID: \"570b42ae-35db-456f-933f-728031536759\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-rxvv4" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.106995 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/570b42ae-35db-456f-933f-728031536759-rbac\") pod \"logging-loki-gateway-7c9c75f6cc-rxvv4\" (UID: \"570b42ae-35db-456f-933f-728031536759\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-rxvv4" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.107019 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/570b42ae-35db-456f-933f-728031536759-tls-secret\") pod \"logging-loki-gateway-7c9c75f6cc-rxvv4\" (UID: \"570b42ae-35db-456f-933f-728031536759\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-rxvv4" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.107044 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/570b42ae-35db-456f-933f-728031536759-tenants\") pod \"logging-loki-gateway-7c9c75f6cc-rxvv4\" (UID: \"570b42ae-35db-456f-933f-728031536759\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-rxvv4" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.107070 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/570b42ae-35db-456f-933f-728031536759-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-7c9c75f6cc-rxvv4\" (UID: \"570b42ae-35db-456f-933f-728031536759\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-rxvv4" Dec 09 12:19:13 crc kubenswrapper[4970]: E1209 12:19:13.107716 4970 configmap.go:193] Couldn't get configMap openshift-logging/logging-loki-gateway-ca-bundle: configmap "logging-loki-gateway-ca-bundle" not found Dec 09 12:19:13 crc kubenswrapper[4970]: E1209 12:19:13.107790 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/570b42ae-35db-456f-933f-728031536759-logging-loki-gateway-ca-bundle podName:570b42ae-35db-456f-933f-728031536759 nodeName:}" failed. No retries permitted until 2025-12-09 12:19:13.607772441 +0000 UTC m=+766.168253492 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "logging-loki-gateway-ca-bundle" (UniqueName: "kubernetes.io/configmap/570b42ae-35db-456f-933f-728031536759-logging-loki-gateway-ca-bundle") pod "logging-loki-gateway-7c9c75f6cc-rxvv4" (UID: "570b42ae-35db-456f-933f-728031536759") : configmap "logging-loki-gateway-ca-bundle" not found Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.108703 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/570b42ae-35db-456f-933f-728031536759-logging-loki-ca-bundle\") pod \"logging-loki-gateway-7c9c75f6cc-rxvv4\" (UID: \"570b42ae-35db-456f-933f-728031536759\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-rxvv4" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.109446 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/570b42ae-35db-456f-933f-728031536759-lokistack-gateway\") pod \"logging-loki-gateway-7c9c75f6cc-rxvv4\" (UID: \"570b42ae-35db-456f-933f-728031536759\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-rxvv4" Dec 09 12:19:13 crc kubenswrapper[4970]: E1209 12:19:13.109885 4970 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Dec 09 12:19:13 crc kubenswrapper[4970]: E1209 12:19:13.109923 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/570b42ae-35db-456f-933f-728031536759-tls-secret podName:570b42ae-35db-456f-933f-728031536759 nodeName:}" failed. No retries permitted until 2025-12-09 12:19:13.609909719 +0000 UTC m=+766.170390770 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/570b42ae-35db-456f-933f-728031536759-tls-secret") pod "logging-loki-gateway-7c9c75f6cc-rxvv4" (UID: "570b42ae-35db-456f-933f-728031536759") : secret "logging-loki-gateway-http" not found Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.110414 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/570b42ae-35db-456f-933f-728031536759-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-7c9c75f6cc-rxvv4\" (UID: \"570b42ae-35db-456f-933f-728031536759\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-rxvv4" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.110743 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/570b42ae-35db-456f-933f-728031536759-rbac\") pod \"logging-loki-gateway-7c9c75f6cc-rxvv4\" (UID: \"570b42ae-35db-456f-933f-728031536759\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-rxvv4" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.120177 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/570b42ae-35db-456f-933f-728031536759-tenants\") pod \"logging-loki-gateway-7c9c75f6cc-rxvv4\" (UID: \"570b42ae-35db-456f-933f-728031536759\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-rxvv4" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.130935 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcxqv\" (UniqueName: \"kubernetes.io/projected/570b42ae-35db-456f-933f-728031536759-kube-api-access-bcxqv\") pod \"logging-loki-gateway-7c9c75f6cc-rxvv4\" (UID: 
\"570b42ae-35db-456f-933f-728031536759\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-rxvv4" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.427860 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-kh68l"] Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.513133 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/e564b02f-7c51-411f-9c17-6a8e9aa357d0-tls-secret\") pod \"logging-loki-gateway-7c9c75f6cc-v4gvv\" (UID: \"e564b02f-7c51-411f-9c17-6a8e9aa357d0\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-v4gvv" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.513187 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e564b02f-7c51-411f-9c17-6a8e9aa357d0-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-7c9c75f6cc-v4gvv\" (UID: \"e564b02f-7c51-411f-9c17-6a8e9aa357d0\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-v4gvv" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.513906 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e564b02f-7c51-411f-9c17-6a8e9aa357d0-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-7c9c75f6cc-v4gvv\" (UID: \"e564b02f-7c51-411f-9c17-6a8e9aa357d0\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-v4gvv" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.516529 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-pj2ck"] Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.519716 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/e564b02f-7c51-411f-9c17-6a8e9aa357d0-tls-secret\") pod \"logging-loki-gateway-7c9c75f6cc-v4gvv\" (UID: \"e564b02f-7c51-411f-9c17-6a8e9aa357d0\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-v4gvv" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.525560 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-v4gvv" Dec 09 12:19:13 crc kubenswrapper[4970]: W1209 12:19:13.525801 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a4976c0_636d_4915_97ae_f8fe8cfebb95.slice/crio-b48e311bba4ea102e77e33f61560116b188954df4a85387fb723d37222c636f0 WatchSource:0}: Error finding container b48e311bba4ea102e77e33f61560116b188954df4a85387fb723d37222c636f0: Status 404 returned error can't find the container with id b48e311bba4ea102e77e33f61560116b188954df4a85387fb723d37222c636f0 Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.562900 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-zmggt"] Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.571509 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.572439 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.575076 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.575285 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.578376 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.615022 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/570b42ae-35db-456f-933f-728031536759-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-7c9c75f6cc-rxvv4\" (UID: \"570b42ae-35db-456f-933f-728031536759\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-rxvv4" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.615081 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/570b42ae-35db-456f-933f-728031536759-tls-secret\") pod \"logging-loki-gateway-7c9c75f6cc-rxvv4\" (UID: \"570b42ae-35db-456f-933f-728031536759\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-rxvv4" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.616258 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/570b42ae-35db-456f-933f-728031536759-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-7c9c75f6cc-rxvv4\" (UID: \"570b42ae-35db-456f-933f-728031536759\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-rxvv4" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.619384 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/570b42ae-35db-456f-933f-728031536759-tls-secret\") pod \"logging-loki-gateway-7c9c75f6cc-rxvv4\" (UID: \"570b42ae-35db-456f-933f-728031536759\") " pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-rxvv4" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.660417 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.661437 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.664649 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.665045 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.678115 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.717909 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/773f5e49-3672-4662-a3e7-cfddb3f3ded6-config\") pod \"logging-loki-ingester-0\" (UID: \"773f5e49-3672-4662-a3e7-cfddb3f3ded6\") " pod="openshift-logging/logging-loki-ingester-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.717951 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-34d6b7dc-1757-4e9c-8ca1-365a0e5f3719\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-34d6b7dc-1757-4e9c-8ca1-365a0e5f3719\") pod \"logging-loki-ingester-0\" (UID: \"773f5e49-3672-4662-a3e7-cfddb3f3ded6\") " pod="openshift-logging/logging-loki-ingester-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.717977 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq2vb\" (UniqueName: \"kubernetes.io/projected/773f5e49-3672-4662-a3e7-cfddb3f3ded6-kube-api-access-nq2vb\") pod \"logging-loki-ingester-0\" (UID: \"773f5e49-3672-4662-a3e7-cfddb3f3ded6\") " pod="openshift-logging/logging-loki-ingester-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.718041 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/773f5e49-3672-4662-a3e7-cfddb3f3ded6-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"773f5e49-3672-4662-a3e7-cfddb3f3ded6\") " pod="openshift-logging/logging-loki-ingester-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.718060 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/773f5e49-3672-4662-a3e7-cfddb3f3ded6-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"773f5e49-3672-4662-a3e7-cfddb3f3ded6\") " pod="openshift-logging/logging-loki-ingester-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.718076 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6423dc60-e3c7-4d59-af5e-5bff824b0511\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6423dc60-e3c7-4d59-af5e-5bff824b0511\") pod \"logging-loki-ingester-0\" (UID: \"773f5e49-3672-4662-a3e7-cfddb3f3ded6\") " pod="openshift-logging/logging-loki-ingester-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.718099 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/773f5e49-3672-4662-a3e7-cfddb3f3ded6-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"773f5e49-3672-4662-a3e7-cfddb3f3ded6\") " 
pod="openshift-logging/logging-loki-ingester-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.718141 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/773f5e49-3672-4662-a3e7-cfddb3f3ded6-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"773f5e49-3672-4662-a3e7-cfddb3f3ded6\") " pod="openshift-logging/logging-loki-ingester-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.722921 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.723829 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.728810 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.729139 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.733824 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.821228 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/773f5e49-3672-4662-a3e7-cfddb3f3ded6-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"773f5e49-3672-4662-a3e7-cfddb3f3ded6\") " pod="openshift-logging/logging-loki-ingester-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.821539 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/509f9ecc-ffea-4205-b9af-3fca1ca6f58d-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"509f9ecc-ffea-4205-b9af-3fca1ca6f58d\") " pod="openshift-logging/logging-loki-compactor-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.821679 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq2vb\" (UniqueName: \"kubernetes.io/projected/773f5e49-3672-4662-a3e7-cfddb3f3ded6-kube-api-access-nq2vb\") pod \"logging-loki-ingester-0\" (UID: \"773f5e49-3672-4662-a3e7-cfddb3f3ded6\") " pod="openshift-logging/logging-loki-ingester-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.821744 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/509f9ecc-ffea-4205-b9af-3fca1ca6f58d-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"509f9ecc-ffea-4205-b9af-3fca1ca6f58d\") " pod="openshift-logging/logging-loki-compactor-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.821775 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/509f9ecc-ffea-4205-b9af-3fca1ca6f58d-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"509f9ecc-ffea-4205-b9af-3fca1ca6f58d\") " pod="openshift-logging/logging-loki-compactor-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.821825 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95ea220-561f-4069-8a68-cda2d45c834b-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"c95ea220-561f-4069-8a68-cda2d45c834b\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.821852 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/c95ea220-561f-4069-8a68-cda2d45c834b-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"c95ea220-561f-4069-8a68-cda2d45c834b\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.821882 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/773f5e49-3672-4662-a3e7-cfddb3f3ded6-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"773f5e49-3672-4662-a3e7-cfddb3f3ded6\") " pod="openshift-logging/logging-loki-ingester-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.821907 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/509f9ecc-ffea-4205-b9af-3fca1ca6f58d-config\") pod \"logging-loki-compactor-0\" (UID: \"509f9ecc-ffea-4205-b9af-3fca1ca6f58d\") " pod="openshift-logging/logging-loki-compactor-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.822022 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/773f5e49-3672-4662-a3e7-cfddb3f3ded6-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"773f5e49-3672-4662-a3e7-cfddb3f3ded6\") " pod="openshift-logging/logging-loki-ingester-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.822086 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p697f\" (UniqueName: \"kubernetes.io/projected/c95ea220-561f-4069-8a68-cda2d45c834b-kube-api-access-p697f\") pod \"logging-loki-index-gateway-0\" (UID: \"c95ea220-561f-4069-8a68-cda2d45c834b\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.822309 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6d868cee-7190-4ddd-8be2-c5615ee47f2d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6d868cee-7190-4ddd-8be2-c5615ee47f2d\") pod \"logging-loki-compactor-0\" (UID: \"509f9ecc-ffea-4205-b9af-3fca1ca6f58d\") " pod="openshift-logging/logging-loki-compactor-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.822362 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/773f5e49-3672-4662-a3e7-cfddb3f3ded6-config\") pod \"logging-loki-ingester-0\" (UID: \"773f5e49-3672-4662-a3e7-cfddb3f3ded6\") " pod="openshift-logging/logging-loki-ingester-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.822390 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-34d6b7dc-1757-4e9c-8ca1-365a0e5f3719\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-34d6b7dc-1757-4e9c-8ca1-365a0e5f3719\") pod \"logging-loki-ingester-0\" (UID: 
\"773f5e49-3672-4662-a3e7-cfddb3f3ded6\") " pod="openshift-logging/logging-loki-ingester-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.822460 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hz28\" (UniqueName: \"kubernetes.io/projected/509f9ecc-ffea-4205-b9af-3fca1ca6f58d-kube-api-access-9hz28\") pod \"logging-loki-compactor-0\" (UID: \"509f9ecc-ffea-4205-b9af-3fca1ca6f58d\") " pod="openshift-logging/logging-loki-compactor-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.822522 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c95ea220-561f-4069-8a68-cda2d45c834b-config\") pod \"logging-loki-index-gateway-0\" (UID: \"c95ea220-561f-4069-8a68-cda2d45c834b\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.822548 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/c95ea220-561f-4069-8a68-cda2d45c834b-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"c95ea220-561f-4069-8a68-cda2d45c834b\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.822573 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-eb682481-aadb-408a-b2e0-691d36d7d4fb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eb682481-aadb-408a-b2e0-691d36d7d4fb\") pod \"logging-loki-index-gateway-0\" (UID: \"c95ea220-561f-4069-8a68-cda2d45c834b\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.822635 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/509f9ecc-ffea-4205-b9af-3fca1ca6f58d-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"509f9ecc-ffea-4205-b9af-3fca1ca6f58d\") " pod="openshift-logging/logging-loki-compactor-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.822709 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/c95ea220-561f-4069-8a68-cda2d45c834b-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"c95ea220-561f-4069-8a68-cda2d45c834b\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.822782 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6423dc60-e3c7-4d59-af5e-5bff824b0511\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6423dc60-e3c7-4d59-af5e-5bff824b0511\") pod \"logging-loki-ingester-0\" (UID: \"773f5e49-3672-4662-a3e7-cfddb3f3ded6\") " pod="openshift-logging/logging-loki-ingester-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.822823 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/773f5e49-3672-4662-a3e7-cfddb3f3ded6-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"773f5e49-3672-4662-a3e7-cfddb3f3ded6\") " pod="openshift-logging/logging-loki-ingester-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 
12:19:13.823115 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/773f5e49-3672-4662-a3e7-cfddb3f3ded6-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"773f5e49-3672-4662-a3e7-cfddb3f3ded6\") " pod="openshift-logging/logging-loki-ingester-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.823690 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/773f5e49-3672-4662-a3e7-cfddb3f3ded6-config\") pod \"logging-loki-ingester-0\" (UID: \"773f5e49-3672-4662-a3e7-cfddb3f3ded6\") " pod="openshift-logging/logging-loki-ingester-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.824523 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/773f5e49-3672-4662-a3e7-cfddb3f3ded6-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"773f5e49-3672-4662-a3e7-cfddb3f3ded6\") " pod="openshift-logging/logging-loki-ingester-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.825023 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/773f5e49-3672-4662-a3e7-cfddb3f3ded6-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"773f5e49-3672-4662-a3e7-cfddb3f3ded6\") " pod="openshift-logging/logging-loki-ingester-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.827916 4970 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.827957 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6423dc60-e3c7-4d59-af5e-5bff824b0511\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6423dc60-e3c7-4d59-af5e-5bff824b0511\") pod \"logging-loki-ingester-0\" (UID: \"773f5e49-3672-4662-a3e7-cfddb3f3ded6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/cc1bebce94731f4e92cb2eab693d8df1a45920f1c9ca24f8a0e0cc6a6fc029bc/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.833553 4970 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.833711 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-34d6b7dc-1757-4e9c-8ca1-365a0e5f3719\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-34d6b7dc-1757-4e9c-8ca1-365a0e5f3719\") pod \"logging-loki-ingester-0\" (UID: \"773f5e49-3672-4662-a3e7-cfddb3f3ded6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6d8adb0cf421fa1ee372182f11553552956b42cedf3857f0933f0addf9ad8f12/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.838726 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/773f5e49-3672-4662-a3e7-cfddb3f3ded6-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"773f5e49-3672-4662-a3e7-cfddb3f3ded6\") " pod="openshift-logging/logging-loki-ingester-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.842237 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq2vb\" (UniqueName: \"kubernetes.io/projected/773f5e49-3672-4662-a3e7-cfddb3f3ded6-kube-api-access-nq2vb\") pod \"logging-loki-ingester-0\" (UID: \"773f5e49-3672-4662-a3e7-cfddb3f3ded6\") " pod="openshift-logging/logging-loki-ingester-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.861836 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-34d6b7dc-1757-4e9c-8ca1-365a0e5f3719\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-34d6b7dc-1757-4e9c-8ca1-365a0e5f3719\") pod \"logging-loki-ingester-0\" (UID: \"773f5e49-3672-4662-a3e7-cfddb3f3ded6\") " pod="openshift-logging/logging-loki-ingester-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.870828 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6423dc60-e3c7-4d59-af5e-5bff824b0511\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6423dc60-e3c7-4d59-af5e-5bff824b0511\") pod \"logging-loki-ingester-0\" (UID: \"773f5e49-3672-4662-a3e7-cfddb3f3ded6\") " pod="openshift-logging/logging-loki-ingester-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.871664 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-rxvv4" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.892890 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.924311 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p697f\" (UniqueName: \"kubernetes.io/projected/c95ea220-561f-4069-8a68-cda2d45c834b-kube-api-access-p697f\") pod \"logging-loki-index-gateway-0\" (UID: \"c95ea220-561f-4069-8a68-cda2d45c834b\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.924550 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6d868cee-7190-4ddd-8be2-c5615ee47f2d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6d868cee-7190-4ddd-8be2-c5615ee47f2d\") pod \"logging-loki-compactor-0\" (UID: \"509f9ecc-ffea-4205-b9af-3fca1ca6f58d\") " pod="openshift-logging/logging-loki-compactor-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.924583 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hz28\" (UniqueName: \"kubernetes.io/projected/509f9ecc-ffea-4205-b9af-3fca1ca6f58d-kube-api-access-9hz28\") pod \"logging-loki-compactor-0\" (UID: \"509f9ecc-ffea-4205-b9af-3fca1ca6f58d\") " pod="openshift-logging/logging-loki-compactor-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.924608 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c95ea220-561f-4069-8a68-cda2d45c834b-config\") pod \"logging-loki-index-gateway-0\" (UID: \"c95ea220-561f-4069-8a68-cda2d45c834b\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.924624 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/c95ea220-561f-4069-8a68-cda2d45c834b-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"c95ea220-561f-4069-8a68-cda2d45c834b\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.924644 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-eb682481-aadb-408a-b2e0-691d36d7d4fb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eb682481-aadb-408a-b2e0-691d36d7d4fb\") pod \"logging-loki-index-gateway-0\" (UID: \"c95ea220-561f-4069-8a68-cda2d45c834b\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.924666 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/509f9ecc-ffea-4205-b9af-3fca1ca6f58d-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"509f9ecc-ffea-4205-b9af-3fca1ca6f58d\") " pod="openshift-logging/logging-loki-compactor-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.924693 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/c95ea220-561f-4069-8a68-cda2d45c834b-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"c95ea220-561f-4069-8a68-cda2d45c834b\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.924737 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: 
\"kubernetes.io/secret/509f9ecc-ffea-4205-b9af-3fca1ca6f58d-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"509f9ecc-ffea-4205-b9af-3fca1ca6f58d\") " pod="openshift-logging/logging-loki-compactor-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.924759 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/509f9ecc-ffea-4205-b9af-3fca1ca6f58d-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"509f9ecc-ffea-4205-b9af-3fca1ca6f58d\") " pod="openshift-logging/logging-loki-compactor-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.924776 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/509f9ecc-ffea-4205-b9af-3fca1ca6f58d-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"509f9ecc-ffea-4205-b9af-3fca1ca6f58d\") " pod="openshift-logging/logging-loki-compactor-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.924803 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c95ea220-561f-4069-8a68-cda2d45c834b-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"c95ea220-561f-4069-8a68-cda2d45c834b\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.924821 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/c95ea220-561f-4069-8a68-cda2d45c834b-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"c95ea220-561f-4069-8a68-cda2d45c834b\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.924841 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/509f9ecc-ffea-4205-b9af-3fca1ca6f58d-config\") pod \"logging-loki-compactor-0\" (UID: \"509f9ecc-ffea-4205-b9af-3fca1ca6f58d\") " pod="openshift-logging/logging-loki-compactor-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.926134 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/509f9ecc-ffea-4205-b9af-3fca1ca6f58d-config\") pod \"logging-loki-compactor-0\" (UID: \"509f9ecc-ffea-4205-b9af-3fca1ca6f58d\") " pod="openshift-logging/logging-loki-compactor-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.926351 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/509f9ecc-ffea-4205-b9af-3fca1ca6f58d-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"509f9ecc-ffea-4205-b9af-3fca1ca6f58d\") " pod="openshift-logging/logging-loki-compactor-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.926600 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c95ea220-561f-4069-8a68-cda2d45c834b-config\") pod \"logging-loki-index-gateway-0\" (UID: \"c95ea220-561f-4069-8a68-cda2d45c834b\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.927454 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c95ea220-561f-4069-8a68-cda2d45c834b-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"c95ea220-561f-4069-8a68-cda2d45c834b\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.929331 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/509f9ecc-ffea-4205-b9af-3fca1ca6f58d-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"509f9ecc-ffea-4205-b9af-3fca1ca6f58d\") " pod="openshift-logging/logging-loki-compactor-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.929867 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/c95ea220-561f-4069-8a68-cda2d45c834b-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"c95ea220-561f-4069-8a68-cda2d45c834b\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.934724 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/c95ea220-561f-4069-8a68-cda2d45c834b-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"c95ea220-561f-4069-8a68-cda2d45c834b\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.934819 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/509f9ecc-ffea-4205-b9af-3fca1ca6f58d-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"509f9ecc-ffea-4205-b9af-3fca1ca6f58d\") " pod="openshift-logging/logging-loki-compactor-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.935058 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/509f9ecc-ffea-4205-b9af-3fca1ca6f58d-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"509f9ecc-ffea-4205-b9af-3fca1ca6f58d\") " pod="openshift-logging/logging-loki-compactor-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.935328 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/c95ea220-561f-4069-8a68-cda2d45c834b-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"c95ea220-561f-4069-8a68-cda2d45c834b\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.936404 4970 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.936436 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6d868cee-7190-4ddd-8be2-c5615ee47f2d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6d868cee-7190-4ddd-8be2-c5615ee47f2d\") pod \"logging-loki-compactor-0\" (UID: \"509f9ecc-ffea-4205-b9af-3fca1ca6f58d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7c99507852a927653efb643b007113ef5f8aca27f94ca9c9e0b0744279703b40/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.938381 4970 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.938438 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-eb682481-aadb-408a-b2e0-691d36d7d4fb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eb682481-aadb-408a-b2e0-691d36d7d4fb\") pod \"logging-loki-index-gateway-0\" (UID: \"c95ea220-561f-4069-8a68-cda2d45c834b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c5065f524e63bb927865a70f5cd4befe23e0a4f3094d15510d6bcb36e02a6b59/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.952374 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hz28\" (UniqueName: \"kubernetes.io/projected/509f9ecc-ffea-4205-b9af-3fca1ca6f58d-kube-api-access-9hz28\") pod \"logging-loki-compactor-0\" (UID: \"509f9ecc-ffea-4205-b9af-3fca1ca6f58d\") " pod="openshift-logging/logging-loki-compactor-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.954541 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p697f\" (UniqueName: \"kubernetes.io/projected/c95ea220-561f-4069-8a68-cda2d45c834b-kube-api-access-p697f\") pod \"logging-loki-index-gateway-0\" (UID: \"c95ea220-561f-4069-8a68-cda2d45c834b\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.968603 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-eb682481-aadb-408a-b2e0-691d36d7d4fb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-eb682481-aadb-408a-b2e0-691d36d7d4fb\") pod \"logging-loki-index-gateway-0\" (UID: \"c95ea220-561f-4069-8a68-cda2d45c834b\") " pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 12:19:13 crc kubenswrapper[4970]: I1209 12:19:13.972096 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-7c9c75f6cc-v4gvv"] Dec 09 12:19:14 crc kubenswrapper[4970]: I1209 12:19:14.004426 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-5895d59bb8-pj2ck" event={"ID":"5a4976c0-636d-4915-97ae-f8fe8cfebb95","Type":"ContainerStarted","Data":"b48e311bba4ea102e77e33f61560116b188954df4a85387fb723d37222c636f0"} Dec 09 12:19:14 crc kubenswrapper[4970]: I1209 12:19:14.005876 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-76cc67bf56-kh68l" event={"ID":"8f98b088-8ae1-4d5a-9917-36d3c95bf08f","Type":"ContainerStarted","Data":"e42d7577423b080bcfa5853addaab8155086f66a1cc6ddbbf19eb49f76850830"} Dec 09 12:19:14 crc kubenswrapper[4970]: I1209 12:19:14.006209 4970 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6d868cee-7190-4ddd-8be2-c5615ee47f2d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6d868cee-7190-4ddd-8be2-c5615ee47f2d\") pod \"logging-loki-compactor-0\" (UID: \"509f9ecc-ffea-4205-b9af-3fca1ca6f58d\") " pod="openshift-logging/logging-loki-compactor-0" Dec 09 12:19:14 crc kubenswrapper[4970]: I1209 12:19:14.007913 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zmggt" event={"ID":"6c30caa0-938a-4ffc-b8e2-0c418d04e6f7","Type":"ContainerStarted","Data":"1405e14238780aef0f82a090a9ae358c40264740f9f9a92a00a7cd57c07b2716"} Dec 09 12:19:14 crc kubenswrapper[4970]: W1209 12:19:14.007968 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode564b02f_7c51_411f_9c17_6a8e9aa357d0.slice/crio-65bd938bfae432b4f3b7bb578ca4188d355cef3435e39fa13049fcd5ba9504ea WatchSource:0}: Error finding container 65bd938bfae432b4f3b7bb578ca4188d355cef3435e39fa13049fcd5ba9504ea: Status 404 returned error can't find the container with id 65bd938bfae432b4f3b7bb578ca4188d355cef3435e39fa13049fcd5ba9504ea Dec 09 12:19:14 crc kubenswrapper[4970]: I1209 12:19:14.054050 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 12:19:14 crc kubenswrapper[4970]: I1209 12:19:14.282833 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Dec 09 12:19:14 crc kubenswrapper[4970]: I1209 12:19:14.311979 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-7c9c75f6cc-rxvv4"] Dec 09 12:19:14 crc kubenswrapper[4970]: W1209 12:19:14.317612 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod570b42ae_35db_456f_933f_728031536759.slice/crio-5c3ad8f8ae4cc3755ec35c44c0e2eff9b80230f84bdbe7eefe85201fdccdd410 WatchSource:0}: Error finding container 5c3ad8f8ae4cc3755ec35c44c0e2eff9b80230f84bdbe7eefe85201fdccdd410: Status 404 returned error can't find the container with id 5c3ad8f8ae4cc3755ec35c44c0e2eff9b80230f84bdbe7eefe85201fdccdd410 Dec 09 12:19:14 crc kubenswrapper[4970]: I1209 12:19:14.351430 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Dec 09 12:19:14 crc kubenswrapper[4970]: W1209 12:19:14.358436 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod773f5e49_3672_4662_a3e7_cfddb3f3ded6.slice/crio-ec8b26b8aa1cd8c0b27ba4d3ed65d17163d3fdbf2d1c1bd3e81fee28399fbe8a WatchSource:0}: Error finding container ec8b26b8aa1cd8c0b27ba4d3ed65d17163d3fdbf2d1c1bd3e81fee28399fbe8a: Status 404 returned error can't find the container with id ec8b26b8aa1cd8c0b27ba4d3ed65d17163d3fdbf2d1c1bd3e81fee28399fbe8a Dec 09 12:19:14 crc kubenswrapper[4970]: I1209 12:19:14.459978 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Dec 09 12:19:14 crc kubenswrapper[4970]: I1209 12:19:14.477616 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Dec 09 12:19:14 crc kubenswrapper[4970]: W1209 12:19:14.486521 4970 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod509f9ecc_ffea_4205_b9af_3fca1ca6f58d.slice/crio-e20b97491592cbca47c4ce766ed1656d7c1744f2d93b50fe3b262bd786690fea WatchSource:0}: Error finding container e20b97491592cbca47c4ce766ed1656d7c1744f2d93b50fe3b262bd786690fea: Status 404 returned error can't find the container with id e20b97491592cbca47c4ce766ed1656d7c1744f2d93b50fe3b262bd786690fea Dec 09 12:19:15 crc kubenswrapper[4970]: I1209 12:19:15.016844 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-rxvv4" event={"ID":"570b42ae-35db-456f-933f-728031536759","Type":"ContainerStarted","Data":"5c3ad8f8ae4cc3755ec35c44c0e2eff9b80230f84bdbe7eefe85201fdccdd410"} Dec 09 12:19:15 crc kubenswrapper[4970]: I1209 12:19:15.017774 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"509f9ecc-ffea-4205-b9af-3fca1ca6f58d","Type":"ContainerStarted","Data":"e20b97491592cbca47c4ce766ed1656d7c1744f2d93b50fe3b262bd786690fea"} Dec 09 12:19:15 crc kubenswrapper[4970]: I1209 12:19:15.018455 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-v4gvv" event={"ID":"e564b02f-7c51-411f-9c17-6a8e9aa357d0","Type":"ContainerStarted","Data":"65bd938bfae432b4f3b7bb578ca4188d355cef3435e39fa13049fcd5ba9504ea"} Dec 09 12:19:15 crc kubenswrapper[4970]: I1209 12:19:15.019349 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"c95ea220-561f-4069-8a68-cda2d45c834b","Type":"ContainerStarted","Data":"dae4bf9125e561e061296d205374e92d1515c143723f11e2573fcf166026dcb5"} Dec 09 12:19:15 crc kubenswrapper[4970]: I1209 12:19:15.020506 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"773f5e49-3672-4662-a3e7-cfddb3f3ded6","Type":"ContainerStarted","Data":"ec8b26b8aa1cd8c0b27ba4d3ed65d17163d3fdbf2d1c1bd3e81fee28399fbe8a"} Dec 09 12:19:16 crc kubenswrapper[4970]: I1209 12:19:16.010750 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:19:16 crc kubenswrapper[4970]: I1209 12:19:16.011302 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:19:16 crc kubenswrapper[4970]: I1209 12:19:16.011367 4970 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" Dec 09 12:19:16 crc kubenswrapper[4970]: I1209 12:19:16.012195 4970 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8a43c19be68e1c39db2fa2acbdc1174785af9071889cf2a0715c26d8f86ac8be"} pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 12:19:16 crc kubenswrapper[4970]: I1209 12:19:16.012350 4970 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" containerID="cri-o://8a43c19be68e1c39db2fa2acbdc1174785af9071889cf2a0715c26d8f86ac8be" gracePeriod=600 Dec 09 12:19:23 crc kubenswrapper[4970]: I1209 12:19:23.794486 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-daemon-rtdjh_a283668d-a884-4d62-95e2-1f0ae672f61c/machine-config-daemon/3.log" Dec 09 12:19:23 crc kubenswrapper[4970]: I1209 12:19:23.795864 4970 generic.go:334] "Generic (PLEG): container finished" podID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerID="8a43c19be68e1c39db2fa2acbdc1174785af9071889cf2a0715c26d8f86ac8be" exitCode=-1 Dec 09 12:19:23 crc kubenswrapper[4970]: I1209 12:19:23.795914 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerDied","Data":"8a43c19be68e1c39db2fa2acbdc1174785af9071889cf2a0715c26d8f86ac8be"} Dec 09 12:19:23 crc kubenswrapper[4970]: I1209 12:19:23.795962 4970 scope.go:117] "RemoveContainer" containerID="414e7804f598b3aefedf34c5b0fdde4ed9406e7d4b0a5b8d5d6b44c8067d40b9" Dec 09 12:19:26 crc kubenswrapper[4970]: E1209 12:19:26.088072 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage846010270/1\": happened during read: context canceled" image="registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:1bd60df77d8be8eae3551f68a3a55a464610be839b0c0556600c7f1a36887919" Dec 09 12:19:26 crc kubenswrapper[4970]: E1209 12:19:26.088572 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-distributor,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:1bd60df77d8be8eae3551f68a3a55a464610be839b0c0556600c7f1a36887919,Command:[],Args:[-target=distributor -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml 
-config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:gossip-ring,HostPort:0,ContainerPort:7946,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logging-loki-distributor-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logging-loki-distributor-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logging-loki-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zdjp8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000690000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod logging-loki-distributor-76cc67bf56-kh68l_openshift-logging(8f98b088-8ae1-4d5a-9917-36d3c95bf08f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage846010270/1\": happened during read: context canceled" logger="UnhandledError" Dec 09 12:19:26 crc kubenswrapper[4970]: E1209 12:19:26.090214 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-distributor\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \\\"/var/tmp/container_images_storage846010270/1\\\": happened during read: context canceled\"" pod="openshift-logging/logging-loki-distributor-76cc67bf56-kh68l" podUID="8f98b088-8ae1-4d5a-9917-36d3c95bf08f" Dec 09 12:19:26 crc kubenswrapper[4970]: E1209 12:19:26.868000 4970 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"loki-distributor\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:1bd60df77d8be8eae3551f68a3a55a464610be839b0c0556600c7f1a36887919\\\"\"" pod="openshift-logging/logging-loki-distributor-76cc67bf56-kh68l" podUID="8f98b088-8ae1-4d5a-9917-36d3c95bf08f" Dec 09 12:19:31 crc kubenswrapper[4970]: I1209 12:19:31.861646 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerStarted","Data":"cba24c2dd5398483042c9e88f615ca653704b38e348947e32056cf594c3cf93e"} Dec 09 12:19:31 crc kubenswrapper[4970]: I1209 12:19:31.864193 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"c95ea220-561f-4069-8a68-cda2d45c834b","Type":"ContainerStarted","Data":"9bc468dada71c39b008a079bc457b0c142cd71546beb41150d4886f669dae7b8"} Dec 09 12:19:31 crc kubenswrapper[4970]: I1209 12:19:31.864364 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 12:19:31 crc kubenswrapper[4970]: I1209 12:19:31.865645 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-5895d59bb8-pj2ck" event={"ID":"5a4976c0-636d-4915-97ae-f8fe8cfebb95","Type":"ContainerStarted","Data":"3c03b70e82a9eb51e39ecf04dab5bc77c5cf2dd04a729625e8393109bf89ffc1"} Dec 09 12:19:31 crc kubenswrapper[4970]: I1209 12:19:31.865784 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-5895d59bb8-pj2ck" Dec 09 12:19:31 crc kubenswrapper[4970]: I1209 12:19:31.867496 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"773f5e49-3672-4662-a3e7-cfddb3f3ded6","Type":"ContainerStarted","Data":"5834e1bd10f0b428375fba4b552c085084e345e60f16ddd25f2c83890150a5e8"} Dec 09 12:19:31 crc kubenswrapper[4970]: I1209 12:19:31.867613 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Dec 09 12:19:31 crc kubenswrapper[4970]: I1209 12:19:31.868800 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-rxvv4" event={"ID":"570b42ae-35db-456f-933f-728031536759","Type":"ContainerStarted","Data":"6ba93ce71dbcb15613c271a0bcd973a6057640f5e4ae859fc618c8568ee725f4"} Dec 09 12:19:31 crc kubenswrapper[4970]: I1209 12:19:31.870494 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"509f9ecc-ffea-4205-b9af-3fca1ca6f58d","Type":"ContainerStarted","Data":"cd14974b625badf2a45ea77d7893ff5a40f3ae9a7a54079adf06d3afa6045d69"} Dec 09 12:19:31 crc kubenswrapper[4970]: I1209 12:19:31.870582 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Dec 09 12:19:31 crc kubenswrapper[4970]: I1209 12:19:31.872362 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-v4gvv" event={"ID":"e564b02f-7c51-411f-9c17-6a8e9aa357d0","Type":"ContainerStarted","Data":"481b088f44c65f54d7709d0772864f55e32e45b42903e572a7e2698c5e30fdc0"} Dec 09 12:19:31 crc kubenswrapper[4970]: I1209 12:19:31.873931 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zmggt" event={"ID":"6c30caa0-938a-4ffc-b8e2-0c418d04e6f7","Type":"ContainerStarted","Data":"db30118d48727985eaedde3fe8affccf735026eaeceb4b2c0ad1786cf9cbba92"} Dec 09 12:19:31 crc kubenswrapper[4970]: I1209 12:19:31.874111 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zmggt" Dec 09 12:19:31 crc kubenswrapper[4970]: I1209 12:19:31.899758 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-5895d59bb8-pj2ck" podStartSLOduration=2.21131243 podStartE2EDuration="19.899740713s" podCreationTimestamp="2025-12-09 12:19:12 +0000 UTC" firstStartedPulling="2025-12-09 12:19:13.540354094 +0000 UTC m=+766.100835155" lastFinishedPulling="2025-12-09 12:19:31.228782387 +0000 UTC m=+783.789263438" observedRunningTime="2025-12-09 12:19:31.894896744 +0000 UTC m=+784.455377795" watchObservedRunningTime="2025-12-09 12:19:31.899740713 +0000 UTC m=+784.460221764" Dec 09 12:19:31 crc kubenswrapper[4970]: I1209 12:19:31.918421 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zmggt" podStartSLOduration=2.280250946 podStartE2EDuration="19.918404721s" podCreationTimestamp="2025-12-09 12:19:12 +0000 UTC" firstStartedPulling="2025-12-09 12:19:13.580404599 +0000 UTC m=+766.140885650" lastFinishedPulling="2025-12-09 12:19:31.218558384 +0000 UTC m=+783.779039425" observedRunningTime="2025-12-09 12:19:31.912789451 +0000 UTC m=+784.473270502" watchObservedRunningTime="2025-12-09 12:19:31.918404721 +0000 UTC m=+784.478885772" Dec 09 12:19:31 crc kubenswrapper[4970]: I1209 12:19:31.934729 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.34935675 podStartE2EDuration="19.934704556s" podCreationTimestamp="2025-12-09 12:19:12 +0000 UTC" firstStartedPulling="2025-12-09 12:19:14.488871976 +0000 UTC m=+767.049353027" lastFinishedPulling="2025-12-09 12:19:31.074219782 +0000 UTC m=+783.634700833" observedRunningTime="2025-12-09 12:19:31.9307362 +0000 UTC m=+784.491217261" watchObservedRunningTime="2025-12-09 12:19:31.934704556 +0000 UTC m=+784.495185607" Dec 09 12:19:31 crc kubenswrapper[4970]: I1209 12:19:31.967192 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.215266248 podStartE2EDuration="19.967174203s" podCreationTimestamp="2025-12-09 12:19:12 +0000 UTC" firstStartedPulling="2025-12-09 12:19:14.466582368 +0000 UTC m=+767.027063419" lastFinishedPulling="2025-12-09 12:19:31.218490273 +0000 UTC m=+783.778971374" observedRunningTime="2025-12-09 12:19:31.960435363 +0000 UTC m=+784.520916544" watchObservedRunningTime="2025-12-09 12:19:31.967174203 +0000 UTC m=+784.527655254" Dec 09 12:19:33 crc kubenswrapper[4970]: I1209 12:19:33.892987 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-v4gvv" event={"ID":"e564b02f-7c51-411f-9c17-6a8e9aa357d0","Type":"ContainerStarted","Data":"883885bfc2eb65c568da9d031200b6b026c87f2c978721e12868aa84ffa2b85b"} Dec 09 12:19:33 crc kubenswrapper[4970]: I1209 12:19:33.893581 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-v4gvv" Dec 09 12:19:33 crc kubenswrapper[4970]: I1209 12:19:33.893602 
4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-v4gvv" Dec 09 12:19:33 crc kubenswrapper[4970]: I1209 12:19:33.894664 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-rxvv4" event={"ID":"570b42ae-35db-456f-933f-728031536759","Type":"ContainerStarted","Data":"50f28104ad359ee9cae50d92e20be93ec90d0c0278022bb1be48ab16a8ee2e88"} Dec 09 12:19:33 crc kubenswrapper[4970]: I1209 12:19:33.894875 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-rxvv4" Dec 09 12:19:33 crc kubenswrapper[4970]: I1209 12:19:33.907649 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-v4gvv" Dec 09 12:19:33 crc kubenswrapper[4970]: I1209 12:19:33.908072 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-rxvv4" Dec 09 12:19:33 crc kubenswrapper[4970]: I1209 12:19:33.910502 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-v4gvv" Dec 09 12:19:33 crc kubenswrapper[4970]: I1209 12:19:33.920088 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=5.066295963 podStartE2EDuration="21.920068889s" podCreationTimestamp="2025-12-09 12:19:12 +0000 UTC" firstStartedPulling="2025-12-09 12:19:14.365696913 +0000 UTC m=+766.926177964" lastFinishedPulling="2025-12-09 12:19:31.219469839 +0000 UTC m=+783.779950890" observedRunningTime="2025-12-09 12:19:31.982659856 +0000 UTC m=+784.543140907" watchObservedRunningTime="2025-12-09 12:19:33.920068889 +0000 UTC m=+786.480549940" Dec 09 12:19:33 crc kubenswrapper[4970]: I1209 12:19:33.939957 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-v4gvv" podStartSLOduration=3.048510677 podStartE2EDuration="21.939937499s" podCreationTimestamp="2025-12-09 12:19:12 +0000 UTC" firstStartedPulling="2025-12-09 12:19:14.010520736 +0000 UTC m=+766.571001777" lastFinishedPulling="2025-12-09 12:19:32.901947548 +0000 UTC m=+785.462428599" observedRunningTime="2025-12-09 12:19:33.918307232 +0000 UTC m=+786.478788283" watchObservedRunningTime="2025-12-09 12:19:33.939937499 +0000 UTC m=+786.500418550" Dec 09 12:19:33 crc kubenswrapper[4970]: I1209 12:19:33.940094 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-rxvv4" podStartSLOduration=3.35842159 podStartE2EDuration="21.940089213s" podCreationTimestamp="2025-12-09 12:19:12 +0000 UTC" firstStartedPulling="2025-12-09 12:19:14.321367244 +0000 UTC m=+766.881848295" lastFinishedPulling="2025-12-09 12:19:32.903034867 +0000 UTC m=+785.463515918" observedRunningTime="2025-12-09 12:19:33.936889428 +0000 UTC m=+786.497370479" watchObservedRunningTime="2025-12-09 12:19:33.940089213 +0000 UTC m=+786.500570284" Dec 09 12:19:34 crc kubenswrapper[4970]: I1209 12:19:34.902624 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-rxvv4" Dec 09 12:19:34 crc kubenswrapper[4970]: I1209 12:19:34.916080 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-7c9c75f6cc-rxvv4" 
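The machine-config-daemon sequence from 12:19:16 to 12:19:31 above is the standard kubelet liveness-failure path: the prober logs the failed HTTP GET against 127.0.0.1:8798/health (connection refused), the sync loop marks the container unhealthy, kuberuntime kills it with the pod's 600s grace period, PLEG reports ContainerDied once the old instance exits, the prior container record (414e7804...) is garbage-collected via RemoveContainer, and a replacement (cba24c2d...) starts. The paired readiness entries for the gateway pods (status="" followed by status="ready") show the probe result cache being reset on container start and then flipping to ready. A sketch of the liveness probe spec implied by that output, using the k8s.io/api types; only the path and port come from the log, the period and threshold are assumptions:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Probe reconstructed from the failure output: an HTTP GET against
	// 127.0.0.1:8798/health. Only the path and port come from the log;
	// the period and threshold below are illustrative assumptions.
	liveness := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path: "/health",
				Port: intstr.FromInt32(8798),
			},
		},
		PeriodSeconds:    10, // assumption
		FailureThreshold: 3,  // assumption
	}
	fmt.Printf("liveness probe: %+v\n", liveness.ProbeHandler.HTTPGet)
}
```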
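The 12:19:26 failure for the distributor is an image pull cancelled mid-download ("copying system image from manifest list: writing blob ... context canceled"), after which the pod moves to ImagePullBackOff; the retried pull succeeds and the loki-distributor container finally starts at 12:19:41.949. Kubelet spaces such retries with per-container exponential backoff; the sketch below uses the commonly cited 10s initial / 300s cap defaults, which are assumptions, not values read from this log:

```go
package main

import (
	"fmt"
	"time"
)

// backoffSchedule illustrates kubelet-style image-pull backoff: each
// consecutive failure doubles the wait, capped at a maximum. The 10s/300s
// values passed in main are assumed defaults, not taken from the log.
func backoffSchedule(initial, max time.Duration, failures int) []time.Duration {
	var waits []time.Duration
	d := initial
	for i := 0; i < failures; i++ {
		waits = append(waits, d)
		d *= 2
		if d > max {
			d = max
		}
	}
	return waits
}

func main() {
	fmt.Println(backoffSchedule(10*time.Second, 300*time.Second, 6))
	// [10s 20s 40s 1m20s 2m40s 5m0s]
}
```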
Dec 09 12:19:41 crc kubenswrapper[4970]: I1209 12:19:41.949036 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-76cc67bf56-kh68l" event={"ID":"8f98b088-8ae1-4d5a-9917-36d3c95bf08f","Type":"ContainerStarted","Data":"f4b7a215a1a8609339ccd66b81d41b8e7dd3da23f0d95b2ee14ac4c8a5e83fd3"} Dec 09 12:19:41 crc kubenswrapper[4970]: I1209 12:19:41.950173 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-76cc67bf56-kh68l" Dec 09 12:19:41 crc kubenswrapper[4970]: I1209 12:19:41.972590 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-76cc67bf56-kh68l" podStartSLOduration=-9223372006.882208 podStartE2EDuration="29.972568574s" podCreationTimestamp="2025-12-09 12:19:12 +0000 UTC" firstStartedPulling="2025-12-09 12:19:13.433373405 +0000 UTC m=+765.993854456" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:19:41.96977915 +0000 UTC m=+794.530260201" watchObservedRunningTime="2025-12-09 12:19:41.972568574 +0000 UTC m=+794.533049625" Dec 09 12:19:52 crc kubenswrapper[4970]: I1209 12:19:52.916018 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-5895d59bb8-pj2ck" Dec 09 12:19:53 crc kubenswrapper[4970]: I1209 12:19:53.002936 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-zmggt" Dec 09 12:19:53 crc kubenswrapper[4970]: I1209 12:19:53.897757 4970 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Dec 09 12:19:53 crc kubenswrapper[4970]: I1209 12:19:53.898100 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="773f5e49-3672-4662-a3e7-cfddb3f3ded6" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 09 12:19:54 crc kubenswrapper[4970]: I1209 12:19:54.064300 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Dec 09 12:19:54 crc kubenswrapper[4970]: I1209 12:19:54.289386 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Dec 09 12:20:02 crc kubenswrapper[4970]: I1209 12:20:02.784783 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-76cc67bf56-kh68l" Dec 09 12:20:03 crc kubenswrapper[4970]: I1209 12:20:03.897404 4970 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Dec 09 12:20:03 crc kubenswrapper[4970]: I1209 12:20:03.898517 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="773f5e49-3672-4662-a3e7-cfddb3f3ded6" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 09 12:20:13 crc kubenswrapper[4970]: I1209 12:20:13.899101 4970 patch_prober.go:28] interesting pod/logging-loki-ingester-0 
container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Dec 09 12:20:13 crc kubenswrapper[4970]: I1209 12:20:13.900420 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="773f5e49-3672-4662-a3e7-cfddb3f3ded6" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 09 12:20:23 crc kubenswrapper[4970]: I1209 12:20:23.897713 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Dec 09 12:20:25 crc kubenswrapper[4970]: I1209 12:20:25.456891 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mjnfx"] Dec 09 12:20:25 crc kubenswrapper[4970]: I1209 12:20:25.459736 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mjnfx" Dec 09 12:20:25 crc kubenswrapper[4970]: I1209 12:20:25.489668 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mjnfx"] Dec 09 12:20:25 crc kubenswrapper[4970]: I1209 12:20:25.497937 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwtq6\" (UniqueName: \"kubernetes.io/projected/ab2f3090-38c2-46df-a717-503643a85a34-kube-api-access-cwtq6\") pod \"redhat-marketplace-mjnfx\" (UID: \"ab2f3090-38c2-46df-a717-503643a85a34\") " pod="openshift-marketplace/redhat-marketplace-mjnfx" Dec 09 12:20:25 crc kubenswrapper[4970]: I1209 12:20:25.498036 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab2f3090-38c2-46df-a717-503643a85a34-catalog-content\") pod \"redhat-marketplace-mjnfx\" (UID: \"ab2f3090-38c2-46df-a717-503643a85a34\") " pod="openshift-marketplace/redhat-marketplace-mjnfx" Dec 09 12:20:25 crc kubenswrapper[4970]: I1209 12:20:25.498089 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab2f3090-38c2-46df-a717-503643a85a34-utilities\") pod \"redhat-marketplace-mjnfx\" (UID: \"ab2f3090-38c2-46df-a717-503643a85a34\") " pod="openshift-marketplace/redhat-marketplace-mjnfx" Dec 09 12:20:25 crc kubenswrapper[4970]: I1209 12:20:25.600006 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwtq6\" (UniqueName: \"kubernetes.io/projected/ab2f3090-38c2-46df-a717-503643a85a34-kube-api-access-cwtq6\") pod \"redhat-marketplace-mjnfx\" (UID: \"ab2f3090-38c2-46df-a717-503643a85a34\") " pod="openshift-marketplace/redhat-marketplace-mjnfx" Dec 09 12:20:25 crc kubenswrapper[4970]: I1209 12:20:25.600082 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab2f3090-38c2-46df-a717-503643a85a34-catalog-content\") pod \"redhat-marketplace-mjnfx\" (UID: \"ab2f3090-38c2-46df-a717-503643a85a34\") " pod="openshift-marketplace/redhat-marketplace-mjnfx" Dec 09 12:20:25 crc kubenswrapper[4970]: I1209 12:20:25.600122 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab2f3090-38c2-46df-a717-503643a85a34-utilities\") pod 
\"redhat-marketplace-mjnfx\" (UID: \"ab2f3090-38c2-46df-a717-503643a85a34\") " pod="openshift-marketplace/redhat-marketplace-mjnfx" Dec 09 12:20:25 crc kubenswrapper[4970]: I1209 12:20:25.600655 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab2f3090-38c2-46df-a717-503643a85a34-catalog-content\") pod \"redhat-marketplace-mjnfx\" (UID: \"ab2f3090-38c2-46df-a717-503643a85a34\") " pod="openshift-marketplace/redhat-marketplace-mjnfx" Dec 09 12:20:25 crc kubenswrapper[4970]: I1209 12:20:25.600738 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab2f3090-38c2-46df-a717-503643a85a34-utilities\") pod \"redhat-marketplace-mjnfx\" (UID: \"ab2f3090-38c2-46df-a717-503643a85a34\") " pod="openshift-marketplace/redhat-marketplace-mjnfx" Dec 09 12:20:25 crc kubenswrapper[4970]: I1209 12:20:25.625065 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwtq6\" (UniqueName: \"kubernetes.io/projected/ab2f3090-38c2-46df-a717-503643a85a34-kube-api-access-cwtq6\") pod \"redhat-marketplace-mjnfx\" (UID: \"ab2f3090-38c2-46df-a717-503643a85a34\") " pod="openshift-marketplace/redhat-marketplace-mjnfx" Dec 09 12:20:25 crc kubenswrapper[4970]: I1209 12:20:25.784613 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mjnfx" Dec 09 12:20:26 crc kubenswrapper[4970]: I1209 12:20:26.254858 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mjnfx"] Dec 09 12:20:26 crc kubenswrapper[4970]: I1209 12:20:26.299106 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjnfx" event={"ID":"ab2f3090-38c2-46df-a717-503643a85a34","Type":"ContainerStarted","Data":"43b866529239c65a0e001f3aa87aba566f2c24f237924446497b0be9998d3211"} Dec 09 12:20:27 crc kubenswrapper[4970]: I1209 12:20:27.312809 4970 generic.go:334] "Generic (PLEG): container finished" podID="ab2f3090-38c2-46df-a717-503643a85a34" containerID="8876559a07c2d0eb57b7e9722b2b45a3c07943a190559b66c83e0ebae9f53e7c" exitCode=0 Dec 09 12:20:27 crc kubenswrapper[4970]: I1209 12:20:27.312902 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjnfx" event={"ID":"ab2f3090-38c2-46df-a717-503643a85a34","Type":"ContainerDied","Data":"8876559a07c2d0eb57b7e9722b2b45a3c07943a190559b66c83e0ebae9f53e7c"} Dec 09 12:20:29 crc kubenswrapper[4970]: I1209 12:20:29.331841 4970 generic.go:334] "Generic (PLEG): container finished" podID="ab2f3090-38c2-46df-a717-503643a85a34" containerID="6b75783b14f955bd15fe3453e8fa8841721b8d4df59621d11a6189fbef57f27c" exitCode=0 Dec 09 12:20:29 crc kubenswrapper[4970]: I1209 12:20:29.331955 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjnfx" event={"ID":"ab2f3090-38c2-46df-a717-503643a85a34","Type":"ContainerDied","Data":"6b75783b14f955bd15fe3453e8fa8841721b8d4df59621d11a6189fbef57f27c"} Dec 09 12:20:30 crc kubenswrapper[4970]: I1209 12:20:30.341394 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjnfx" event={"ID":"ab2f3090-38c2-46df-a717-503643a85a34","Type":"ContainerStarted","Data":"2211513a075c081eb6b53b802787a2967ecacf2f376d7c066eeab5929e9abe3b"} Dec 09 12:20:30 crc kubenswrapper[4970]: I1209 12:20:30.362412 4970 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mjnfx" podStartSLOduration=2.881040708 podStartE2EDuration="5.362391017s" podCreationTimestamp="2025-12-09 12:20:25 +0000 UTC" firstStartedPulling="2025-12-09 12:20:27.315322811 +0000 UTC m=+839.875803862" lastFinishedPulling="2025-12-09 12:20:29.79667312 +0000 UTC m=+842.357154171" observedRunningTime="2025-12-09 12:20:30.356240193 +0000 UTC m=+842.916721264" watchObservedRunningTime="2025-12-09 12:20:30.362391017 +0000 UTC m=+842.922872068" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.159706 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-mqjqf"] Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.161121 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-mqjqf" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.167397 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.167753 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.168329 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.169411 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-q5hl5" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.169637 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.174766 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/5ec328f6-fba4-4377-8827-998fbfb39c19-collector-syslog-receiver\") pod \"collector-mqjqf\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " pod="openshift-logging/collector-mqjqf" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.174819 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnsx5\" (UniqueName: \"kubernetes.io/projected/5ec328f6-fba4-4377-8827-998fbfb39c19-kube-api-access-bnsx5\") pod \"collector-mqjqf\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " pod="openshift-logging/collector-mqjqf" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.174847 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/5ec328f6-fba4-4377-8827-998fbfb39c19-datadir\") pod \"collector-mqjqf\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " pod="openshift-logging/collector-mqjqf" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.174878 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/5ec328f6-fba4-4377-8827-998fbfb39c19-entrypoint\") pod \"collector-mqjqf\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " pod="openshift-logging/collector-mqjqf" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.174894 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: 
\"kubernetes.io/secret/5ec328f6-fba4-4377-8827-998fbfb39c19-collector-token\") pod \"collector-mqjqf\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " pod="openshift-logging/collector-mqjqf" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.175221 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/5ec328f6-fba4-4377-8827-998fbfb39c19-sa-token\") pod \"collector-mqjqf\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " pod="openshift-logging/collector-mqjqf" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.175411 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5ec328f6-fba4-4377-8827-998fbfb39c19-tmp\") pod \"collector-mqjqf\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " pod="openshift-logging/collector-mqjqf" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.175537 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/5ec328f6-fba4-4377-8827-998fbfb39c19-metrics\") pod \"collector-mqjqf\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " pod="openshift-logging/collector-mqjqf" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.175605 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5ec328f6-fba4-4377-8827-998fbfb39c19-trusted-ca\") pod \"collector-mqjqf\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " pod="openshift-logging/collector-mqjqf" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.175735 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/5ec328f6-fba4-4377-8827-998fbfb39c19-config-openshift-service-cacrt\") pod \"collector-mqjqf\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " pod="openshift-logging/collector-mqjqf" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.175799 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ec328f6-fba4-4377-8827-998fbfb39c19-config\") pod \"collector-mqjqf\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " pod="openshift-logging/collector-mqjqf" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.178055 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.193476 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-mqjqf"] Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.265827 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-mqjqf"] Dec 09 12:20:34 crc kubenswrapper[4970]: E1209 12:20:34.267020 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-bnsx5 metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-mqjqf" podUID="5ec328f6-fba4-4377-8827-998fbfb39c19" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.282321 4970 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/5ec328f6-fba4-4377-8827-998fbfb39c19-sa-token\") pod \"collector-mqjqf\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " pod="openshift-logging/collector-mqjqf" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.282426 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5ec328f6-fba4-4377-8827-998fbfb39c19-tmp\") pod \"collector-mqjqf\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " pod="openshift-logging/collector-mqjqf" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.282453 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/5ec328f6-fba4-4377-8827-998fbfb39c19-metrics\") pod \"collector-mqjqf\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " pod="openshift-logging/collector-mqjqf" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.282496 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5ec328f6-fba4-4377-8827-998fbfb39c19-trusted-ca\") pod \"collector-mqjqf\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " pod="openshift-logging/collector-mqjqf" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.282573 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/5ec328f6-fba4-4377-8827-998fbfb39c19-config-openshift-service-cacrt\") pod \"collector-mqjqf\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " pod="openshift-logging/collector-mqjqf" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.282612 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ec328f6-fba4-4377-8827-998fbfb39c19-config\") pod \"collector-mqjqf\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " pod="openshift-logging/collector-mqjqf" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.282661 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/5ec328f6-fba4-4377-8827-998fbfb39c19-collector-syslog-receiver\") pod \"collector-mqjqf\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " pod="openshift-logging/collector-mqjqf" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.282689 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnsx5\" (UniqueName: \"kubernetes.io/projected/5ec328f6-fba4-4377-8827-998fbfb39c19-kube-api-access-bnsx5\") pod \"collector-mqjqf\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " pod="openshift-logging/collector-mqjqf" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.282737 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/5ec328f6-fba4-4377-8827-998fbfb39c19-datadir\") pod \"collector-mqjqf\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " pod="openshift-logging/collector-mqjqf" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.282764 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/5ec328f6-fba4-4377-8827-998fbfb39c19-entrypoint\") pod \"collector-mqjqf\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " 
pod="openshift-logging/collector-mqjqf" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.282782 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/5ec328f6-fba4-4377-8827-998fbfb39c19-collector-token\") pod \"collector-mqjqf\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " pod="openshift-logging/collector-mqjqf" Dec 09 12:20:34 crc kubenswrapper[4970]: E1209 12:20:34.283029 4970 secret.go:188] Couldn't get secret openshift-logging/collector-metrics: secret "collector-metrics" not found Dec 09 12:20:34 crc kubenswrapper[4970]: E1209 12:20:34.283110 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5ec328f6-fba4-4377-8827-998fbfb39c19-metrics podName:5ec328f6-fba4-4377-8827-998fbfb39c19 nodeName:}" failed. No retries permitted until 2025-12-09 12:20:34.783092268 +0000 UTC m=+847.343573319 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics" (UniqueName: "kubernetes.io/secret/5ec328f6-fba4-4377-8827-998fbfb39c19-metrics") pod "collector-mqjqf" (UID: "5ec328f6-fba4-4377-8827-998fbfb39c19") : secret "collector-metrics" not found Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.284191 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5ec328f6-fba4-4377-8827-998fbfb39c19-trusted-ca\") pod \"collector-mqjqf\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " pod="openshift-logging/collector-mqjqf" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.284239 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ec328f6-fba4-4377-8827-998fbfb39c19-config\") pod \"collector-mqjqf\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " pod="openshift-logging/collector-mqjqf" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.284342 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/5ec328f6-fba4-4377-8827-998fbfb39c19-datadir\") pod \"collector-mqjqf\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " pod="openshift-logging/collector-mqjqf" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.284798 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/5ec328f6-fba4-4377-8827-998fbfb39c19-config-openshift-service-cacrt\") pod \"collector-mqjqf\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " pod="openshift-logging/collector-mqjqf" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.285023 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/5ec328f6-fba4-4377-8827-998fbfb39c19-entrypoint\") pod \"collector-mqjqf\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " pod="openshift-logging/collector-mqjqf" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.291067 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/5ec328f6-fba4-4377-8827-998fbfb39c19-collector-token\") pod \"collector-mqjqf\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " pod="openshift-logging/collector-mqjqf" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.291526 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: 
\"kubernetes.io/secret/5ec328f6-fba4-4377-8827-998fbfb39c19-collector-syslog-receiver\") pod \"collector-mqjqf\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " pod="openshift-logging/collector-mqjqf" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.293811 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5ec328f6-fba4-4377-8827-998fbfb39c19-tmp\") pod \"collector-mqjqf\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " pod="openshift-logging/collector-mqjqf" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.299467 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/5ec328f6-fba4-4377-8827-998fbfb39c19-sa-token\") pod \"collector-mqjqf\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " pod="openshift-logging/collector-mqjqf" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.304693 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnsx5\" (UniqueName: \"kubernetes.io/projected/5ec328f6-fba4-4377-8827-998fbfb39c19-kube-api-access-bnsx5\") pod \"collector-mqjqf\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " pod="openshift-logging/collector-mqjqf" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.367979 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-mqjqf" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.378367 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-mqjqf" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.485598 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/5ec328f6-fba4-4377-8827-998fbfb39c19-config-openshift-service-cacrt\") pod \"5ec328f6-fba4-4377-8827-998fbfb39c19\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.485639 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/5ec328f6-fba4-4377-8827-998fbfb39c19-entrypoint\") pod \"5ec328f6-fba4-4377-8827-998fbfb39c19\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.485680 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/5ec328f6-fba4-4377-8827-998fbfb39c19-datadir\") pod \"5ec328f6-fba4-4377-8827-998fbfb39c19\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.485769 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5ec328f6-fba4-4377-8827-998fbfb39c19-trusted-ca\") pod \"5ec328f6-fba4-4377-8827-998fbfb39c19\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.485796 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/5ec328f6-fba4-4377-8827-998fbfb39c19-collector-token\") pod \"5ec328f6-fba4-4377-8827-998fbfb39c19\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.485815 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"sa-token\" (UniqueName: \"kubernetes.io/projected/5ec328f6-fba4-4377-8827-998fbfb39c19-sa-token\") pod \"5ec328f6-fba4-4377-8827-998fbfb39c19\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.485844 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnsx5\" (UniqueName: \"kubernetes.io/projected/5ec328f6-fba4-4377-8827-998fbfb39c19-kube-api-access-bnsx5\") pod \"5ec328f6-fba4-4377-8827-998fbfb39c19\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.485862 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5ec328f6-fba4-4377-8827-998fbfb39c19-tmp\") pod \"5ec328f6-fba4-4377-8827-998fbfb39c19\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.485880 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/5ec328f6-fba4-4377-8827-998fbfb39c19-collector-syslog-receiver\") pod \"5ec328f6-fba4-4377-8827-998fbfb39c19\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.485895 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ec328f6-fba4-4377-8827-998fbfb39c19-config\") pod \"5ec328f6-fba4-4377-8827-998fbfb39c19\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.486156 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ec328f6-fba4-4377-8827-998fbfb39c19-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "5ec328f6-fba4-4377-8827-998fbfb39c19" (UID: "5ec328f6-fba4-4377-8827-998fbfb39c19"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.486348 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ec328f6-fba4-4377-8827-998fbfb39c19-config" (OuterVolumeSpecName: "config") pod "5ec328f6-fba4-4377-8827-998fbfb39c19" (UID: "5ec328f6-fba4-4377-8827-998fbfb39c19"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.486503 4970 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/5ec328f6-fba4-4377-8827-998fbfb39c19-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.486519 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5ec328f6-fba4-4377-8827-998fbfb39c19-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.486647 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ec328f6-fba4-4377-8827-998fbfb39c19-datadir" (OuterVolumeSpecName: "datadir") pod "5ec328f6-fba4-4377-8827-998fbfb39c19" (UID: "5ec328f6-fba4-4377-8827-998fbfb39c19"). InnerVolumeSpecName "datadir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.487039 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ec328f6-fba4-4377-8827-998fbfb39c19-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "5ec328f6-fba4-4377-8827-998fbfb39c19" (UID: "5ec328f6-fba4-4377-8827-998fbfb39c19"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.487058 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ec328f6-fba4-4377-8827-998fbfb39c19-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "5ec328f6-fba4-4377-8827-998fbfb39c19" (UID: "5ec328f6-fba4-4377-8827-998fbfb39c19"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.490432 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ec328f6-fba4-4377-8827-998fbfb39c19-tmp" (OuterVolumeSpecName: "tmp") pod "5ec328f6-fba4-4377-8827-998fbfb39c19" (UID: "5ec328f6-fba4-4377-8827-998fbfb39c19"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.490516 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ec328f6-fba4-4377-8827-998fbfb39c19-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "5ec328f6-fba4-4377-8827-998fbfb39c19" (UID: "5ec328f6-fba4-4377-8827-998fbfb39c19"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.490572 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ec328f6-fba4-4377-8827-998fbfb39c19-sa-token" (OuterVolumeSpecName: "sa-token") pod "5ec328f6-fba4-4377-8827-998fbfb39c19" (UID: "5ec328f6-fba4-4377-8827-998fbfb39c19"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.490931 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ec328f6-fba4-4377-8827-998fbfb39c19-kube-api-access-bnsx5" (OuterVolumeSpecName: "kube-api-access-bnsx5") pod "5ec328f6-fba4-4377-8827-998fbfb39c19" (UID: "5ec328f6-fba4-4377-8827-998fbfb39c19"). InnerVolumeSpecName "kube-api-access-bnsx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.493404 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ec328f6-fba4-4377-8827-998fbfb39c19-collector-token" (OuterVolumeSpecName: "collector-token") pod "5ec328f6-fba4-4377-8827-998fbfb39c19" (UID: "5ec328f6-fba4-4377-8827-998fbfb39c19"). InnerVolumeSpecName "collector-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.588161 4970 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/5ec328f6-fba4-4377-8827-998fbfb39c19-entrypoint\") on node \"crc\" DevicePath \"\"" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.588193 4970 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/5ec328f6-fba4-4377-8827-998fbfb39c19-datadir\") on node \"crc\" DevicePath \"\"" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.588201 4970 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5ec328f6-fba4-4377-8827-998fbfb39c19-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.588209 4970 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/5ec328f6-fba4-4377-8827-998fbfb39c19-collector-token\") on node \"crc\" DevicePath \"\"" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.588218 4970 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/5ec328f6-fba4-4377-8827-998fbfb39c19-sa-token\") on node \"crc\" DevicePath \"\"" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.588227 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnsx5\" (UniqueName: \"kubernetes.io/projected/5ec328f6-fba4-4377-8827-998fbfb39c19-kube-api-access-bnsx5\") on node \"crc\" DevicePath \"\"" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.588235 4970 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5ec328f6-fba4-4377-8827-998fbfb39c19-tmp\") on node \"crc\" DevicePath \"\"" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.588257 4970 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/5ec328f6-fba4-4377-8827-998fbfb39c19-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.791591 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/5ec328f6-fba4-4377-8827-998fbfb39c19-metrics\") pod \"collector-mqjqf\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " pod="openshift-logging/collector-mqjqf" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.795453 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/5ec328f6-fba4-4377-8827-998fbfb39c19-metrics\") pod \"collector-mqjqf\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " pod="openshift-logging/collector-mqjqf" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.892492 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/5ec328f6-fba4-4377-8827-998fbfb39c19-metrics\") pod \"5ec328f6-fba4-4377-8827-998fbfb39c19\" (UID: \"5ec328f6-fba4-4377-8827-998fbfb39c19\") " Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.895150 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ec328f6-fba4-4377-8827-998fbfb39c19-metrics" (OuterVolumeSpecName: "metrics") pod "5ec328f6-fba4-4377-8827-998fbfb39c19" (UID: "5ec328f6-fba4-4377-8827-998fbfb39c19"). 
InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:20:34 crc kubenswrapper[4970]: I1209 12:20:34.994170 4970 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/5ec328f6-fba4-4377-8827-998fbfb39c19-metrics\") on node \"crc\" DevicePath \"\"" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.374648 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-mqjqf" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.431334 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-mqjqf"] Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.439763 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-mqjqf"] Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.450665 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-42tlr"] Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.451936 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-42tlr" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.455419 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-42tlr"] Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.455761 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-q5hl5" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.456418 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.458682 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.458921 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.459151 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.466580 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.501643 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/f7587e06-2e8b-4f39-a00d-edcda46d5cdf-entrypoint\") pod \"collector-42tlr\" (UID: \"f7587e06-2e8b-4f39-a00d-edcda46d5cdf\") " pod="openshift-logging/collector-42tlr" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.501683 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7587e06-2e8b-4f39-a00d-edcda46d5cdf-config\") pod \"collector-42tlr\" (UID: \"f7587e06-2e8b-4f39-a00d-edcda46d5cdf\") " pod="openshift-logging/collector-42tlr" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.501709 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/f7587e06-2e8b-4f39-a00d-edcda46d5cdf-config-openshift-service-cacrt\") pod \"collector-42tlr\" (UID: \"f7587e06-2e8b-4f39-a00d-edcda46d5cdf\") " pod="openshift-logging/collector-42tlr" Dec 09 
12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.501729 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f7587e06-2e8b-4f39-a00d-edcda46d5cdf-tmp\") pod \"collector-42tlr\" (UID: \"f7587e06-2e8b-4f39-a00d-edcda46d5cdf\") " pod="openshift-logging/collector-42tlr" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.501752 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f7587e06-2e8b-4f39-a00d-edcda46d5cdf-trusted-ca\") pod \"collector-42tlr\" (UID: \"f7587e06-2e8b-4f39-a00d-edcda46d5cdf\") " pod="openshift-logging/collector-42tlr" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.501880 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/f7587e06-2e8b-4f39-a00d-edcda46d5cdf-datadir\") pod \"collector-42tlr\" (UID: \"f7587e06-2e8b-4f39-a00d-edcda46d5cdf\") " pod="openshift-logging/collector-42tlr" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.501924 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/f7587e06-2e8b-4f39-a00d-edcda46d5cdf-collector-token\") pod \"collector-42tlr\" (UID: \"f7587e06-2e8b-4f39-a00d-edcda46d5cdf\") " pod="openshift-logging/collector-42tlr" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.501982 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mx8g\" (UniqueName: \"kubernetes.io/projected/f7587e06-2e8b-4f39-a00d-edcda46d5cdf-kube-api-access-7mx8g\") pod \"collector-42tlr\" (UID: \"f7587e06-2e8b-4f39-a00d-edcda46d5cdf\") " pod="openshift-logging/collector-42tlr" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.502055 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/f7587e06-2e8b-4f39-a00d-edcda46d5cdf-sa-token\") pod \"collector-42tlr\" (UID: \"f7587e06-2e8b-4f39-a00d-edcda46d5cdf\") " pod="openshift-logging/collector-42tlr" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.502105 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/f7587e06-2e8b-4f39-a00d-edcda46d5cdf-collector-syslog-receiver\") pod \"collector-42tlr\" (UID: \"f7587e06-2e8b-4f39-a00d-edcda46d5cdf\") " pod="openshift-logging/collector-42tlr" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.502135 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/f7587e06-2e8b-4f39-a00d-edcda46d5cdf-metrics\") pod \"collector-42tlr\" (UID: \"f7587e06-2e8b-4f39-a00d-edcda46d5cdf\") " pod="openshift-logging/collector-42tlr" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.603035 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mx8g\" (UniqueName: \"kubernetes.io/projected/f7587e06-2e8b-4f39-a00d-edcda46d5cdf-kube-api-access-7mx8g\") pod \"collector-42tlr\" (UID: \"f7587e06-2e8b-4f39-a00d-edcda46d5cdf\") " pod="openshift-logging/collector-42tlr" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.603122 4970 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/f7587e06-2e8b-4f39-a00d-edcda46d5cdf-sa-token\") pod \"collector-42tlr\" (UID: \"f7587e06-2e8b-4f39-a00d-edcda46d5cdf\") " pod="openshift-logging/collector-42tlr" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.603164 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/f7587e06-2e8b-4f39-a00d-edcda46d5cdf-collector-syslog-receiver\") pod \"collector-42tlr\" (UID: \"f7587e06-2e8b-4f39-a00d-edcda46d5cdf\") " pod="openshift-logging/collector-42tlr" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.603202 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/f7587e06-2e8b-4f39-a00d-edcda46d5cdf-metrics\") pod \"collector-42tlr\" (UID: \"f7587e06-2e8b-4f39-a00d-edcda46d5cdf\") " pod="openshift-logging/collector-42tlr" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.603296 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/f7587e06-2e8b-4f39-a00d-edcda46d5cdf-entrypoint\") pod \"collector-42tlr\" (UID: \"f7587e06-2e8b-4f39-a00d-edcda46d5cdf\") " pod="openshift-logging/collector-42tlr" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.603327 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7587e06-2e8b-4f39-a00d-edcda46d5cdf-config\") pod \"collector-42tlr\" (UID: \"f7587e06-2e8b-4f39-a00d-edcda46d5cdf\") " pod="openshift-logging/collector-42tlr" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.603370 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/f7587e06-2e8b-4f39-a00d-edcda46d5cdf-config-openshift-service-cacrt\") pod \"collector-42tlr\" (UID: \"f7587e06-2e8b-4f39-a00d-edcda46d5cdf\") " pod="openshift-logging/collector-42tlr" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.603398 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f7587e06-2e8b-4f39-a00d-edcda46d5cdf-tmp\") pod \"collector-42tlr\" (UID: \"f7587e06-2e8b-4f39-a00d-edcda46d5cdf\") " pod="openshift-logging/collector-42tlr" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.603427 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f7587e06-2e8b-4f39-a00d-edcda46d5cdf-trusted-ca\") pod \"collector-42tlr\" (UID: \"f7587e06-2e8b-4f39-a00d-edcda46d5cdf\") " pod="openshift-logging/collector-42tlr" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.604538 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/f7587e06-2e8b-4f39-a00d-edcda46d5cdf-config-openshift-service-cacrt\") pod \"collector-42tlr\" (UID: \"f7587e06-2e8b-4f39-a00d-edcda46d5cdf\") " pod="openshift-logging/collector-42tlr" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.604772 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/f7587e06-2e8b-4f39-a00d-edcda46d5cdf-datadir\") pod \"collector-42tlr\" (UID: \"f7587e06-2e8b-4f39-a00d-edcda46d5cdf\") " 
pod="openshift-logging/collector-42tlr" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.604969 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/f7587e06-2e8b-4f39-a00d-edcda46d5cdf-collector-token\") pod \"collector-42tlr\" (UID: \"f7587e06-2e8b-4f39-a00d-edcda46d5cdf\") " pod="openshift-logging/collector-42tlr" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.604877 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/f7587e06-2e8b-4f39-a00d-edcda46d5cdf-datadir\") pod \"collector-42tlr\" (UID: \"f7587e06-2e8b-4f39-a00d-edcda46d5cdf\") " pod="openshift-logging/collector-42tlr" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.604899 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/f7587e06-2e8b-4f39-a00d-edcda46d5cdf-entrypoint\") pod \"collector-42tlr\" (UID: \"f7587e06-2e8b-4f39-a00d-edcda46d5cdf\") " pod="openshift-logging/collector-42tlr" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.604986 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f7587e06-2e8b-4f39-a00d-edcda46d5cdf-trusted-ca\") pod \"collector-42tlr\" (UID: \"f7587e06-2e8b-4f39-a00d-edcda46d5cdf\") " pod="openshift-logging/collector-42tlr" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.604826 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7587e06-2e8b-4f39-a00d-edcda46d5cdf-config\") pod \"collector-42tlr\" (UID: \"f7587e06-2e8b-4f39-a00d-edcda46d5cdf\") " pod="openshift-logging/collector-42tlr" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.619650 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f7587e06-2e8b-4f39-a00d-edcda46d5cdf-tmp\") pod \"collector-42tlr\" (UID: \"f7587e06-2e8b-4f39-a00d-edcda46d5cdf\") " pod="openshift-logging/collector-42tlr" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.619841 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/f7587e06-2e8b-4f39-a00d-edcda46d5cdf-metrics\") pod \"collector-42tlr\" (UID: \"f7587e06-2e8b-4f39-a00d-edcda46d5cdf\") " pod="openshift-logging/collector-42tlr" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.619985 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/f7587e06-2e8b-4f39-a00d-edcda46d5cdf-collector-token\") pod \"collector-42tlr\" (UID: \"f7587e06-2e8b-4f39-a00d-edcda46d5cdf\") " pod="openshift-logging/collector-42tlr" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.620067 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/f7587e06-2e8b-4f39-a00d-edcda46d5cdf-collector-syslog-receiver\") pod \"collector-42tlr\" (UID: \"f7587e06-2e8b-4f39-a00d-edcda46d5cdf\") " pod="openshift-logging/collector-42tlr" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.624093 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mx8g\" (UniqueName: \"kubernetes.io/projected/f7587e06-2e8b-4f39-a00d-edcda46d5cdf-kube-api-access-7mx8g\") pod \"collector-42tlr\" (UID: 
\"f7587e06-2e8b-4f39-a00d-edcda46d5cdf\") " pod="openshift-logging/collector-42tlr" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.624912 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/f7587e06-2e8b-4f39-a00d-edcda46d5cdf-sa-token\") pod \"collector-42tlr\" (UID: \"f7587e06-2e8b-4f39-a00d-edcda46d5cdf\") " pod="openshift-logging/collector-42tlr" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.773331 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-42tlr" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.785413 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mjnfx" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.785541 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mjnfx" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.823156 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ec328f6-fba4-4377-8827-998fbfb39c19" path="/var/lib/kubelet/pods/5ec328f6-fba4-4377-8827-998fbfb39c19/volumes" Dec 09 12:20:35 crc kubenswrapper[4970]: I1209 12:20:35.835757 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mjnfx" Dec 09 12:20:36 crc kubenswrapper[4970]: I1209 12:20:36.239672 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-42tlr"] Dec 09 12:20:36 crc kubenswrapper[4970]: I1209 12:20:36.381943 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-42tlr" event={"ID":"f7587e06-2e8b-4f39-a00d-edcda46d5cdf","Type":"ContainerStarted","Data":"c89735105ec437093a403dc2434d9af7c8615bc7f3a9c59e3188b7411a539ed0"} Dec 09 12:20:36 crc kubenswrapper[4970]: I1209 12:20:36.423232 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mjnfx" Dec 09 12:20:36 crc kubenswrapper[4970]: I1209 12:20:36.474673 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mjnfx"] Dec 09 12:20:38 crc kubenswrapper[4970]: I1209 12:20:38.393985 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mjnfx" podUID="ab2f3090-38c2-46df-a717-503643a85a34" containerName="registry-server" containerID="cri-o://2211513a075c081eb6b53b802787a2967ecacf2f376d7c066eeab5929e9abe3b" gracePeriod=2 Dec 09 12:20:38 crc kubenswrapper[4970]: I1209 12:20:38.482380 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l44p8"] Dec 09 12:20:38 crc kubenswrapper[4970]: I1209 12:20:38.483845 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l44p8" Dec 09 12:20:38 crc kubenswrapper[4970]: I1209 12:20:38.499147 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l44p8"] Dec 09 12:20:38 crc kubenswrapper[4970]: I1209 12:20:38.563436 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/279ab0b4-f5a6-4150-ab8b-1deb21504a0a-utilities\") pod \"certified-operators-l44p8\" (UID: \"279ab0b4-f5a6-4150-ab8b-1deb21504a0a\") " pod="openshift-marketplace/certified-operators-l44p8" Dec 09 12:20:38 crc kubenswrapper[4970]: I1209 12:20:38.563772 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvkc9\" (UniqueName: \"kubernetes.io/projected/279ab0b4-f5a6-4150-ab8b-1deb21504a0a-kube-api-access-gvkc9\") pod \"certified-operators-l44p8\" (UID: \"279ab0b4-f5a6-4150-ab8b-1deb21504a0a\") " pod="openshift-marketplace/certified-operators-l44p8" Dec 09 12:20:38 crc kubenswrapper[4970]: I1209 12:20:38.563813 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/279ab0b4-f5a6-4150-ab8b-1deb21504a0a-catalog-content\") pod \"certified-operators-l44p8\" (UID: \"279ab0b4-f5a6-4150-ab8b-1deb21504a0a\") " pod="openshift-marketplace/certified-operators-l44p8" Dec 09 12:20:38 crc kubenswrapper[4970]: I1209 12:20:38.665299 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/279ab0b4-f5a6-4150-ab8b-1deb21504a0a-catalog-content\") pod \"certified-operators-l44p8\" (UID: \"279ab0b4-f5a6-4150-ab8b-1deb21504a0a\") " pod="openshift-marketplace/certified-operators-l44p8" Dec 09 12:20:38 crc kubenswrapper[4970]: I1209 12:20:38.665424 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/279ab0b4-f5a6-4150-ab8b-1deb21504a0a-utilities\") pod \"certified-operators-l44p8\" (UID: \"279ab0b4-f5a6-4150-ab8b-1deb21504a0a\") " pod="openshift-marketplace/certified-operators-l44p8" Dec 09 12:20:38 crc kubenswrapper[4970]: I1209 12:20:38.665444 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvkc9\" (UniqueName: \"kubernetes.io/projected/279ab0b4-f5a6-4150-ab8b-1deb21504a0a-kube-api-access-gvkc9\") pod \"certified-operators-l44p8\" (UID: \"279ab0b4-f5a6-4150-ab8b-1deb21504a0a\") " pod="openshift-marketplace/certified-operators-l44p8" Dec 09 12:20:38 crc kubenswrapper[4970]: I1209 12:20:38.666702 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/279ab0b4-f5a6-4150-ab8b-1deb21504a0a-catalog-content\") pod \"certified-operators-l44p8\" (UID: \"279ab0b4-f5a6-4150-ab8b-1deb21504a0a\") " pod="openshift-marketplace/certified-operators-l44p8" Dec 09 12:20:38 crc kubenswrapper[4970]: I1209 12:20:38.666943 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/279ab0b4-f5a6-4150-ab8b-1deb21504a0a-utilities\") pod \"certified-operators-l44p8\" (UID: \"279ab0b4-f5a6-4150-ab8b-1deb21504a0a\") " pod="openshift-marketplace/certified-operators-l44p8" Dec 09 12:20:38 crc kubenswrapper[4970]: I1209 12:20:38.688064 4970 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gvkc9\" (UniqueName: \"kubernetes.io/projected/279ab0b4-f5a6-4150-ab8b-1deb21504a0a-kube-api-access-gvkc9\") pod \"certified-operators-l44p8\" (UID: \"279ab0b4-f5a6-4150-ab8b-1deb21504a0a\") " pod="openshift-marketplace/certified-operators-l44p8" Dec 09 12:20:38 crc kubenswrapper[4970]: I1209 12:20:38.803416 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l44p8" Dec 09 12:20:38 crc kubenswrapper[4970]: I1209 12:20:38.819805 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mjnfx" Dec 09 12:20:38 crc kubenswrapper[4970]: I1209 12:20:38.868891 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab2f3090-38c2-46df-a717-503643a85a34-utilities\") pod \"ab2f3090-38c2-46df-a717-503643a85a34\" (UID: \"ab2f3090-38c2-46df-a717-503643a85a34\") " Dec 09 12:20:38 crc kubenswrapper[4970]: I1209 12:20:38.868966 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab2f3090-38c2-46df-a717-503643a85a34-catalog-content\") pod \"ab2f3090-38c2-46df-a717-503643a85a34\" (UID: \"ab2f3090-38c2-46df-a717-503643a85a34\") " Dec 09 12:20:38 crc kubenswrapper[4970]: I1209 12:20:38.869052 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwtq6\" (UniqueName: \"kubernetes.io/projected/ab2f3090-38c2-46df-a717-503643a85a34-kube-api-access-cwtq6\") pod \"ab2f3090-38c2-46df-a717-503643a85a34\" (UID: \"ab2f3090-38c2-46df-a717-503643a85a34\") " Dec 09 12:20:38 crc kubenswrapper[4970]: I1209 12:20:38.870052 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab2f3090-38c2-46df-a717-503643a85a34-utilities" (OuterVolumeSpecName: "utilities") pod "ab2f3090-38c2-46df-a717-503643a85a34" (UID: "ab2f3090-38c2-46df-a717-503643a85a34"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:20:38 crc kubenswrapper[4970]: I1209 12:20:38.872558 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab2f3090-38c2-46df-a717-503643a85a34-kube-api-access-cwtq6" (OuterVolumeSpecName: "kube-api-access-cwtq6") pod "ab2f3090-38c2-46df-a717-503643a85a34" (UID: "ab2f3090-38c2-46df-a717-503643a85a34"). InnerVolumeSpecName "kube-api-access-cwtq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:20:38 crc kubenswrapper[4970]: I1209 12:20:38.901093 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab2f3090-38c2-46df-a717-503643a85a34-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab2f3090-38c2-46df-a717-503643a85a34" (UID: "ab2f3090-38c2-46df-a717-503643a85a34"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:20:38 crc kubenswrapper[4970]: I1209 12:20:38.971275 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwtq6\" (UniqueName: \"kubernetes.io/projected/ab2f3090-38c2-46df-a717-503643a85a34-kube-api-access-cwtq6\") on node \"crc\" DevicePath \"\"" Dec 09 12:20:38 crc kubenswrapper[4970]: I1209 12:20:38.971320 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab2f3090-38c2-46df-a717-503643a85a34-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:20:38 crc kubenswrapper[4970]: I1209 12:20:38.971333 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab2f3090-38c2-46df-a717-503643a85a34-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:20:39 crc kubenswrapper[4970]: I1209 12:20:39.555628 4970 generic.go:334] "Generic (PLEG): container finished" podID="ab2f3090-38c2-46df-a717-503643a85a34" containerID="2211513a075c081eb6b53b802787a2967ecacf2f376d7c066eeab5929e9abe3b" exitCode=0 Dec 09 12:20:39 crc kubenswrapper[4970]: I1209 12:20:39.555777 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjnfx" event={"ID":"ab2f3090-38c2-46df-a717-503643a85a34","Type":"ContainerDied","Data":"2211513a075c081eb6b53b802787a2967ecacf2f376d7c066eeab5929e9abe3b"} Dec 09 12:20:39 crc kubenswrapper[4970]: I1209 12:20:39.555884 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjnfx" event={"ID":"ab2f3090-38c2-46df-a717-503643a85a34","Type":"ContainerDied","Data":"43b866529239c65a0e001f3aa87aba566f2c24f237924446497b0be9998d3211"} Dec 09 12:20:39 crc kubenswrapper[4970]: I1209 12:20:39.555900 4970 scope.go:117] "RemoveContainer" containerID="2211513a075c081eb6b53b802787a2967ecacf2f376d7c066eeab5929e9abe3b" Dec 09 12:20:39 crc kubenswrapper[4970]: I1209 12:20:39.555847 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mjnfx" Dec 09 12:20:39 crc kubenswrapper[4970]: I1209 12:20:39.587886 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mjnfx"] Dec 09 12:20:39 crc kubenswrapper[4970]: I1209 12:20:39.595886 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mjnfx"] Dec 09 12:20:39 crc kubenswrapper[4970]: I1209 12:20:39.608503 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l44p8"] Dec 09 12:20:39 crc kubenswrapper[4970]: I1209 12:20:39.610763 4970 scope.go:117] "RemoveContainer" containerID="6b75783b14f955bd15fe3453e8fa8841721b8d4df59621d11a6189fbef57f27c" Dec 09 12:20:39 crc kubenswrapper[4970]: I1209 12:20:39.677440 4970 scope.go:117] "RemoveContainer" containerID="8876559a07c2d0eb57b7e9722b2b45a3c07943a190559b66c83e0ebae9f53e7c" Dec 09 12:20:39 crc kubenswrapper[4970]: I1209 12:20:39.744233 4970 scope.go:117] "RemoveContainer" containerID="2211513a075c081eb6b53b802787a2967ecacf2f376d7c066eeab5929e9abe3b" Dec 09 12:20:39 crc kubenswrapper[4970]: E1209 12:20:39.745348 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2211513a075c081eb6b53b802787a2967ecacf2f376d7c066eeab5929e9abe3b\": container with ID starting with 2211513a075c081eb6b53b802787a2967ecacf2f376d7c066eeab5929e9abe3b not found: ID does not exist" containerID="2211513a075c081eb6b53b802787a2967ecacf2f376d7c066eeab5929e9abe3b" Dec 09 12:20:39 crc kubenswrapper[4970]: I1209 12:20:39.745379 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2211513a075c081eb6b53b802787a2967ecacf2f376d7c066eeab5929e9abe3b"} err="failed to get container status \"2211513a075c081eb6b53b802787a2967ecacf2f376d7c066eeab5929e9abe3b\": rpc error: code = NotFound desc = could not find container \"2211513a075c081eb6b53b802787a2967ecacf2f376d7c066eeab5929e9abe3b\": container with ID starting with 2211513a075c081eb6b53b802787a2967ecacf2f376d7c066eeab5929e9abe3b not found: ID does not exist" Dec 09 12:20:39 crc kubenswrapper[4970]: I1209 12:20:39.745400 4970 scope.go:117] "RemoveContainer" containerID="6b75783b14f955bd15fe3453e8fa8841721b8d4df59621d11a6189fbef57f27c" Dec 09 12:20:39 crc kubenswrapper[4970]: E1209 12:20:39.745765 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b75783b14f955bd15fe3453e8fa8841721b8d4df59621d11a6189fbef57f27c\": container with ID starting with 6b75783b14f955bd15fe3453e8fa8841721b8d4df59621d11a6189fbef57f27c not found: ID does not exist" containerID="6b75783b14f955bd15fe3453e8fa8841721b8d4df59621d11a6189fbef57f27c" Dec 09 12:20:39 crc kubenswrapper[4970]: I1209 12:20:39.745809 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b75783b14f955bd15fe3453e8fa8841721b8d4df59621d11a6189fbef57f27c"} err="failed to get container status \"6b75783b14f955bd15fe3453e8fa8841721b8d4df59621d11a6189fbef57f27c\": rpc error: code = NotFound desc = could not find container \"6b75783b14f955bd15fe3453e8fa8841721b8d4df59621d11a6189fbef57f27c\": container with ID starting with 6b75783b14f955bd15fe3453e8fa8841721b8d4df59621d11a6189fbef57f27c not found: ID does not exist" Dec 09 12:20:39 crc kubenswrapper[4970]: I1209 12:20:39.745841 4970 scope.go:117] "RemoveContainer" 
containerID="8876559a07c2d0eb57b7e9722b2b45a3c07943a190559b66c83e0ebae9f53e7c" Dec 09 12:20:39 crc kubenswrapper[4970]: E1209 12:20:39.746323 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8876559a07c2d0eb57b7e9722b2b45a3c07943a190559b66c83e0ebae9f53e7c\": container with ID starting with 8876559a07c2d0eb57b7e9722b2b45a3c07943a190559b66c83e0ebae9f53e7c not found: ID does not exist" containerID="8876559a07c2d0eb57b7e9722b2b45a3c07943a190559b66c83e0ebae9f53e7c" Dec 09 12:20:39 crc kubenswrapper[4970]: I1209 12:20:39.746355 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8876559a07c2d0eb57b7e9722b2b45a3c07943a190559b66c83e0ebae9f53e7c"} err="failed to get container status \"8876559a07c2d0eb57b7e9722b2b45a3c07943a190559b66c83e0ebae9f53e7c\": rpc error: code = NotFound desc = could not find container \"8876559a07c2d0eb57b7e9722b2b45a3c07943a190559b66c83e0ebae9f53e7c\": container with ID starting with 8876559a07c2d0eb57b7e9722b2b45a3c07943a190559b66c83e0ebae9f53e7c not found: ID does not exist" Dec 09 12:20:39 crc kubenswrapper[4970]: I1209 12:20:39.822341 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab2f3090-38c2-46df-a717-503643a85a34" path="/var/lib/kubelet/pods/ab2f3090-38c2-46df-a717-503643a85a34/volumes" Dec 09 12:20:40 crc kubenswrapper[4970]: I1209 12:20:40.568526 4970 generic.go:334] "Generic (PLEG): container finished" podID="279ab0b4-f5a6-4150-ab8b-1deb21504a0a" containerID="7b623982db0b34e8d14a65a84eb51900cb83ca0199ecaa69c12c77f0d63255cf" exitCode=0 Dec 09 12:20:40 crc kubenswrapper[4970]: I1209 12:20:40.568584 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l44p8" event={"ID":"279ab0b4-f5a6-4150-ab8b-1deb21504a0a","Type":"ContainerDied","Data":"7b623982db0b34e8d14a65a84eb51900cb83ca0199ecaa69c12c77f0d63255cf"} Dec 09 12:20:40 crc kubenswrapper[4970]: I1209 12:20:40.568644 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l44p8" event={"ID":"279ab0b4-f5a6-4150-ab8b-1deb21504a0a","Type":"ContainerStarted","Data":"8fff8d8cd43a1a1a5e78d99727d56b8e3033e6d094e3b57b4fa84708b95ba7bc"} Dec 09 12:20:45 crc kubenswrapper[4970]: I1209 12:20:45.612626 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-42tlr" event={"ID":"f7587e06-2e8b-4f39-a00d-edcda46d5cdf","Type":"ContainerStarted","Data":"b761ea14864ea64c899901afc2e2214fc67a58a9bd79e65413371d0d7430a4f0"} Dec 09 12:20:45 crc kubenswrapper[4970]: I1209 12:20:45.615497 4970 generic.go:334] "Generic (PLEG): container finished" podID="279ab0b4-f5a6-4150-ab8b-1deb21504a0a" containerID="7bbb33d8d4434bb6d4178c8385e98b17a014b1393dbd3885df30de672544a6fe" exitCode=0 Dec 09 12:20:45 crc kubenswrapper[4970]: I1209 12:20:45.615545 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l44p8" event={"ID":"279ab0b4-f5a6-4150-ab8b-1deb21504a0a","Type":"ContainerDied","Data":"7bbb33d8d4434bb6d4178c8385e98b17a014b1393dbd3885df30de672544a6fe"} Dec 09 12:20:45 crc kubenswrapper[4970]: I1209 12:20:45.640701 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-42tlr" podStartSLOduration=2.11860743 podStartE2EDuration="10.640681336s" podCreationTimestamp="2025-12-09 12:20:35 +0000 UTC" firstStartedPulling="2025-12-09 12:20:36.250848342 +0000 UTC m=+848.811329393" 
lastFinishedPulling="2025-12-09 12:20:44.772922248 +0000 UTC m=+857.333403299" observedRunningTime="2025-12-09 12:20:45.630541185 +0000 UTC m=+858.191022236" watchObservedRunningTime="2025-12-09 12:20:45.640681336 +0000 UTC m=+858.201162387" Dec 09 12:20:46 crc kubenswrapper[4970]: I1209 12:20:46.623861 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l44p8" event={"ID":"279ab0b4-f5a6-4150-ab8b-1deb21504a0a","Type":"ContainerStarted","Data":"f93a78e6568eee3f914a240551f2cf0e16d2fe1445b9400fbe9d678a8e9bc18c"} Dec 09 12:20:46 crc kubenswrapper[4970]: I1209 12:20:46.643531 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l44p8" podStartSLOduration=3.101604572 podStartE2EDuration="8.643513779s" podCreationTimestamp="2025-12-09 12:20:38 +0000 UTC" firstStartedPulling="2025-12-09 12:20:40.570830037 +0000 UTC m=+853.131311088" lastFinishedPulling="2025-12-09 12:20:46.112739194 +0000 UTC m=+858.673220295" observedRunningTime="2025-12-09 12:20:46.63982551 +0000 UTC m=+859.200306571" watchObservedRunningTime="2025-12-09 12:20:46.643513779 +0000 UTC m=+859.203994830" Dec 09 12:20:48 crc kubenswrapper[4970]: I1209 12:20:48.804444 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l44p8" Dec 09 12:20:48 crc kubenswrapper[4970]: I1209 12:20:48.804845 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l44p8" Dec 09 12:20:48 crc kubenswrapper[4970]: I1209 12:20:48.843515 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l44p8" Dec 09 12:20:53 crc kubenswrapper[4970]: I1209 12:20:53.308719 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2h8bq"] Dec 09 12:20:53 crc kubenswrapper[4970]: E1209 12:20:53.309638 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab2f3090-38c2-46df-a717-503643a85a34" containerName="extract-utilities" Dec 09 12:20:53 crc kubenswrapper[4970]: I1209 12:20:53.309656 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab2f3090-38c2-46df-a717-503643a85a34" containerName="extract-utilities" Dec 09 12:20:53 crc kubenswrapper[4970]: E1209 12:20:53.309670 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab2f3090-38c2-46df-a717-503643a85a34" containerName="registry-server" Dec 09 12:20:53 crc kubenswrapper[4970]: I1209 12:20:53.309677 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab2f3090-38c2-46df-a717-503643a85a34" containerName="registry-server" Dec 09 12:20:53 crc kubenswrapper[4970]: E1209 12:20:53.309708 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab2f3090-38c2-46df-a717-503643a85a34" containerName="extract-content" Dec 09 12:20:53 crc kubenswrapper[4970]: I1209 12:20:53.309715 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab2f3090-38c2-46df-a717-503643a85a34" containerName="extract-content" Dec 09 12:20:53 crc kubenswrapper[4970]: I1209 12:20:53.309868 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab2f3090-38c2-46df-a717-503643a85a34" containerName="registry-server" Dec 09 12:20:53 crc kubenswrapper[4970]: I1209 12:20:53.311129 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2h8bq" Dec 09 12:20:53 crc kubenswrapper[4970]: I1209 12:20:53.325607 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2h8bq"] Dec 09 12:20:53 crc kubenswrapper[4970]: I1209 12:20:53.407952 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30d540ae-51c2-421b-88e8-09f8ab24af89-catalog-content\") pod \"redhat-operators-2h8bq\" (UID: \"30d540ae-51c2-421b-88e8-09f8ab24af89\") " pod="openshift-marketplace/redhat-operators-2h8bq" Dec 09 12:20:53 crc kubenswrapper[4970]: I1209 12:20:53.408005 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmvmk\" (UniqueName: \"kubernetes.io/projected/30d540ae-51c2-421b-88e8-09f8ab24af89-kube-api-access-xmvmk\") pod \"redhat-operators-2h8bq\" (UID: \"30d540ae-51c2-421b-88e8-09f8ab24af89\") " pod="openshift-marketplace/redhat-operators-2h8bq" Dec 09 12:20:53 crc kubenswrapper[4970]: I1209 12:20:53.408222 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30d540ae-51c2-421b-88e8-09f8ab24af89-utilities\") pod \"redhat-operators-2h8bq\" (UID: \"30d540ae-51c2-421b-88e8-09f8ab24af89\") " pod="openshift-marketplace/redhat-operators-2h8bq" Dec 09 12:20:53 crc kubenswrapper[4970]: I1209 12:20:53.509942 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30d540ae-51c2-421b-88e8-09f8ab24af89-catalog-content\") pod \"redhat-operators-2h8bq\" (UID: \"30d540ae-51c2-421b-88e8-09f8ab24af89\") " pod="openshift-marketplace/redhat-operators-2h8bq" Dec 09 12:20:53 crc kubenswrapper[4970]: I1209 12:20:53.510220 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmvmk\" (UniqueName: \"kubernetes.io/projected/30d540ae-51c2-421b-88e8-09f8ab24af89-kube-api-access-xmvmk\") pod \"redhat-operators-2h8bq\" (UID: \"30d540ae-51c2-421b-88e8-09f8ab24af89\") " pod="openshift-marketplace/redhat-operators-2h8bq" Dec 09 12:20:53 crc kubenswrapper[4970]: I1209 12:20:53.510412 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30d540ae-51c2-421b-88e8-09f8ab24af89-utilities\") pod \"redhat-operators-2h8bq\" (UID: \"30d540ae-51c2-421b-88e8-09f8ab24af89\") " pod="openshift-marketplace/redhat-operators-2h8bq" Dec 09 12:20:53 crc kubenswrapper[4970]: I1209 12:20:53.510787 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30d540ae-51c2-421b-88e8-09f8ab24af89-catalog-content\") pod \"redhat-operators-2h8bq\" (UID: \"30d540ae-51c2-421b-88e8-09f8ab24af89\") " pod="openshift-marketplace/redhat-operators-2h8bq" Dec 09 12:20:53 crc kubenswrapper[4970]: I1209 12:20:53.510796 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30d540ae-51c2-421b-88e8-09f8ab24af89-utilities\") pod \"redhat-operators-2h8bq\" (UID: \"30d540ae-51c2-421b-88e8-09f8ab24af89\") " pod="openshift-marketplace/redhat-operators-2h8bq" Dec 09 12:20:53 crc kubenswrapper[4970]: I1209 12:20:53.529958 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xmvmk\" (UniqueName: \"kubernetes.io/projected/30d540ae-51c2-421b-88e8-09f8ab24af89-kube-api-access-xmvmk\") pod \"redhat-operators-2h8bq\" (UID: \"30d540ae-51c2-421b-88e8-09f8ab24af89\") " pod="openshift-marketplace/redhat-operators-2h8bq" Dec 09 12:20:53 crc kubenswrapper[4970]: I1209 12:20:53.632305 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2h8bq" Dec 09 12:20:54 crc kubenswrapper[4970]: I1209 12:20:54.170791 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2h8bq"] Dec 09 12:20:54 crc kubenswrapper[4970]: I1209 12:20:54.682368 4970 generic.go:334] "Generic (PLEG): container finished" podID="30d540ae-51c2-421b-88e8-09f8ab24af89" containerID="13db9125f852997306965a7059406547cc9437b8da084a2b13fc785eae0423c8" exitCode=0 Dec 09 12:20:54 crc kubenswrapper[4970]: I1209 12:20:54.682607 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2h8bq" event={"ID":"30d540ae-51c2-421b-88e8-09f8ab24af89","Type":"ContainerDied","Data":"13db9125f852997306965a7059406547cc9437b8da084a2b13fc785eae0423c8"} Dec 09 12:20:54 crc kubenswrapper[4970]: I1209 12:20:54.682637 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2h8bq" event={"ID":"30d540ae-51c2-421b-88e8-09f8ab24af89","Type":"ContainerStarted","Data":"845daaf3b8dc4b7b9ad4ca1f5b6d860430911123b4557729a739d9f14f78b4a5"} Dec 09 12:20:58 crc kubenswrapper[4970]: I1209 12:20:58.860113 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l44p8" Dec 09 12:20:58 crc kubenswrapper[4970]: I1209 12:20:58.911265 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l44p8"] Dec 09 12:20:59 crc kubenswrapper[4970]: I1209 12:20:59.724748 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-l44p8" podUID="279ab0b4-f5a6-4150-ab8b-1deb21504a0a" containerName="registry-server" containerID="cri-o://f93a78e6568eee3f914a240551f2cf0e16d2fe1445b9400fbe9d678a8e9bc18c" gracePeriod=2 Dec 09 12:21:00 crc kubenswrapper[4970]: I1209 12:21:00.734654 4970 generic.go:334] "Generic (PLEG): container finished" podID="279ab0b4-f5a6-4150-ab8b-1deb21504a0a" containerID="f93a78e6568eee3f914a240551f2cf0e16d2fe1445b9400fbe9d678a8e9bc18c" exitCode=0 Dec 09 12:21:00 crc kubenswrapper[4970]: I1209 12:21:00.734860 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l44p8" event={"ID":"279ab0b4-f5a6-4150-ab8b-1deb21504a0a","Type":"ContainerDied","Data":"f93a78e6568eee3f914a240551f2cf0e16d2fe1445b9400fbe9d678a8e9bc18c"} Dec 09 12:21:05 crc kubenswrapper[4970]: I1209 12:21:05.420695 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l44p8" Dec 09 12:21:05 crc kubenswrapper[4970]: I1209 12:21:05.571725 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/279ab0b4-f5a6-4150-ab8b-1deb21504a0a-catalog-content\") pod \"279ab0b4-f5a6-4150-ab8b-1deb21504a0a\" (UID: \"279ab0b4-f5a6-4150-ab8b-1deb21504a0a\") " Dec 09 12:21:05 crc kubenswrapper[4970]: I1209 12:21:05.571915 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvkc9\" (UniqueName: \"kubernetes.io/projected/279ab0b4-f5a6-4150-ab8b-1deb21504a0a-kube-api-access-gvkc9\") pod \"279ab0b4-f5a6-4150-ab8b-1deb21504a0a\" (UID: \"279ab0b4-f5a6-4150-ab8b-1deb21504a0a\") " Dec 09 12:21:05 crc kubenswrapper[4970]: I1209 12:21:05.571986 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/279ab0b4-f5a6-4150-ab8b-1deb21504a0a-utilities\") pod \"279ab0b4-f5a6-4150-ab8b-1deb21504a0a\" (UID: \"279ab0b4-f5a6-4150-ab8b-1deb21504a0a\") " Dec 09 12:21:05 crc kubenswrapper[4970]: I1209 12:21:05.573119 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/279ab0b4-f5a6-4150-ab8b-1deb21504a0a-utilities" (OuterVolumeSpecName: "utilities") pod "279ab0b4-f5a6-4150-ab8b-1deb21504a0a" (UID: "279ab0b4-f5a6-4150-ab8b-1deb21504a0a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:21:05 crc kubenswrapper[4970]: I1209 12:21:05.577310 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/279ab0b4-f5a6-4150-ab8b-1deb21504a0a-kube-api-access-gvkc9" (OuterVolumeSpecName: "kube-api-access-gvkc9") pod "279ab0b4-f5a6-4150-ab8b-1deb21504a0a" (UID: "279ab0b4-f5a6-4150-ab8b-1deb21504a0a"). InnerVolumeSpecName "kube-api-access-gvkc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:21:05 crc kubenswrapper[4970]: I1209 12:21:05.623872 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/279ab0b4-f5a6-4150-ab8b-1deb21504a0a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "279ab0b4-f5a6-4150-ab8b-1deb21504a0a" (UID: "279ab0b4-f5a6-4150-ab8b-1deb21504a0a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:21:05 crc kubenswrapper[4970]: I1209 12:21:05.673997 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvkc9\" (UniqueName: \"kubernetes.io/projected/279ab0b4-f5a6-4150-ab8b-1deb21504a0a-kube-api-access-gvkc9\") on node \"crc\" DevicePath \"\"" Dec 09 12:21:05 crc kubenswrapper[4970]: I1209 12:21:05.674043 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/279ab0b4-f5a6-4150-ab8b-1deb21504a0a-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:21:05 crc kubenswrapper[4970]: I1209 12:21:05.674057 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/279ab0b4-f5a6-4150-ab8b-1deb21504a0a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:21:05 crc kubenswrapper[4970]: I1209 12:21:05.779559 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l44p8" event={"ID":"279ab0b4-f5a6-4150-ab8b-1deb21504a0a","Type":"ContainerDied","Data":"8fff8d8cd43a1a1a5e78d99727d56b8e3033e6d094e3b57b4fa84708b95ba7bc"} Dec 09 12:21:05 crc kubenswrapper[4970]: I1209 12:21:05.779637 4970 scope.go:117] "RemoveContainer" containerID="f93a78e6568eee3f914a240551f2cf0e16d2fe1445b9400fbe9d678a8e9bc18c" Dec 09 12:21:05 crc kubenswrapper[4970]: I1209 12:21:05.779754 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l44p8" Dec 09 12:21:05 crc kubenswrapper[4970]: I1209 12:21:05.808282 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l44p8"] Dec 09 12:21:05 crc kubenswrapper[4970]: I1209 12:21:05.821507 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-l44p8"] Dec 09 12:21:05 crc kubenswrapper[4970]: I1209 12:21:05.847098 4970 scope.go:117] "RemoveContainer" containerID="7bbb33d8d4434bb6d4178c8385e98b17a014b1393dbd3885df30de672544a6fe" Dec 09 12:21:05 crc kubenswrapper[4970]: I1209 12:21:05.865052 4970 scope.go:117] "RemoveContainer" containerID="7b623982db0b34e8d14a65a84eb51900cb83ca0199ecaa69c12c77f0d63255cf" Dec 09 12:21:06 crc kubenswrapper[4970]: I1209 12:21:06.788353 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2h8bq" event={"ID":"30d540ae-51c2-421b-88e8-09f8ab24af89","Type":"ContainerStarted","Data":"57e75a7cbc884a37ae6902813b9b022eeec9ee1e03fe1c40974eee1bce417094"} Dec 09 12:21:07 crc kubenswrapper[4970]: I1209 12:21:07.798811 4970 generic.go:334] "Generic (PLEG): container finished" podID="30d540ae-51c2-421b-88e8-09f8ab24af89" containerID="57e75a7cbc884a37ae6902813b9b022eeec9ee1e03fe1c40974eee1bce417094" exitCode=0 Dec 09 12:21:07 crc kubenswrapper[4970]: I1209 12:21:07.798915 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2h8bq" event={"ID":"30d540ae-51c2-421b-88e8-09f8ab24af89","Type":"ContainerDied","Data":"57e75a7cbc884a37ae6902813b9b022eeec9ee1e03fe1c40974eee1bce417094"} Dec 09 12:21:07 crc kubenswrapper[4970]: I1209 12:21:07.820192 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="279ab0b4-f5a6-4150-ab8b-1deb21504a0a" path="/var/lib/kubelet/pods/279ab0b4-f5a6-4150-ab8b-1deb21504a0a/volumes" Dec 09 12:21:09 crc kubenswrapper[4970]: I1209 12:21:09.820215 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-2h8bq" event={"ID":"30d540ae-51c2-421b-88e8-09f8ab24af89","Type":"ContainerStarted","Data":"cd241cfee30772da59444403b1e40107f793c198fa2e8c0432371a5017157f5e"} Dec 09 12:21:09 crc kubenswrapper[4970]: I1209 12:21:09.839585 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2h8bq" podStartSLOduration=2.884637523 podStartE2EDuration="16.839565676s" podCreationTimestamp="2025-12-09 12:20:53 +0000 UTC" firstStartedPulling="2025-12-09 12:20:54.684010153 +0000 UTC m=+867.244491214" lastFinishedPulling="2025-12-09 12:21:08.638938316 +0000 UTC m=+881.199419367" observedRunningTime="2025-12-09 12:21:09.836461983 +0000 UTC m=+882.396943044" watchObservedRunningTime="2025-12-09 12:21:09.839565676 +0000 UTC m=+882.400046727" Dec 09 12:21:13 crc kubenswrapper[4970]: I1209 12:21:13.632715 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2h8bq" Dec 09 12:21:13 crc kubenswrapper[4970]: I1209 12:21:13.633289 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2h8bq" Dec 09 12:21:14 crc kubenswrapper[4970]: I1209 12:21:14.374981 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmjjns"] Dec 09 12:21:14 crc kubenswrapper[4970]: E1209 12:21:14.375322 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="279ab0b4-f5a6-4150-ab8b-1deb21504a0a" containerName="registry-server" Dec 09 12:21:14 crc kubenswrapper[4970]: I1209 12:21:14.375338 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="279ab0b4-f5a6-4150-ab8b-1deb21504a0a" containerName="registry-server" Dec 09 12:21:14 crc kubenswrapper[4970]: E1209 12:21:14.375359 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="279ab0b4-f5a6-4150-ab8b-1deb21504a0a" containerName="extract-utilities" Dec 09 12:21:14 crc kubenswrapper[4970]: I1209 12:21:14.375368 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="279ab0b4-f5a6-4150-ab8b-1deb21504a0a" containerName="extract-utilities" Dec 09 12:21:14 crc kubenswrapper[4970]: E1209 12:21:14.375383 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="279ab0b4-f5a6-4150-ab8b-1deb21504a0a" containerName="extract-content" Dec 09 12:21:14 crc kubenswrapper[4970]: I1209 12:21:14.375391 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="279ab0b4-f5a6-4150-ab8b-1deb21504a0a" containerName="extract-content" Dec 09 12:21:14 crc kubenswrapper[4970]: I1209 12:21:14.375534 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="279ab0b4-f5a6-4150-ab8b-1deb21504a0a" containerName="registry-server" Dec 09 12:21:14 crc kubenswrapper[4970]: I1209 12:21:14.376528 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmjjns" Dec 09 12:21:14 crc kubenswrapper[4970]: I1209 12:21:14.378934 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 09 12:21:14 crc kubenswrapper[4970]: I1209 12:21:14.386106 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmjjns"] Dec 09 12:21:14 crc kubenswrapper[4970]: I1209 12:21:14.454590 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a193be7a-f0ec-4f93-a8f8-37dc88376a5e-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmjjns\" (UID: \"a193be7a-f0ec-4f93-a8f8-37dc88376a5e\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmjjns" Dec 09 12:21:14 crc kubenswrapper[4970]: I1209 12:21:14.454661 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a193be7a-f0ec-4f93-a8f8-37dc88376a5e-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmjjns\" (UID: \"a193be7a-f0ec-4f93-a8f8-37dc88376a5e\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmjjns" Dec 09 12:21:14 crc kubenswrapper[4970]: I1209 12:21:14.454686 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7kpg\" (UniqueName: \"kubernetes.io/projected/a193be7a-f0ec-4f93-a8f8-37dc88376a5e-kube-api-access-f7kpg\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmjjns\" (UID: \"a193be7a-f0ec-4f93-a8f8-37dc88376a5e\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmjjns" Dec 09 12:21:14 crc kubenswrapper[4970]: I1209 12:21:14.555807 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a193be7a-f0ec-4f93-a8f8-37dc88376a5e-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmjjns\" (UID: \"a193be7a-f0ec-4f93-a8f8-37dc88376a5e\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmjjns" Dec 09 12:21:14 crc kubenswrapper[4970]: I1209 12:21:14.556214 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7kpg\" (UniqueName: \"kubernetes.io/projected/a193be7a-f0ec-4f93-a8f8-37dc88376a5e-kube-api-access-f7kpg\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmjjns\" (UID: \"a193be7a-f0ec-4f93-a8f8-37dc88376a5e\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmjjns" Dec 09 12:21:14 crc kubenswrapper[4970]: I1209 12:21:14.556375 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a193be7a-f0ec-4f93-a8f8-37dc88376a5e-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmjjns\" (UID: \"a193be7a-f0ec-4f93-a8f8-37dc88376a5e\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmjjns" Dec 09 12:21:14 crc kubenswrapper[4970]: I1209 12:21:14.556432 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/a193be7a-f0ec-4f93-a8f8-37dc88376a5e-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmjjns\" (UID: \"a193be7a-f0ec-4f93-a8f8-37dc88376a5e\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmjjns" Dec 09 12:21:14 crc kubenswrapper[4970]: I1209 12:21:14.556818 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a193be7a-f0ec-4f93-a8f8-37dc88376a5e-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmjjns\" (UID: \"a193be7a-f0ec-4f93-a8f8-37dc88376a5e\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmjjns" Dec 09 12:21:14 crc kubenswrapper[4970]: I1209 12:21:14.573296 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7kpg\" (UniqueName: \"kubernetes.io/projected/a193be7a-f0ec-4f93-a8f8-37dc88376a5e-kube-api-access-f7kpg\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmjjns\" (UID: \"a193be7a-f0ec-4f93-a8f8-37dc88376a5e\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmjjns" Dec 09 12:21:14 crc kubenswrapper[4970]: I1209 12:21:14.698767 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmjjns" Dec 09 12:21:14 crc kubenswrapper[4970]: I1209 12:21:14.702807 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2h8bq" podUID="30d540ae-51c2-421b-88e8-09f8ab24af89" containerName="registry-server" probeResult="failure" output=< Dec 09 12:21:14 crc kubenswrapper[4970]: timeout: failed to connect service ":50051" within 1s Dec 09 12:21:14 crc kubenswrapper[4970]: > Dec 09 12:21:15 crc kubenswrapper[4970]: I1209 12:21:15.313538 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmjjns"] Dec 09 12:21:15 crc kubenswrapper[4970]: I1209 12:21:15.865297 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmjjns" event={"ID":"a193be7a-f0ec-4f93-a8f8-37dc88376a5e","Type":"ContainerStarted","Data":"a4874366f925188249f8c7e18cc322c85ce62afc4f330340c04fec37b01bbbd9"} Dec 09 12:21:17 crc kubenswrapper[4970]: I1209 12:21:17.889311 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmjjns" event={"ID":"a193be7a-f0ec-4f93-a8f8-37dc88376a5e","Type":"ContainerStarted","Data":"2c051746ea2078fb3323e80aa7f6148144ee54a499e84c550804be91ac24be25"} Dec 09 12:21:19 crc kubenswrapper[4970]: I1209 12:21:19.902917 4970 generic.go:334] "Generic (PLEG): container finished" podID="a193be7a-f0ec-4f93-a8f8-37dc88376a5e" containerID="2c051746ea2078fb3323e80aa7f6148144ee54a499e84c550804be91ac24be25" exitCode=0 Dec 09 12:21:19 crc kubenswrapper[4970]: I1209 12:21:19.902973 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmjjns" event={"ID":"a193be7a-f0ec-4f93-a8f8-37dc88376a5e","Type":"ContainerDied","Data":"2c051746ea2078fb3323e80aa7f6148144ee54a499e84c550804be91ac24be25"} Dec 09 12:21:23 crc kubenswrapper[4970]: I1209 12:21:23.676358 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-2h8bq" Dec 09 12:21:23 crc kubenswrapper[4970]: I1209 12:21:23.718079 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2h8bq" Dec 09 12:21:23 crc kubenswrapper[4970]: I1209 12:21:23.931280 4970 generic.go:334] "Generic (PLEG): container finished" podID="a193be7a-f0ec-4f93-a8f8-37dc88376a5e" containerID="62f6c272f0f806fab191801a7b493e647c4ad158744002e1798c292761355f43" exitCode=0 Dec 09 12:21:23 crc kubenswrapper[4970]: I1209 12:21:23.931380 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmjjns" event={"ID":"a193be7a-f0ec-4f93-a8f8-37dc88376a5e","Type":"ContainerDied","Data":"62f6c272f0f806fab191801a7b493e647c4ad158744002e1798c292761355f43"} Dec 09 12:21:24 crc kubenswrapper[4970]: I1209 12:21:24.336783 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2h8bq"] Dec 09 12:21:24 crc kubenswrapper[4970]: I1209 12:21:24.513368 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ctsvn"] Dec 09 12:21:24 crc kubenswrapper[4970]: I1209 12:21:24.513671 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ctsvn" podUID="0ab6bbc4-e295-4f79-b8b1-5151d511d302" containerName="registry-server" containerID="cri-o://29f229ab4e6af488406c03b33968662986759f2b77dc29446d3acc4b0fb4cb4b" gracePeriod=2 Dec 09 12:21:24 crc kubenswrapper[4970]: I1209 12:21:24.941287 4970 generic.go:334] "Generic (PLEG): container finished" podID="a193be7a-f0ec-4f93-a8f8-37dc88376a5e" containerID="995c1a0f118e37e0418affc58643ea6eec2b618a548444db8f68dcde98942918" exitCode=0 Dec 09 12:21:24 crc kubenswrapper[4970]: I1209 12:21:24.941362 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmjjns" event={"ID":"a193be7a-f0ec-4f93-a8f8-37dc88376a5e","Type":"ContainerDied","Data":"995c1a0f118e37e0418affc58643ea6eec2b618a548444db8f68dcde98942918"} Dec 09 12:21:24 crc kubenswrapper[4970]: I1209 12:21:24.945499 4970 generic.go:334] "Generic (PLEG): container finished" podID="0ab6bbc4-e295-4f79-b8b1-5151d511d302" containerID="29f229ab4e6af488406c03b33968662986759f2b77dc29446d3acc4b0fb4cb4b" exitCode=0 Dec 09 12:21:24 crc kubenswrapper[4970]: I1209 12:21:24.945665 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ctsvn" event={"ID":"0ab6bbc4-e295-4f79-b8b1-5151d511d302","Type":"ContainerDied","Data":"29f229ab4e6af488406c03b33968662986759f2b77dc29446d3acc4b0fb4cb4b"} Dec 09 12:21:24 crc kubenswrapper[4970]: I1209 12:21:24.945715 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ctsvn" event={"ID":"0ab6bbc4-e295-4f79-b8b1-5151d511d302","Type":"ContainerDied","Data":"54debbc02c07d2b64468da86ad623fa07baf4ffdd69a793148640178ff56d3b3"} Dec 09 12:21:24 crc kubenswrapper[4970]: I1209 12:21:24.945728 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54debbc02c07d2b64468da86ad623fa07baf4ffdd69a793148640178ff56d3b3" Dec 09 12:21:25 crc kubenswrapper[4970]: I1209 12:21:25.001571 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ctsvn" Dec 09 12:21:25 crc kubenswrapper[4970]: I1209 12:21:25.127408 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ab6bbc4-e295-4f79-b8b1-5151d511d302-catalog-content\") pod \"0ab6bbc4-e295-4f79-b8b1-5151d511d302\" (UID: \"0ab6bbc4-e295-4f79-b8b1-5151d511d302\") " Dec 09 12:21:25 crc kubenswrapper[4970]: I1209 12:21:25.127762 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dtlv\" (UniqueName: \"kubernetes.io/projected/0ab6bbc4-e295-4f79-b8b1-5151d511d302-kube-api-access-8dtlv\") pod \"0ab6bbc4-e295-4f79-b8b1-5151d511d302\" (UID: \"0ab6bbc4-e295-4f79-b8b1-5151d511d302\") " Dec 09 12:21:25 crc kubenswrapper[4970]: I1209 12:21:25.127887 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ab6bbc4-e295-4f79-b8b1-5151d511d302-utilities\") pod \"0ab6bbc4-e295-4f79-b8b1-5151d511d302\" (UID: \"0ab6bbc4-e295-4f79-b8b1-5151d511d302\") " Dec 09 12:21:25 crc kubenswrapper[4970]: I1209 12:21:25.128977 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ab6bbc4-e295-4f79-b8b1-5151d511d302-utilities" (OuterVolumeSpecName: "utilities") pod "0ab6bbc4-e295-4f79-b8b1-5151d511d302" (UID: "0ab6bbc4-e295-4f79-b8b1-5151d511d302"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:21:25 crc kubenswrapper[4970]: I1209 12:21:25.134210 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ab6bbc4-e295-4f79-b8b1-5151d511d302-kube-api-access-8dtlv" (OuterVolumeSpecName: "kube-api-access-8dtlv") pod "0ab6bbc4-e295-4f79-b8b1-5151d511d302" (UID: "0ab6bbc4-e295-4f79-b8b1-5151d511d302"). InnerVolumeSpecName "kube-api-access-8dtlv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:21:25 crc kubenswrapper[4970]: I1209 12:21:25.230189 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dtlv\" (UniqueName: \"kubernetes.io/projected/0ab6bbc4-e295-4f79-b8b1-5151d511d302-kube-api-access-8dtlv\") on node \"crc\" DevicePath \"\"" Dec 09 12:21:25 crc kubenswrapper[4970]: I1209 12:21:25.230227 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ab6bbc4-e295-4f79-b8b1-5151d511d302-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:21:25 crc kubenswrapper[4970]: I1209 12:21:25.236861 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ab6bbc4-e295-4f79-b8b1-5151d511d302-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0ab6bbc4-e295-4f79-b8b1-5151d511d302" (UID: "0ab6bbc4-e295-4f79-b8b1-5151d511d302"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:21:25 crc kubenswrapper[4970]: I1209 12:21:25.331670 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ab6bbc4-e295-4f79-b8b1-5151d511d302-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:21:25 crc kubenswrapper[4970]: I1209 12:21:25.952534 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ctsvn" Dec 09 12:21:25 crc kubenswrapper[4970]: I1209 12:21:25.978167 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ctsvn"] Dec 09 12:21:25 crc kubenswrapper[4970]: I1209 12:21:25.987148 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ctsvn"] Dec 09 12:21:26 crc kubenswrapper[4970]: I1209 12:21:26.247869 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmjjns" Dec 09 12:21:26 crc kubenswrapper[4970]: I1209 12:21:26.353368 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7kpg\" (UniqueName: \"kubernetes.io/projected/a193be7a-f0ec-4f93-a8f8-37dc88376a5e-kube-api-access-f7kpg\") pod \"a193be7a-f0ec-4f93-a8f8-37dc88376a5e\" (UID: \"a193be7a-f0ec-4f93-a8f8-37dc88376a5e\") " Dec 09 12:21:26 crc kubenswrapper[4970]: I1209 12:21:26.353439 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a193be7a-f0ec-4f93-a8f8-37dc88376a5e-util\") pod \"a193be7a-f0ec-4f93-a8f8-37dc88376a5e\" (UID: \"a193be7a-f0ec-4f93-a8f8-37dc88376a5e\") " Dec 09 12:21:26 crc kubenswrapper[4970]: I1209 12:21:26.353651 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a193be7a-f0ec-4f93-a8f8-37dc88376a5e-bundle\") pod \"a193be7a-f0ec-4f93-a8f8-37dc88376a5e\" (UID: \"a193be7a-f0ec-4f93-a8f8-37dc88376a5e\") " Dec 09 12:21:26 crc kubenswrapper[4970]: I1209 12:21:26.354839 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a193be7a-f0ec-4f93-a8f8-37dc88376a5e-bundle" (OuterVolumeSpecName: "bundle") pod "a193be7a-f0ec-4f93-a8f8-37dc88376a5e" (UID: "a193be7a-f0ec-4f93-a8f8-37dc88376a5e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:21:26 crc kubenswrapper[4970]: I1209 12:21:26.360510 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a193be7a-f0ec-4f93-a8f8-37dc88376a5e-kube-api-access-f7kpg" (OuterVolumeSpecName: "kube-api-access-f7kpg") pod "a193be7a-f0ec-4f93-a8f8-37dc88376a5e" (UID: "a193be7a-f0ec-4f93-a8f8-37dc88376a5e"). InnerVolumeSpecName "kube-api-access-f7kpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:21:26 crc kubenswrapper[4970]: I1209 12:21:26.366167 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a193be7a-f0ec-4f93-a8f8-37dc88376a5e-util" (OuterVolumeSpecName: "util") pod "a193be7a-f0ec-4f93-a8f8-37dc88376a5e" (UID: "a193be7a-f0ec-4f93-a8f8-37dc88376a5e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:21:26 crc kubenswrapper[4970]: I1209 12:21:26.455409 4970 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a193be7a-f0ec-4f93-a8f8-37dc88376a5e-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:21:26 crc kubenswrapper[4970]: I1209 12:21:26.455449 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7kpg\" (UniqueName: \"kubernetes.io/projected/a193be7a-f0ec-4f93-a8f8-37dc88376a5e-kube-api-access-f7kpg\") on node \"crc\" DevicePath \"\"" Dec 09 12:21:26 crc kubenswrapper[4970]: I1209 12:21:26.455461 4970 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a193be7a-f0ec-4f93-a8f8-37dc88376a5e-util\") on node \"crc\" DevicePath \"\"" Dec 09 12:21:26 crc kubenswrapper[4970]: I1209 12:21:26.961376 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmjjns" event={"ID":"a193be7a-f0ec-4f93-a8f8-37dc88376a5e","Type":"ContainerDied","Data":"a4874366f925188249f8c7e18cc322c85ce62afc4f330340c04fec37b01bbbd9"} Dec 09 12:21:26 crc kubenswrapper[4970]: I1209 12:21:26.961718 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4874366f925188249f8c7e18cc322c85ce62afc4f330340c04fec37b01bbbd9" Dec 09 12:21:26 crc kubenswrapper[4970]: I1209 12:21:26.961425 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmjjns" Dec 09 12:21:27 crc kubenswrapper[4970]: I1209 12:21:27.833078 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ab6bbc4-e295-4f79-b8b1-5151d511d302" path="/var/lib/kubelet/pods/0ab6bbc4-e295-4f79-b8b1-5151d511d302/volumes" Dec 09 12:21:31 crc kubenswrapper[4970]: I1209 12:21:31.301621 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-xjkb9"] Dec 09 12:21:31 crc kubenswrapper[4970]: E1209 12:21:31.302182 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ab6bbc4-e295-4f79-b8b1-5151d511d302" containerName="extract-utilities" Dec 09 12:21:31 crc kubenswrapper[4970]: I1209 12:21:31.302195 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ab6bbc4-e295-4f79-b8b1-5151d511d302" containerName="extract-utilities" Dec 09 12:21:31 crc kubenswrapper[4970]: E1209 12:21:31.302206 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a193be7a-f0ec-4f93-a8f8-37dc88376a5e" containerName="pull" Dec 09 12:21:31 crc kubenswrapper[4970]: I1209 12:21:31.302212 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="a193be7a-f0ec-4f93-a8f8-37dc88376a5e" containerName="pull" Dec 09 12:21:31 crc kubenswrapper[4970]: E1209 12:21:31.302222 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ab6bbc4-e295-4f79-b8b1-5151d511d302" containerName="extract-content" Dec 09 12:21:31 crc kubenswrapper[4970]: I1209 12:21:31.302230 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ab6bbc4-e295-4f79-b8b1-5151d511d302" containerName="extract-content" Dec 09 12:21:31 crc kubenswrapper[4970]: E1209 12:21:31.302277 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a193be7a-f0ec-4f93-a8f8-37dc88376a5e" containerName="extract" Dec 09 12:21:31 crc kubenswrapper[4970]: I1209 12:21:31.302285 4970 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="a193be7a-f0ec-4f93-a8f8-37dc88376a5e" containerName="extract" Dec 09 12:21:31 crc kubenswrapper[4970]: E1209 12:21:31.302295 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a193be7a-f0ec-4f93-a8f8-37dc88376a5e" containerName="util" Dec 09 12:21:31 crc kubenswrapper[4970]: I1209 12:21:31.302300 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="a193be7a-f0ec-4f93-a8f8-37dc88376a5e" containerName="util" Dec 09 12:21:31 crc kubenswrapper[4970]: E1209 12:21:31.302307 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ab6bbc4-e295-4f79-b8b1-5151d511d302" containerName="registry-server" Dec 09 12:21:31 crc kubenswrapper[4970]: I1209 12:21:31.302313 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ab6bbc4-e295-4f79-b8b1-5151d511d302" containerName="registry-server" Dec 09 12:21:31 crc kubenswrapper[4970]: I1209 12:21:31.302420 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ab6bbc4-e295-4f79-b8b1-5151d511d302" containerName="registry-server" Dec 09 12:21:31 crc kubenswrapper[4970]: I1209 12:21:31.302434 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="a193be7a-f0ec-4f93-a8f8-37dc88376a5e" containerName="extract" Dec 09 12:21:31 crc kubenswrapper[4970]: I1209 12:21:31.302906 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-xjkb9" Dec 09 12:21:31 crc kubenswrapper[4970]: I1209 12:21:31.304937 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 09 12:21:31 crc kubenswrapper[4970]: I1209 12:21:31.305957 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 09 12:21:31 crc kubenswrapper[4970]: I1209 12:21:31.306071 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-sgzjw" Dec 09 12:21:31 crc kubenswrapper[4970]: I1209 12:21:31.325169 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-xjkb9"] Dec 09 12:21:31 crc kubenswrapper[4970]: I1209 12:21:31.334377 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmxqk\" (UniqueName: \"kubernetes.io/projected/9ed7884d-3f8f-4cc8-b2fa-26b40bbdfdc2-kube-api-access-tmxqk\") pod \"nmstate-operator-5b5b58f5c8-xjkb9\" (UID: \"9ed7884d-3f8f-4cc8-b2fa-26b40bbdfdc2\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-xjkb9" Dec 09 12:21:31 crc kubenswrapper[4970]: I1209 12:21:31.436282 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmxqk\" (UniqueName: \"kubernetes.io/projected/9ed7884d-3f8f-4cc8-b2fa-26b40bbdfdc2-kube-api-access-tmxqk\") pod \"nmstate-operator-5b5b58f5c8-xjkb9\" (UID: \"9ed7884d-3f8f-4cc8-b2fa-26b40bbdfdc2\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-xjkb9" Dec 09 12:21:31 crc kubenswrapper[4970]: I1209 12:21:31.470342 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmxqk\" (UniqueName: \"kubernetes.io/projected/9ed7884d-3f8f-4cc8-b2fa-26b40bbdfdc2-kube-api-access-tmxqk\") pod \"nmstate-operator-5b5b58f5c8-xjkb9\" (UID: \"9ed7884d-3f8f-4cc8-b2fa-26b40bbdfdc2\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-xjkb9" Dec 09 12:21:31 crc kubenswrapper[4970]: I1209 12:21:31.620721 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-xjkb9" Dec 09 12:21:32 crc kubenswrapper[4970]: I1209 12:21:32.084488 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-xjkb9"] Dec 09 12:21:32 crc kubenswrapper[4970]: I1209 12:21:32.376966 4970 scope.go:117] "RemoveContainer" containerID="29f229ab4e6af488406c03b33968662986759f2b77dc29446d3acc4b0fb4cb4b" Dec 09 12:21:32 crc kubenswrapper[4970]: I1209 12:21:32.396968 4970 scope.go:117] "RemoveContainer" containerID="bbdc27be80790bf191d98488876249f84e7a3f2e092bd5ce02a8391d5f0a7126" Dec 09 12:21:32 crc kubenswrapper[4970]: I1209 12:21:32.417254 4970 scope.go:117] "RemoveContainer" containerID="c2e2c7599a5a2db423fd6ad7c8aac977a0bc76acd7bccd621cfd11890a57af96" Dec 09 12:21:33 crc kubenswrapper[4970]: I1209 12:21:33.005965 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-xjkb9" event={"ID":"9ed7884d-3f8f-4cc8-b2fa-26b40bbdfdc2","Type":"ContainerStarted","Data":"20337970d9878f2d4b19fd56b6ee96fa5045b8598f91188d4ba5034b19bc73e4"} Dec 09 12:21:35 crc kubenswrapper[4970]: I1209 12:21:35.028594 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-xjkb9" event={"ID":"9ed7884d-3f8f-4cc8-b2fa-26b40bbdfdc2","Type":"ContainerStarted","Data":"c27e55203a21e8e9cde7e1bce118abe0bf81d1159f77fb3e5a93e041724c25b5"} Dec 09 12:21:35 crc kubenswrapper[4970]: I1209 12:21:35.047645 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-xjkb9" podStartSLOduration=1.73193501 podStartE2EDuration="4.047624418s" podCreationTimestamp="2025-12-09 12:21:31 +0000 UTC" firstStartedPulling="2025-12-09 12:21:32.096325498 +0000 UTC m=+904.656806559" lastFinishedPulling="2025-12-09 12:21:34.412014906 +0000 UTC m=+906.972495967" observedRunningTime="2025-12-09 12:21:35.042974424 +0000 UTC m=+907.603455485" watchObservedRunningTime="2025-12-09 12:21:35.047624418 +0000 UTC m=+907.608105469" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.068380 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-jwgml"] Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.070316 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-jwgml" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.074432 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-pscqd"] Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.075208 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-x8l4c" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.076656 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-pscqd" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.079119 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.082384 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-jwgml"] Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.103938 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-zddxq"] Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.104779 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-zddxq" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.106165 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzh4k\" (UniqueName: \"kubernetes.io/projected/74bf2616-c17d-4a12-89d4-416af30ff01a-kube-api-access-xzh4k\") pod \"nmstate-webhook-5f6d4c5ccb-pscqd\" (UID: \"74bf2616-c17d-4a12-89d4-416af30ff01a\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-pscqd" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.106273 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzlp9\" (UniqueName: \"kubernetes.io/projected/4cbc766b-5faa-43b1-ab55-5d25e23ee20e-kube-api-access-fzlp9\") pod \"nmstate-metrics-7f946cbc9-jwgml\" (UID: \"4cbc766b-5faa-43b1-ab55-5d25e23ee20e\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-jwgml" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.106332 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/74bf2616-c17d-4a12-89d4-416af30ff01a-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-pscqd\" (UID: \"74bf2616-c17d-4a12-89d4-416af30ff01a\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-pscqd" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.133888 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-pscqd"] Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.208101 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/74bf2616-c17d-4a12-89d4-416af30ff01a-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-pscqd\" (UID: \"74bf2616-c17d-4a12-89d4-416af30ff01a\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-pscqd" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.208152 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/364b73a8-4e46-496e-a629-2a9ec738c9ba-dbus-socket\") pod \"nmstate-handler-zddxq\" (UID: \"364b73a8-4e46-496e-a629-2a9ec738c9ba\") " pod="openshift-nmstate/nmstate-handler-zddxq" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.208195 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llwqc\" (UniqueName: \"kubernetes.io/projected/364b73a8-4e46-496e-a629-2a9ec738c9ba-kube-api-access-llwqc\") pod \"nmstate-handler-zddxq\" (UID: \"364b73a8-4e46-496e-a629-2a9ec738c9ba\") " pod="openshift-nmstate/nmstate-handler-zddxq" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.208264 4970 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xzh4k\" (UniqueName: \"kubernetes.io/projected/74bf2616-c17d-4a12-89d4-416af30ff01a-kube-api-access-xzh4k\") pod \"nmstate-webhook-5f6d4c5ccb-pscqd\" (UID: \"74bf2616-c17d-4a12-89d4-416af30ff01a\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-pscqd" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.208301 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/364b73a8-4e46-496e-a629-2a9ec738c9ba-nmstate-lock\") pod \"nmstate-handler-zddxq\" (UID: \"364b73a8-4e46-496e-a629-2a9ec738c9ba\") " pod="openshift-nmstate/nmstate-handler-zddxq" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.208349 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/364b73a8-4e46-496e-a629-2a9ec738c9ba-ovs-socket\") pod \"nmstate-handler-zddxq\" (UID: \"364b73a8-4e46-496e-a629-2a9ec738c9ba\") " pod="openshift-nmstate/nmstate-handler-zddxq" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.208374 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzlp9\" (UniqueName: \"kubernetes.io/projected/4cbc766b-5faa-43b1-ab55-5d25e23ee20e-kube-api-access-fzlp9\") pod \"nmstate-metrics-7f946cbc9-jwgml\" (UID: \"4cbc766b-5faa-43b1-ab55-5d25e23ee20e\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-jwgml" Dec 09 12:21:36 crc kubenswrapper[4970]: E1209 12:21:36.208733 4970 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Dec 09 12:21:36 crc kubenswrapper[4970]: E1209 12:21:36.208779 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74bf2616-c17d-4a12-89d4-416af30ff01a-tls-key-pair podName:74bf2616-c17d-4a12-89d4-416af30ff01a nodeName:}" failed. No retries permitted until 2025-12-09 12:21:36.708765216 +0000 UTC m=+909.269246267 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/74bf2616-c17d-4a12-89d4-416af30ff01a-tls-key-pair") pod "nmstate-webhook-5f6d4c5ccb-pscqd" (UID: "74bf2616-c17d-4a12-89d4-416af30ff01a") : secret "openshift-nmstate-webhook" not found Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.236833 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzh4k\" (UniqueName: \"kubernetes.io/projected/74bf2616-c17d-4a12-89d4-416af30ff01a-kube-api-access-xzh4k\") pod \"nmstate-webhook-5f6d4c5ccb-pscqd\" (UID: \"74bf2616-c17d-4a12-89d4-416af30ff01a\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-pscqd" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.246147 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzlp9\" (UniqueName: \"kubernetes.io/projected/4cbc766b-5faa-43b1-ab55-5d25e23ee20e-kube-api-access-fzlp9\") pod \"nmstate-metrics-7f946cbc9-jwgml\" (UID: \"4cbc766b-5faa-43b1-ab55-5d25e23ee20e\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-jwgml" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.310281 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/364b73a8-4e46-496e-a629-2a9ec738c9ba-nmstate-lock\") pod \"nmstate-handler-zddxq\" (UID: \"364b73a8-4e46-496e-a629-2a9ec738c9ba\") " pod="openshift-nmstate/nmstate-handler-zddxq" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.310346 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/364b73a8-4e46-496e-a629-2a9ec738c9ba-ovs-socket\") pod \"nmstate-handler-zddxq\" (UID: \"364b73a8-4e46-496e-a629-2a9ec738c9ba\") " pod="openshift-nmstate/nmstate-handler-zddxq" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.310384 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/364b73a8-4e46-496e-a629-2a9ec738c9ba-nmstate-lock\") pod \"nmstate-handler-zddxq\" (UID: \"364b73a8-4e46-496e-a629-2a9ec738c9ba\") " pod="openshift-nmstate/nmstate-handler-zddxq" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.310410 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/364b73a8-4e46-496e-a629-2a9ec738c9ba-dbus-socket\") pod \"nmstate-handler-zddxq\" (UID: \"364b73a8-4e46-496e-a629-2a9ec738c9ba\") " pod="openshift-nmstate/nmstate-handler-zddxq" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.310463 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llwqc\" (UniqueName: \"kubernetes.io/projected/364b73a8-4e46-496e-a629-2a9ec738c9ba-kube-api-access-llwqc\") pod \"nmstate-handler-zddxq\" (UID: \"364b73a8-4e46-496e-a629-2a9ec738c9ba\") " pod="openshift-nmstate/nmstate-handler-zddxq" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.310523 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/364b73a8-4e46-496e-a629-2a9ec738c9ba-ovs-socket\") pod \"nmstate-handler-zddxq\" (UID: \"364b73a8-4e46-496e-a629-2a9ec738c9ba\") " pod="openshift-nmstate/nmstate-handler-zddxq" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.310707 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: 
\"kubernetes.io/host-path/364b73a8-4e46-496e-a629-2a9ec738c9ba-dbus-socket\") pod \"nmstate-handler-zddxq\" (UID: \"364b73a8-4e46-496e-a629-2a9ec738c9ba\") " pod="openshift-nmstate/nmstate-handler-zddxq" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.335838 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llwqc\" (UniqueName: \"kubernetes.io/projected/364b73a8-4e46-496e-a629-2a9ec738c9ba-kube-api-access-llwqc\") pod \"nmstate-handler-zddxq\" (UID: \"364b73a8-4e46-496e-a629-2a9ec738c9ba\") " pod="openshift-nmstate/nmstate-handler-zddxq" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.393106 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zflr5"] Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.394228 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zflr5" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.397277 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-jwgml" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.399291 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-ngvzb" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.399375 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.412978 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b85a505e-070c-425e-a78b-af7d0aed981f-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-zflr5\" (UID: \"b85a505e-070c-425e-a78b-af7d0aed981f\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zflr5" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.413139 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b85a505e-070c-425e-a78b-af7d0aed981f-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-zflr5\" (UID: \"b85a505e-070c-425e-a78b-af7d0aed981f\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zflr5" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.413403 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbnw9\" (UniqueName: \"kubernetes.io/projected/b85a505e-070c-425e-a78b-af7d0aed981f-kube-api-access-rbnw9\") pod \"nmstate-console-plugin-7fbb5f6569-zflr5\" (UID: \"b85a505e-070c-425e-a78b-af7d0aed981f\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zflr5" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.431089 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zflr5"] Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.431234 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.442894 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-zddxq" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.521818 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbnw9\" (UniqueName: \"kubernetes.io/projected/b85a505e-070c-425e-a78b-af7d0aed981f-kube-api-access-rbnw9\") pod \"nmstate-console-plugin-7fbb5f6569-zflr5\" (UID: \"b85a505e-070c-425e-a78b-af7d0aed981f\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zflr5" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.522216 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b85a505e-070c-425e-a78b-af7d0aed981f-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-zflr5\" (UID: \"b85a505e-070c-425e-a78b-af7d0aed981f\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zflr5" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.522313 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b85a505e-070c-425e-a78b-af7d0aed981f-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-zflr5\" (UID: \"b85a505e-070c-425e-a78b-af7d0aed981f\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zflr5" Dec 09 12:21:36 crc kubenswrapper[4970]: E1209 12:21:36.522526 4970 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Dec 09 12:21:36 crc kubenswrapper[4970]: E1209 12:21:36.522605 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b85a505e-070c-425e-a78b-af7d0aed981f-plugin-serving-cert podName:b85a505e-070c-425e-a78b-af7d0aed981f nodeName:}" failed. No retries permitted until 2025-12-09 12:21:37.022583601 +0000 UTC m=+909.583064652 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/b85a505e-070c-425e-a78b-af7d0aed981f-plugin-serving-cert") pod "nmstate-console-plugin-7fbb5f6569-zflr5" (UID: "b85a505e-070c-425e-a78b-af7d0aed981f") : secret "plugin-serving-cert" not found Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.524203 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b85a505e-070c-425e-a78b-af7d0aed981f-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-zflr5\" (UID: \"b85a505e-070c-425e-a78b-af7d0aed981f\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zflr5" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.596952 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbnw9\" (UniqueName: \"kubernetes.io/projected/b85a505e-070c-425e-a78b-af7d0aed981f-kube-api-access-rbnw9\") pod \"nmstate-console-plugin-7fbb5f6569-zflr5\" (UID: \"b85a505e-070c-425e-a78b-af7d0aed981f\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zflr5" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.633810 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-598d8f6f5c-75bbn"] Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.636210 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-598d8f6f5c-75bbn" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.672322 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-598d8f6f5c-75bbn"] Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.724762 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/74bf2616-c17d-4a12-89d4-416af30ff01a-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-pscqd\" (UID: \"74bf2616-c17d-4a12-89d4-416af30ff01a\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-pscqd" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.724817 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7013e9d-6527-41d3-a311-2f21b4961dfa-trusted-ca-bundle\") pod \"console-598d8f6f5c-75bbn\" (UID: \"a7013e9d-6527-41d3-a311-2f21b4961dfa\") " pod="openshift-console/console-598d8f6f5c-75bbn" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.724839 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4mjg\" (UniqueName: \"kubernetes.io/projected/a7013e9d-6527-41d3-a311-2f21b4961dfa-kube-api-access-z4mjg\") pod \"console-598d8f6f5c-75bbn\" (UID: \"a7013e9d-6527-41d3-a311-2f21b4961dfa\") " pod="openshift-console/console-598d8f6f5c-75bbn" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.724861 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a7013e9d-6527-41d3-a311-2f21b4961dfa-console-oauth-config\") pod \"console-598d8f6f5c-75bbn\" (UID: \"a7013e9d-6527-41d3-a311-2f21b4961dfa\") " pod="openshift-console/console-598d8f6f5c-75bbn" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.724887 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a7013e9d-6527-41d3-a311-2f21b4961dfa-service-ca\") pod \"console-598d8f6f5c-75bbn\" (UID: \"a7013e9d-6527-41d3-a311-2f21b4961dfa\") " pod="openshift-console/console-598d8f6f5c-75bbn" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.724908 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7013e9d-6527-41d3-a311-2f21b4961dfa-console-serving-cert\") pod \"console-598d8f6f5c-75bbn\" (UID: \"a7013e9d-6527-41d3-a311-2f21b4961dfa\") " pod="openshift-console/console-598d8f6f5c-75bbn" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.724934 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a7013e9d-6527-41d3-a311-2f21b4961dfa-console-config\") pod \"console-598d8f6f5c-75bbn\" (UID: \"a7013e9d-6527-41d3-a311-2f21b4961dfa\") " pod="openshift-console/console-598d8f6f5c-75bbn" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.724977 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a7013e9d-6527-41d3-a311-2f21b4961dfa-oauth-serving-cert\") pod \"console-598d8f6f5c-75bbn\" (UID: \"a7013e9d-6527-41d3-a311-2f21b4961dfa\") " pod="openshift-console/console-598d8f6f5c-75bbn" Dec 09 12:21:36 crc 
kubenswrapper[4970]: I1209 12:21:36.730753 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/74bf2616-c17d-4a12-89d4-416af30ff01a-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-pscqd\" (UID: \"74bf2616-c17d-4a12-89d4-416af30ff01a\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-pscqd" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.826509 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a7013e9d-6527-41d3-a311-2f21b4961dfa-service-ca\") pod \"console-598d8f6f5c-75bbn\" (UID: \"a7013e9d-6527-41d3-a311-2f21b4961dfa\") " pod="openshift-console/console-598d8f6f5c-75bbn" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.826772 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7013e9d-6527-41d3-a311-2f21b4961dfa-console-serving-cert\") pod \"console-598d8f6f5c-75bbn\" (UID: \"a7013e9d-6527-41d3-a311-2f21b4961dfa\") " pod="openshift-console/console-598d8f6f5c-75bbn" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.826815 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a7013e9d-6527-41d3-a311-2f21b4961dfa-console-config\") pod \"console-598d8f6f5c-75bbn\" (UID: \"a7013e9d-6527-41d3-a311-2f21b4961dfa\") " pod="openshift-console/console-598d8f6f5c-75bbn" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.826847 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a7013e9d-6527-41d3-a311-2f21b4961dfa-oauth-serving-cert\") pod \"console-598d8f6f5c-75bbn\" (UID: \"a7013e9d-6527-41d3-a311-2f21b4961dfa\") " pod="openshift-console/console-598d8f6f5c-75bbn" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.826982 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7013e9d-6527-41d3-a311-2f21b4961dfa-trusted-ca-bundle\") pod \"console-598d8f6f5c-75bbn\" (UID: \"a7013e9d-6527-41d3-a311-2f21b4961dfa\") " pod="openshift-console/console-598d8f6f5c-75bbn" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.827005 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4mjg\" (UniqueName: \"kubernetes.io/projected/a7013e9d-6527-41d3-a311-2f21b4961dfa-kube-api-access-z4mjg\") pod \"console-598d8f6f5c-75bbn\" (UID: \"a7013e9d-6527-41d3-a311-2f21b4961dfa\") " pod="openshift-console/console-598d8f6f5c-75bbn" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.827031 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a7013e9d-6527-41d3-a311-2f21b4961dfa-console-oauth-config\") pod \"console-598d8f6f5c-75bbn\" (UID: \"a7013e9d-6527-41d3-a311-2f21b4961dfa\") " pod="openshift-console/console-598d8f6f5c-75bbn" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.827470 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a7013e9d-6527-41d3-a311-2f21b4961dfa-service-ca\") pod \"console-598d8f6f5c-75bbn\" (UID: \"a7013e9d-6527-41d3-a311-2f21b4961dfa\") " pod="openshift-console/console-598d8f6f5c-75bbn" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 
12:21:36.827637 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a7013e9d-6527-41d3-a311-2f21b4961dfa-oauth-serving-cert\") pod \"console-598d8f6f5c-75bbn\" (UID: \"a7013e9d-6527-41d3-a311-2f21b4961dfa\") " pod="openshift-console/console-598d8f6f5c-75bbn" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.827650 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a7013e9d-6527-41d3-a311-2f21b4961dfa-console-config\") pod \"console-598d8f6f5c-75bbn\" (UID: \"a7013e9d-6527-41d3-a311-2f21b4961dfa\") " pod="openshift-console/console-598d8f6f5c-75bbn" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.828597 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7013e9d-6527-41d3-a311-2f21b4961dfa-trusted-ca-bundle\") pod \"console-598d8f6f5c-75bbn\" (UID: \"a7013e9d-6527-41d3-a311-2f21b4961dfa\") " pod="openshift-console/console-598d8f6f5c-75bbn" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.830482 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a7013e9d-6527-41d3-a311-2f21b4961dfa-console-oauth-config\") pod \"console-598d8f6f5c-75bbn\" (UID: \"a7013e9d-6527-41d3-a311-2f21b4961dfa\") " pod="openshift-console/console-598d8f6f5c-75bbn" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.830619 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7013e9d-6527-41d3-a311-2f21b4961dfa-console-serving-cert\") pod \"console-598d8f6f5c-75bbn\" (UID: \"a7013e9d-6527-41d3-a311-2f21b4961dfa\") " pod="openshift-console/console-598d8f6f5c-75bbn" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.842710 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4mjg\" (UniqueName: \"kubernetes.io/projected/a7013e9d-6527-41d3-a311-2f21b4961dfa-kube-api-access-z4mjg\") pod \"console-598d8f6f5c-75bbn\" (UID: \"a7013e9d-6527-41d3-a311-2f21b4961dfa\") " pod="openshift-console/console-598d8f6f5c-75bbn" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.977954 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-598d8f6f5c-75bbn" Dec 09 12:21:36 crc kubenswrapper[4970]: I1209 12:21:36.997989 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-jwgml"] Dec 09 12:21:37 crc kubenswrapper[4970]: W1209 12:21:37.007005 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cbc766b_5faa_43b1_ab55_5d25e23ee20e.slice/crio-975fb0a14f94407179f5857bc29d2caf4201943c3b1533a33d2619b9ff6f8aa4 WatchSource:0}: Error finding container 975fb0a14f94407179f5857bc29d2caf4201943c3b1533a33d2619b9ff6f8aa4: Status 404 returned error can't find the container with id 975fb0a14f94407179f5857bc29d2caf4201943c3b1533a33d2619b9ff6f8aa4 Dec 09 12:21:37 crc kubenswrapper[4970]: I1209 12:21:37.008503 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-pscqd" Dec 09 12:21:37 crc kubenswrapper[4970]: I1209 12:21:37.029694 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b85a505e-070c-425e-a78b-af7d0aed981f-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-zflr5\" (UID: \"b85a505e-070c-425e-a78b-af7d0aed981f\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zflr5" Dec 09 12:21:37 crc kubenswrapper[4970]: I1209 12:21:37.033799 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b85a505e-070c-425e-a78b-af7d0aed981f-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-zflr5\" (UID: \"b85a505e-070c-425e-a78b-af7d0aed981f\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zflr5" Dec 09 12:21:37 crc kubenswrapper[4970]: I1209 12:21:37.045485 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-jwgml" event={"ID":"4cbc766b-5faa-43b1-ab55-5d25e23ee20e","Type":"ContainerStarted","Data":"975fb0a14f94407179f5857bc29d2caf4201943c3b1533a33d2619b9ff6f8aa4"} Dec 09 12:21:37 crc kubenswrapper[4970]: I1209 12:21:37.046771 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-zddxq" event={"ID":"364b73a8-4e46-496e-a629-2a9ec738c9ba","Type":"ContainerStarted","Data":"43ec3287b8f1429c9d592a9393e09c3015466d439b6af32e78463b05f9ee5ee4"} Dec 09 12:21:37 crc kubenswrapper[4970]: I1209 12:21:37.310430 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zflr5" Dec 09 12:21:37 crc kubenswrapper[4970]: I1209 12:21:37.434791 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-598d8f6f5c-75bbn"] Dec 09 12:21:37 crc kubenswrapper[4970]: I1209 12:21:37.495417 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-pscqd"] Dec 09 12:21:37 crc kubenswrapper[4970]: W1209 12:21:37.504826 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74bf2616_c17d_4a12_89d4_416af30ff01a.slice/crio-4f8822c93561fffd8dacd39d3164d77e4f10f3e6e1965f778cb080bd4b17b26a WatchSource:0}: Error finding container 4f8822c93561fffd8dacd39d3164d77e4f10f3e6e1965f778cb080bd4b17b26a: Status 404 returned error can't find the container with id 4f8822c93561fffd8dacd39d3164d77e4f10f3e6e1965f778cb080bd4b17b26a Dec 09 12:21:37 crc kubenswrapper[4970]: I1209 12:21:37.737079 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zflr5"] Dec 09 12:21:37 crc kubenswrapper[4970]: W1209 12:21:37.739742 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb85a505e_070c_425e_a78b_af7d0aed981f.slice/crio-9dabf45a336662f33f13227aa79d69769dd45b2bca667482c5bee472c4d102dc WatchSource:0}: Error finding container 9dabf45a336662f33f13227aa79d69769dd45b2bca667482c5bee472c4d102dc: Status 404 returned error can't find the container with id 9dabf45a336662f33f13227aa79d69769dd45b2bca667482c5bee472c4d102dc Dec 09 12:21:38 crc kubenswrapper[4970]: I1209 12:21:38.066225 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-pscqd" 
event={"ID":"74bf2616-c17d-4a12-89d4-416af30ff01a","Type":"ContainerStarted","Data":"4f8822c93561fffd8dacd39d3164d77e4f10f3e6e1965f778cb080bd4b17b26a"} Dec 09 12:21:38 crc kubenswrapper[4970]: I1209 12:21:38.067831 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-598d8f6f5c-75bbn" event={"ID":"a7013e9d-6527-41d3-a311-2f21b4961dfa","Type":"ContainerStarted","Data":"27df706cc720744990802816011bf9ab2c90f3299d7f2bfe7080a5342e6295d7"} Dec 09 12:21:38 crc kubenswrapper[4970]: I1209 12:21:38.067870 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-598d8f6f5c-75bbn" event={"ID":"a7013e9d-6527-41d3-a311-2f21b4961dfa","Type":"ContainerStarted","Data":"2febd0765eeed11c767d12f8a91e859eae7fac3940ad113f56d5594163e93f10"} Dec 09 12:21:38 crc kubenswrapper[4970]: I1209 12:21:38.070105 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zflr5" event={"ID":"b85a505e-070c-425e-a78b-af7d0aed981f","Type":"ContainerStarted","Data":"9dabf45a336662f33f13227aa79d69769dd45b2bca667482c5bee472c4d102dc"} Dec 09 12:21:38 crc kubenswrapper[4970]: I1209 12:21:38.091799 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-598d8f6f5c-75bbn" podStartSLOduration=2.091776047 podStartE2EDuration="2.091776047s" podCreationTimestamp="2025-12-09 12:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:21:38.086337982 +0000 UTC m=+910.646819033" watchObservedRunningTime="2025-12-09 12:21:38.091776047 +0000 UTC m=+910.652257088" Dec 09 12:21:41 crc kubenswrapper[4970]: I1209 12:21:41.108284 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-zddxq" event={"ID":"364b73a8-4e46-496e-a629-2a9ec738c9ba","Type":"ContainerStarted","Data":"f5be39addb369bac7712ed0da84867d5707d0ba05a4450998a4f4f4ef2cc22b1"} Dec 09 12:21:41 crc kubenswrapper[4970]: I1209 12:21:41.108867 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-zddxq" Dec 09 12:21:41 crc kubenswrapper[4970]: I1209 12:21:41.119532 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-pscqd" event={"ID":"74bf2616-c17d-4a12-89d4-416af30ff01a","Type":"ContainerStarted","Data":"0272caa29b85e95e177c5509285c0e7f8f080b595beadc10ae317eb71e986371"} Dec 09 12:21:41 crc kubenswrapper[4970]: I1209 12:21:41.119682 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-pscqd" Dec 09 12:21:41 crc kubenswrapper[4970]: I1209 12:21:41.121223 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-jwgml" event={"ID":"4cbc766b-5faa-43b1-ab55-5d25e23ee20e","Type":"ContainerStarted","Data":"29f994fc2439bf0f64dab77fe791a0c36f8c908fbb77c2ecbddaf451db465659"} Dec 09 12:21:41 crc kubenswrapper[4970]: I1209 12:21:41.127080 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-zddxq" podStartSLOduration=1.7780510870000001 podStartE2EDuration="5.127058092s" podCreationTimestamp="2025-12-09 12:21:36 +0000 UTC" firstStartedPulling="2025-12-09 12:21:36.605474063 +0000 UTC m=+909.165955114" lastFinishedPulling="2025-12-09 12:21:39.954481058 +0000 UTC m=+912.514962119" observedRunningTime="2025-12-09 12:21:41.123956848 
+0000 UTC m=+913.684437909" watchObservedRunningTime="2025-12-09 12:21:41.127058092 +0000 UTC m=+913.687539183" Dec 09 12:21:41 crc kubenswrapper[4970]: I1209 12:21:41.151984 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-pscqd" podStartSLOduration=2.706442108 podStartE2EDuration="5.151961803s" podCreationTimestamp="2025-12-09 12:21:36 +0000 UTC" firstStartedPulling="2025-12-09 12:21:37.508895942 +0000 UTC m=+910.069376993" lastFinishedPulling="2025-12-09 12:21:39.954415617 +0000 UTC m=+912.514896688" observedRunningTime="2025-12-09 12:21:41.141726797 +0000 UTC m=+913.702207888" watchObservedRunningTime="2025-12-09 12:21:41.151961803 +0000 UTC m=+913.712442864" Dec 09 12:21:42 crc kubenswrapper[4970]: I1209 12:21:42.129597 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zflr5" event={"ID":"b85a505e-070c-425e-a78b-af7d0aed981f","Type":"ContainerStarted","Data":"64cca8f7eccf4de4f4a237808882a466ba1878e19bb69fc2be17eb4f5a2298d5"} Dec 09 12:21:42 crc kubenswrapper[4970]: I1209 12:21:42.152539 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zflr5" podStartSLOduration=2.797197116 podStartE2EDuration="6.152519452s" podCreationTimestamp="2025-12-09 12:21:36 +0000 UTC" firstStartedPulling="2025-12-09 12:21:37.741239112 +0000 UTC m=+910.301720163" lastFinishedPulling="2025-12-09 12:21:41.096561458 +0000 UTC m=+913.657042499" observedRunningTime="2025-12-09 12:21:42.151011041 +0000 UTC m=+914.711492092" watchObservedRunningTime="2025-12-09 12:21:42.152519452 +0000 UTC m=+914.713000503" Dec 09 12:21:43 crc kubenswrapper[4970]: I1209 12:21:43.139199 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-jwgml" event={"ID":"4cbc766b-5faa-43b1-ab55-5d25e23ee20e","Type":"ContainerStarted","Data":"be390cc1a1bfbd742d5639f7aeeb3c5e03f00d3efc210634bb9bb83b9245d297"} Dec 09 12:21:43 crc kubenswrapper[4970]: I1209 12:21:43.156919 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-jwgml" podStartSLOduration=1.622573417 podStartE2EDuration="7.156893023s" podCreationTimestamp="2025-12-09 12:21:36 +0000 UTC" firstStartedPulling="2025-12-09 12:21:37.018823744 +0000 UTC m=+909.579304795" lastFinishedPulling="2025-12-09 12:21:42.55314335 +0000 UTC m=+915.113624401" observedRunningTime="2025-12-09 12:21:43.152496004 +0000 UTC m=+915.712977075" watchObservedRunningTime="2025-12-09 12:21:43.156893023 +0000 UTC m=+915.717374094" Dec 09 12:21:46 crc kubenswrapper[4970]: I1209 12:21:46.011362 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:21:46 crc kubenswrapper[4970]: I1209 12:21:46.012757 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:21:46 crc kubenswrapper[4970]: I1209 12:21:46.465426 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-nmstate/nmstate-handler-zddxq" Dec 09 12:21:46 crc kubenswrapper[4970]: I1209 12:21:46.978643 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-598d8f6f5c-75bbn" Dec 09 12:21:46 crc kubenswrapper[4970]: I1209 12:21:46.978719 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-598d8f6f5c-75bbn" Dec 09 12:21:46 crc kubenswrapper[4970]: I1209 12:21:46.983401 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-598d8f6f5c-75bbn" Dec 09 12:21:47 crc kubenswrapper[4970]: I1209 12:21:47.171362 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-598d8f6f5c-75bbn" Dec 09 12:21:47 crc kubenswrapper[4970]: I1209 12:21:47.241738 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-67d97ddd48-7hj75"] Dec 09 12:21:54 crc kubenswrapper[4970]: I1209 12:21:54.229179 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qx2sz"] Dec 09 12:21:54 crc kubenswrapper[4970]: I1209 12:21:54.231117 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qx2sz" Dec 09 12:21:54 crc kubenswrapper[4970]: I1209 12:21:54.239736 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qx2sz"] Dec 09 12:21:54 crc kubenswrapper[4970]: I1209 12:21:54.313375 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7rj5\" (UniqueName: \"kubernetes.io/projected/42d1b09d-3a66-42bc-97b2-266311a384c7-kube-api-access-p7rj5\") pod \"community-operators-qx2sz\" (UID: \"42d1b09d-3a66-42bc-97b2-266311a384c7\") " pod="openshift-marketplace/community-operators-qx2sz" Dec 09 12:21:54 crc kubenswrapper[4970]: I1209 12:21:54.313423 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42d1b09d-3a66-42bc-97b2-266311a384c7-catalog-content\") pod \"community-operators-qx2sz\" (UID: \"42d1b09d-3a66-42bc-97b2-266311a384c7\") " pod="openshift-marketplace/community-operators-qx2sz" Dec 09 12:21:54 crc kubenswrapper[4970]: I1209 12:21:54.313603 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42d1b09d-3a66-42bc-97b2-266311a384c7-utilities\") pod \"community-operators-qx2sz\" (UID: \"42d1b09d-3a66-42bc-97b2-266311a384c7\") " pod="openshift-marketplace/community-operators-qx2sz" Dec 09 12:21:54 crc kubenswrapper[4970]: I1209 12:21:54.415202 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7rj5\" (UniqueName: \"kubernetes.io/projected/42d1b09d-3a66-42bc-97b2-266311a384c7-kube-api-access-p7rj5\") pod \"community-operators-qx2sz\" (UID: \"42d1b09d-3a66-42bc-97b2-266311a384c7\") " pod="openshift-marketplace/community-operators-qx2sz" Dec 09 12:21:54 crc kubenswrapper[4970]: I1209 12:21:54.415305 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42d1b09d-3a66-42bc-97b2-266311a384c7-catalog-content\") pod \"community-operators-qx2sz\" (UID: \"42d1b09d-3a66-42bc-97b2-266311a384c7\") " 
pod="openshift-marketplace/community-operators-qx2sz" Dec 09 12:21:54 crc kubenswrapper[4970]: I1209 12:21:54.415766 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42d1b09d-3a66-42bc-97b2-266311a384c7-catalog-content\") pod \"community-operators-qx2sz\" (UID: \"42d1b09d-3a66-42bc-97b2-266311a384c7\") " pod="openshift-marketplace/community-operators-qx2sz" Dec 09 12:21:54 crc kubenswrapper[4970]: I1209 12:21:54.415890 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42d1b09d-3a66-42bc-97b2-266311a384c7-utilities\") pod \"community-operators-qx2sz\" (UID: \"42d1b09d-3a66-42bc-97b2-266311a384c7\") " pod="openshift-marketplace/community-operators-qx2sz" Dec 09 12:21:54 crc kubenswrapper[4970]: I1209 12:21:54.416145 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42d1b09d-3a66-42bc-97b2-266311a384c7-utilities\") pod \"community-operators-qx2sz\" (UID: \"42d1b09d-3a66-42bc-97b2-266311a384c7\") " pod="openshift-marketplace/community-operators-qx2sz" Dec 09 12:21:54 crc kubenswrapper[4970]: I1209 12:21:54.435522 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7rj5\" (UniqueName: \"kubernetes.io/projected/42d1b09d-3a66-42bc-97b2-266311a384c7-kube-api-access-p7rj5\") pod \"community-operators-qx2sz\" (UID: \"42d1b09d-3a66-42bc-97b2-266311a384c7\") " pod="openshift-marketplace/community-operators-qx2sz" Dec 09 12:21:54 crc kubenswrapper[4970]: I1209 12:21:54.569650 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qx2sz" Dec 09 12:21:54 crc kubenswrapper[4970]: I1209 12:21:54.873834 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qx2sz"] Dec 09 12:21:55 crc kubenswrapper[4970]: I1209 12:21:55.229641 4970 generic.go:334] "Generic (PLEG): container finished" podID="42d1b09d-3a66-42bc-97b2-266311a384c7" containerID="a7a7ac505eeb95474c392b9bf64ee5dbb0260c2b1efab00cc26f1c5440d20d49" exitCode=0 Dec 09 12:21:55 crc kubenswrapper[4970]: I1209 12:21:55.229694 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qx2sz" event={"ID":"42d1b09d-3a66-42bc-97b2-266311a384c7","Type":"ContainerDied","Data":"a7a7ac505eeb95474c392b9bf64ee5dbb0260c2b1efab00cc26f1c5440d20d49"} Dec 09 12:21:55 crc kubenswrapper[4970]: I1209 12:21:55.229726 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qx2sz" event={"ID":"42d1b09d-3a66-42bc-97b2-266311a384c7","Type":"ContainerStarted","Data":"87dd453580c30a2d24bcbfda71fbe86ae572b182bf6b6b5c7bc6beac0f9669b5"} Dec 09 12:21:55 crc kubenswrapper[4970]: I1209 12:21:55.231485 4970 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 12:21:56 crc kubenswrapper[4970]: I1209 12:21:56.255314 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qx2sz" event={"ID":"42d1b09d-3a66-42bc-97b2-266311a384c7","Type":"ContainerStarted","Data":"ea79653a2321b18f41e4ce1111d64798da5f396379abaeaf0f996b55119cf1da"} Dec 09 12:21:57 crc kubenswrapper[4970]: I1209 12:21:57.017933 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-pscqd" 
Dec 09 12:21:57 crc kubenswrapper[4970]: I1209 12:21:57.264139 4970 generic.go:334] "Generic (PLEG): container finished" podID="42d1b09d-3a66-42bc-97b2-266311a384c7" containerID="ea79653a2321b18f41e4ce1111d64798da5f396379abaeaf0f996b55119cf1da" exitCode=0 Dec 09 12:21:57 crc kubenswrapper[4970]: I1209 12:21:57.264192 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qx2sz" event={"ID":"42d1b09d-3a66-42bc-97b2-266311a384c7","Type":"ContainerDied","Data":"ea79653a2321b18f41e4ce1111d64798da5f396379abaeaf0f996b55119cf1da"} Dec 09 12:21:58 crc kubenswrapper[4970]: I1209 12:21:58.283908 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qx2sz" event={"ID":"42d1b09d-3a66-42bc-97b2-266311a384c7","Type":"ContainerStarted","Data":"b57a8e6ec007f808ca5ea58c2860f936ad6710c4e1445f19834d13db8dee9f8e"} Dec 09 12:22:04 crc kubenswrapper[4970]: I1209 12:22:04.569795 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qx2sz" Dec 09 12:22:04 crc kubenswrapper[4970]: I1209 12:22:04.570654 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qx2sz" Dec 09 12:22:04 crc kubenswrapper[4970]: I1209 12:22:04.650783 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qx2sz" Dec 09 12:22:04 crc kubenswrapper[4970]: I1209 12:22:04.685810 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qx2sz" podStartSLOduration=8.275452944 podStartE2EDuration="10.68578778s" podCreationTimestamp="2025-12-09 12:21:54 +0000 UTC" firstStartedPulling="2025-12-09 12:21:55.231209397 +0000 UTC m=+927.791690448" lastFinishedPulling="2025-12-09 12:21:57.641544233 +0000 UTC m=+930.202025284" observedRunningTime="2025-12-09 12:21:58.307235539 +0000 UTC m=+930.867716590" watchObservedRunningTime="2025-12-09 12:22:04.68578778 +0000 UTC m=+937.246268841" Dec 09 12:22:05 crc kubenswrapper[4970]: I1209 12:22:05.431294 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qx2sz" Dec 09 12:22:05 crc kubenswrapper[4970]: I1209 12:22:05.485767 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qx2sz"] Dec 09 12:22:07 crc kubenswrapper[4970]: I1209 12:22:07.386304 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qx2sz" podUID="42d1b09d-3a66-42bc-97b2-266311a384c7" containerName="registry-server" containerID="cri-o://b57a8e6ec007f808ca5ea58c2860f936ad6710c4e1445f19834d13db8dee9f8e" gracePeriod=2 Dec 09 12:22:07 crc kubenswrapper[4970]: I1209 12:22:07.975341 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qx2sz" Dec 09 12:22:08 crc kubenswrapper[4970]: I1209 12:22:08.142855 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7rj5\" (UniqueName: \"kubernetes.io/projected/42d1b09d-3a66-42bc-97b2-266311a384c7-kube-api-access-p7rj5\") pod \"42d1b09d-3a66-42bc-97b2-266311a384c7\" (UID: \"42d1b09d-3a66-42bc-97b2-266311a384c7\") " Dec 09 12:22:08 crc kubenswrapper[4970]: I1209 12:22:08.143216 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42d1b09d-3a66-42bc-97b2-266311a384c7-catalog-content\") pod \"42d1b09d-3a66-42bc-97b2-266311a384c7\" (UID: \"42d1b09d-3a66-42bc-97b2-266311a384c7\") " Dec 09 12:22:08 crc kubenswrapper[4970]: I1209 12:22:08.143341 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42d1b09d-3a66-42bc-97b2-266311a384c7-utilities\") pod \"42d1b09d-3a66-42bc-97b2-266311a384c7\" (UID: \"42d1b09d-3a66-42bc-97b2-266311a384c7\") " Dec 09 12:22:08 crc kubenswrapper[4970]: I1209 12:22:08.144281 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42d1b09d-3a66-42bc-97b2-266311a384c7-utilities" (OuterVolumeSpecName: "utilities") pod "42d1b09d-3a66-42bc-97b2-266311a384c7" (UID: "42d1b09d-3a66-42bc-97b2-266311a384c7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:22:08 crc kubenswrapper[4970]: I1209 12:22:08.148379 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42d1b09d-3a66-42bc-97b2-266311a384c7-kube-api-access-p7rj5" (OuterVolumeSpecName: "kube-api-access-p7rj5") pod "42d1b09d-3a66-42bc-97b2-266311a384c7" (UID: "42d1b09d-3a66-42bc-97b2-266311a384c7"). InnerVolumeSpecName "kube-api-access-p7rj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:22:08 crc kubenswrapper[4970]: I1209 12:22:08.213798 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42d1b09d-3a66-42bc-97b2-266311a384c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42d1b09d-3a66-42bc-97b2-266311a384c7" (UID: "42d1b09d-3a66-42bc-97b2-266311a384c7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:22:08 crc kubenswrapper[4970]: I1209 12:22:08.244663 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7rj5\" (UniqueName: \"kubernetes.io/projected/42d1b09d-3a66-42bc-97b2-266311a384c7-kube-api-access-p7rj5\") on node \"crc\" DevicePath \"\"" Dec 09 12:22:08 crc kubenswrapper[4970]: I1209 12:22:08.244699 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42d1b09d-3a66-42bc-97b2-266311a384c7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:22:08 crc kubenswrapper[4970]: I1209 12:22:08.244711 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42d1b09d-3a66-42bc-97b2-266311a384c7-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:22:08 crc kubenswrapper[4970]: I1209 12:22:08.399609 4970 generic.go:334] "Generic (PLEG): container finished" podID="42d1b09d-3a66-42bc-97b2-266311a384c7" containerID="b57a8e6ec007f808ca5ea58c2860f936ad6710c4e1445f19834d13db8dee9f8e" exitCode=0 Dec 09 12:22:08 crc kubenswrapper[4970]: I1209 12:22:08.399650 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qx2sz" event={"ID":"42d1b09d-3a66-42bc-97b2-266311a384c7","Type":"ContainerDied","Data":"b57a8e6ec007f808ca5ea58c2860f936ad6710c4e1445f19834d13db8dee9f8e"} Dec 09 12:22:08 crc kubenswrapper[4970]: I1209 12:22:08.399675 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qx2sz" Dec 09 12:22:08 crc kubenswrapper[4970]: I1209 12:22:08.399740 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qx2sz" event={"ID":"42d1b09d-3a66-42bc-97b2-266311a384c7","Type":"ContainerDied","Data":"87dd453580c30a2d24bcbfda71fbe86ae572b182bf6b6b5c7bc6beac0f9669b5"} Dec 09 12:22:08 crc kubenswrapper[4970]: I1209 12:22:08.399759 4970 scope.go:117] "RemoveContainer" containerID="b57a8e6ec007f808ca5ea58c2860f936ad6710c4e1445f19834d13db8dee9f8e" Dec 09 12:22:08 crc kubenswrapper[4970]: I1209 12:22:08.419331 4970 scope.go:117] "RemoveContainer" containerID="ea79653a2321b18f41e4ce1111d64798da5f396379abaeaf0f996b55119cf1da" Dec 09 12:22:08 crc kubenswrapper[4970]: I1209 12:22:08.437398 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qx2sz"] Dec 09 12:22:08 crc kubenswrapper[4970]: I1209 12:22:08.444467 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qx2sz"] Dec 09 12:22:08 crc kubenswrapper[4970]: I1209 12:22:08.455573 4970 scope.go:117] "RemoveContainer" containerID="a7a7ac505eeb95474c392b9bf64ee5dbb0260c2b1efab00cc26f1c5440d20d49" Dec 09 12:22:08 crc kubenswrapper[4970]: I1209 12:22:08.482451 4970 scope.go:117] "RemoveContainer" containerID="b57a8e6ec007f808ca5ea58c2860f936ad6710c4e1445f19834d13db8dee9f8e" Dec 09 12:22:08 crc kubenswrapper[4970]: E1209 12:22:08.482782 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b57a8e6ec007f808ca5ea58c2860f936ad6710c4e1445f19834d13db8dee9f8e\": container with ID starting with b57a8e6ec007f808ca5ea58c2860f936ad6710c4e1445f19834d13db8dee9f8e not found: ID does not exist" containerID="b57a8e6ec007f808ca5ea58c2860f936ad6710c4e1445f19834d13db8dee9f8e" Dec 09 12:22:08 crc kubenswrapper[4970]: I1209 12:22:08.482813 
Dec 09 12:22:08 crc kubenswrapper[4970]: I1209 12:22:08.482813 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b57a8e6ec007f808ca5ea58c2860f936ad6710c4e1445f19834d13db8dee9f8e"} err="failed to get container status \"b57a8e6ec007f808ca5ea58c2860f936ad6710c4e1445f19834d13db8dee9f8e\": rpc error: code = NotFound desc = could not find container \"b57a8e6ec007f808ca5ea58c2860f936ad6710c4e1445f19834d13db8dee9f8e\": container with ID starting with b57a8e6ec007f808ca5ea58c2860f936ad6710c4e1445f19834d13db8dee9f8e not found: ID does not exist"
Dec 09 12:22:08 crc kubenswrapper[4970]: I1209 12:22:08.482836 4970 scope.go:117] "RemoveContainer" containerID="ea79653a2321b18f41e4ce1111d64798da5f396379abaeaf0f996b55119cf1da"
Dec 09 12:22:08 crc kubenswrapper[4970]: E1209 12:22:08.483056 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea79653a2321b18f41e4ce1111d64798da5f396379abaeaf0f996b55119cf1da\": container with ID starting with ea79653a2321b18f41e4ce1111d64798da5f396379abaeaf0f996b55119cf1da not found: ID does not exist" containerID="ea79653a2321b18f41e4ce1111d64798da5f396379abaeaf0f996b55119cf1da"
Dec 09 12:22:08 crc kubenswrapper[4970]: I1209 12:22:08.483077 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea79653a2321b18f41e4ce1111d64798da5f396379abaeaf0f996b55119cf1da"} err="failed to get container status \"ea79653a2321b18f41e4ce1111d64798da5f396379abaeaf0f996b55119cf1da\": rpc error: code = NotFound desc = could not find container \"ea79653a2321b18f41e4ce1111d64798da5f396379abaeaf0f996b55119cf1da\": container with ID starting with ea79653a2321b18f41e4ce1111d64798da5f396379abaeaf0f996b55119cf1da not found: ID does not exist"
Dec 09 12:22:08 crc kubenswrapper[4970]: I1209 12:22:08.483092 4970 scope.go:117] "RemoveContainer" containerID="a7a7ac505eeb95474c392b9bf64ee5dbb0260c2b1efab00cc26f1c5440d20d49"
Dec 09 12:22:08 crc kubenswrapper[4970]: E1209 12:22:08.483420 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7a7ac505eeb95474c392b9bf64ee5dbb0260c2b1efab00cc26f1c5440d20d49\": container with ID starting with a7a7ac505eeb95474c392b9bf64ee5dbb0260c2b1efab00cc26f1c5440d20d49 not found: ID does not exist" containerID="a7a7ac505eeb95474c392b9bf64ee5dbb0260c2b1efab00cc26f1c5440d20d49"
Dec 09 12:22:08 crc kubenswrapper[4970]: I1209 12:22:08.483443 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7a7ac505eeb95474c392b9bf64ee5dbb0260c2b1efab00cc26f1c5440d20d49"} err="failed to get container status \"a7a7ac505eeb95474c392b9bf64ee5dbb0260c2b1efab00cc26f1c5440d20d49\": rpc error: code = NotFound desc = could not find container \"a7a7ac505eeb95474c392b9bf64ee5dbb0260c2b1efab00cc26f1c5440d20d49\": container with ID starting with a7a7ac505eeb95474c392b9bf64ee5dbb0260c2b1efab00cc26f1c5440d20d49 not found: ID does not exist"
Dec 09 12:22:09 crc kubenswrapper[4970]: I1209 12:22:09.821750 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42d1b09d-3a66-42bc-97b2-266311a384c7" path="/var/lib/kubelet/pods/42d1b09d-3a66-42bc-97b2-266311a384c7/volumes"
Dec 09 12:22:12 crc kubenswrapper[4970]: I1209 12:22:12.307399 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-67d97ddd48-7hj75" podUID="0a80cf01-048b-4637-b6a4-eb88d9e6ecae" containerName="console" 
containerID="cri-o://5d9f29e2bbc30ea25f701ab62a7ec1761148162c9022cfdb15d8755174618774" gracePeriod=15 Dec 09 12:22:12 crc kubenswrapper[4970]: I1209 12:22:12.433026 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-67d97ddd48-7hj75_0a80cf01-048b-4637-b6a4-eb88d9e6ecae/console/0.log" Dec 09 12:22:12 crc kubenswrapper[4970]: I1209 12:22:12.433272 4970 generic.go:334] "Generic (PLEG): container finished" podID="0a80cf01-048b-4637-b6a4-eb88d9e6ecae" containerID="5d9f29e2bbc30ea25f701ab62a7ec1761148162c9022cfdb15d8755174618774" exitCode=2 Dec 09 12:22:12 crc kubenswrapper[4970]: I1209 12:22:12.433300 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67d97ddd48-7hj75" event={"ID":"0a80cf01-048b-4637-b6a4-eb88d9e6ecae","Type":"ContainerDied","Data":"5d9f29e2bbc30ea25f701ab62a7ec1761148162c9022cfdb15d8755174618774"} Dec 09 12:22:12 crc kubenswrapper[4970]: I1209 12:22:12.760740 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-67d97ddd48-7hj75_0a80cf01-048b-4637-b6a4-eb88d9e6ecae/console/0.log" Dec 09 12:22:12 crc kubenswrapper[4970]: I1209 12:22:12.760996 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-67d97ddd48-7hj75" Dec 09 12:22:12 crc kubenswrapper[4970]: I1209 12:22:12.936547 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rlgd\" (UniqueName: \"kubernetes.io/projected/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-kube-api-access-9rlgd\") pod \"0a80cf01-048b-4637-b6a4-eb88d9e6ecae\" (UID: \"0a80cf01-048b-4637-b6a4-eb88d9e6ecae\") " Dec 09 12:22:12 crc kubenswrapper[4970]: I1209 12:22:12.936657 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-trusted-ca-bundle\") pod \"0a80cf01-048b-4637-b6a4-eb88d9e6ecae\" (UID: \"0a80cf01-048b-4637-b6a4-eb88d9e6ecae\") " Dec 09 12:22:12 crc kubenswrapper[4970]: I1209 12:22:12.936694 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-service-ca\") pod \"0a80cf01-048b-4637-b6a4-eb88d9e6ecae\" (UID: \"0a80cf01-048b-4637-b6a4-eb88d9e6ecae\") " Dec 09 12:22:12 crc kubenswrapper[4970]: I1209 12:22:12.936716 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-console-serving-cert\") pod \"0a80cf01-048b-4637-b6a4-eb88d9e6ecae\" (UID: \"0a80cf01-048b-4637-b6a4-eb88d9e6ecae\") " Dec 09 12:22:12 crc kubenswrapper[4970]: I1209 12:22:12.936761 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-console-config\") pod \"0a80cf01-048b-4637-b6a4-eb88d9e6ecae\" (UID: \"0a80cf01-048b-4637-b6a4-eb88d9e6ecae\") " Dec 09 12:22:12 crc kubenswrapper[4970]: I1209 12:22:12.936783 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-console-oauth-config\") pod \"0a80cf01-048b-4637-b6a4-eb88d9e6ecae\" (UID: \"0a80cf01-048b-4637-b6a4-eb88d9e6ecae\") " Dec 09 12:22:12 crc kubenswrapper[4970]: I1209 12:22:12.936833 4970 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-oauth-serving-cert\") pod \"0a80cf01-048b-4637-b6a4-eb88d9e6ecae\" (UID: \"0a80cf01-048b-4637-b6a4-eb88d9e6ecae\") " Dec 09 12:22:12 crc kubenswrapper[4970]: I1209 12:22:12.937826 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "0a80cf01-048b-4637-b6a4-eb88d9e6ecae" (UID: "0a80cf01-048b-4637-b6a4-eb88d9e6ecae"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:22:12 crc kubenswrapper[4970]: I1209 12:22:12.937938 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "0a80cf01-048b-4637-b6a4-eb88d9e6ecae" (UID: "0a80cf01-048b-4637-b6a4-eb88d9e6ecae"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:22:12 crc kubenswrapper[4970]: I1209 12:22:12.938070 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-console-config" (OuterVolumeSpecName: "console-config") pod "0a80cf01-048b-4637-b6a4-eb88d9e6ecae" (UID: "0a80cf01-048b-4637-b6a4-eb88d9e6ecae"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:22:12 crc kubenswrapper[4970]: I1209 12:22:12.938300 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-service-ca" (OuterVolumeSpecName: "service-ca") pod "0a80cf01-048b-4637-b6a4-eb88d9e6ecae" (UID: "0a80cf01-048b-4637-b6a4-eb88d9e6ecae"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:22:12 crc kubenswrapper[4970]: I1209 12:22:12.941919 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "0a80cf01-048b-4637-b6a4-eb88d9e6ecae" (UID: "0a80cf01-048b-4637-b6a4-eb88d9e6ecae"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:22:12 crc kubenswrapper[4970]: I1209 12:22:12.950385 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "0a80cf01-048b-4637-b6a4-eb88d9e6ecae" (UID: "0a80cf01-048b-4637-b6a4-eb88d9e6ecae"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:22:12 crc kubenswrapper[4970]: I1209 12:22:12.951518 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-kube-api-access-9rlgd" (OuterVolumeSpecName: "kube-api-access-9rlgd") pod "0a80cf01-048b-4637-b6a4-eb88d9e6ecae" (UID: "0a80cf01-048b-4637-b6a4-eb88d9e6ecae"). InnerVolumeSpecName "kube-api-access-9rlgd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:22:13 crc kubenswrapper[4970]: I1209 12:22:13.039037 4970 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:22:13 crc kubenswrapper[4970]: I1209 12:22:13.039084 4970 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:22:13 crc kubenswrapper[4970]: I1209 12:22:13.039102 4970 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-console-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:22:13 crc kubenswrapper[4970]: I1209 12:22:13.039116 4970 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:22:13 crc kubenswrapper[4970]: I1209 12:22:13.039137 4970 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:22:13 crc kubenswrapper[4970]: I1209 12:22:13.039148 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rlgd\" (UniqueName: \"kubernetes.io/projected/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-kube-api-access-9rlgd\") on node \"crc\" DevicePath \"\"" Dec 09 12:22:13 crc kubenswrapper[4970]: I1209 12:22:13.039158 4970 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a80cf01-048b-4637-b6a4-eb88d9e6ecae-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:22:13 crc kubenswrapper[4970]: I1209 12:22:13.443859 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-67d97ddd48-7hj75_0a80cf01-048b-4637-b6a4-eb88d9e6ecae/console/0.log" Dec 09 12:22:13 crc kubenswrapper[4970]: I1209 12:22:13.444166 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67d97ddd48-7hj75" event={"ID":"0a80cf01-048b-4637-b6a4-eb88d9e6ecae","Type":"ContainerDied","Data":"6fcf01f2cf17fa92af031c5788fc173e96704ca38cd91d554b456a23f4ae0c85"} Dec 09 12:22:13 crc kubenswrapper[4970]: I1209 12:22:13.444220 4970 scope.go:117] "RemoveContainer" containerID="5d9f29e2bbc30ea25f701ab62a7ec1761148162c9022cfdb15d8755174618774" Dec 09 12:22:13 crc kubenswrapper[4970]: I1209 12:22:13.444308 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-67d97ddd48-7hj75" Dec 09 12:22:13 crc kubenswrapper[4970]: I1209 12:22:13.493570 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-67d97ddd48-7hj75"] Dec 09 12:22:13 crc kubenswrapper[4970]: I1209 12:22:13.497145 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-67d97ddd48-7hj75"] Dec 09 12:22:13 crc kubenswrapper[4970]: I1209 12:22:13.823813 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a80cf01-048b-4637-b6a4-eb88d9e6ecae" path="/var/lib/kubelet/pods/0a80cf01-048b-4637-b6a4-eb88d9e6ecae/volumes" Dec 09 12:22:15 crc kubenswrapper[4970]: I1209 12:22:15.861642 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83v8kt5"] Dec 09 12:22:15 crc kubenswrapper[4970]: E1209 12:22:15.862539 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42d1b09d-3a66-42bc-97b2-266311a384c7" containerName="extract-content" Dec 09 12:22:15 crc kubenswrapper[4970]: I1209 12:22:15.862556 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="42d1b09d-3a66-42bc-97b2-266311a384c7" containerName="extract-content" Dec 09 12:22:15 crc kubenswrapper[4970]: E1209 12:22:15.862587 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42d1b09d-3a66-42bc-97b2-266311a384c7" containerName="registry-server" Dec 09 12:22:15 crc kubenswrapper[4970]: I1209 12:22:15.862596 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="42d1b09d-3a66-42bc-97b2-266311a384c7" containerName="registry-server" Dec 09 12:22:15 crc kubenswrapper[4970]: E1209 12:22:15.862610 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a80cf01-048b-4637-b6a4-eb88d9e6ecae" containerName="console" Dec 09 12:22:15 crc kubenswrapper[4970]: I1209 12:22:15.862618 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a80cf01-048b-4637-b6a4-eb88d9e6ecae" containerName="console" Dec 09 12:22:15 crc kubenswrapper[4970]: E1209 12:22:15.862627 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42d1b09d-3a66-42bc-97b2-266311a384c7" containerName="extract-utilities" Dec 09 12:22:15 crc kubenswrapper[4970]: I1209 12:22:15.862635 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="42d1b09d-3a66-42bc-97b2-266311a384c7" containerName="extract-utilities" Dec 09 12:22:15 crc kubenswrapper[4970]: I1209 12:22:15.862812 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="42d1b09d-3a66-42bc-97b2-266311a384c7" containerName="registry-server" Dec 09 12:22:15 crc kubenswrapper[4970]: I1209 12:22:15.862824 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a80cf01-048b-4637-b6a4-eb88d9e6ecae" containerName="console" Dec 09 12:22:15 crc kubenswrapper[4970]: I1209 12:22:15.864271 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83v8kt5" Dec 09 12:22:15 crc kubenswrapper[4970]: I1209 12:22:15.867687 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 09 12:22:15 crc kubenswrapper[4970]: I1209 12:22:15.871107 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83v8kt5"] Dec 09 12:22:15 crc kubenswrapper[4970]: I1209 12:22:15.988414 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/264cbeb9-2733-4026-9d3a-f348b5af2ba2-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83v8kt5\" (UID: \"264cbeb9-2733-4026-9d3a-f348b5af2ba2\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83v8kt5" Dec 09 12:22:15 crc kubenswrapper[4970]: I1209 12:22:15.988812 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfw8b\" (UniqueName: \"kubernetes.io/projected/264cbeb9-2733-4026-9d3a-f348b5af2ba2-kube-api-access-nfw8b\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83v8kt5\" (UID: \"264cbeb9-2733-4026-9d3a-f348b5af2ba2\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83v8kt5" Dec 09 12:22:15 crc kubenswrapper[4970]: I1209 12:22:15.989394 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/264cbeb9-2733-4026-9d3a-f348b5af2ba2-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83v8kt5\" (UID: \"264cbeb9-2733-4026-9d3a-f348b5af2ba2\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83v8kt5" Dec 09 12:22:16 crc kubenswrapper[4970]: I1209 12:22:16.011010 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:22:16 crc kubenswrapper[4970]: I1209 12:22:16.011065 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:22:16 crc kubenswrapper[4970]: I1209 12:22:16.091598 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfw8b\" (UniqueName: \"kubernetes.io/projected/264cbeb9-2733-4026-9d3a-f348b5af2ba2-kube-api-access-nfw8b\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83v8kt5\" (UID: \"264cbeb9-2733-4026-9d3a-f348b5af2ba2\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83v8kt5" Dec 09 12:22:16 crc kubenswrapper[4970]: I1209 12:22:16.091646 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/264cbeb9-2733-4026-9d3a-f348b5af2ba2-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83v8kt5\" (UID: \"264cbeb9-2733-4026-9d3a-f348b5af2ba2\") " 
pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83v8kt5" Dec 09 12:22:16 crc kubenswrapper[4970]: I1209 12:22:16.091728 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/264cbeb9-2733-4026-9d3a-f348b5af2ba2-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83v8kt5\" (UID: \"264cbeb9-2733-4026-9d3a-f348b5af2ba2\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83v8kt5" Dec 09 12:22:16 crc kubenswrapper[4970]: I1209 12:22:16.092188 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/264cbeb9-2733-4026-9d3a-f348b5af2ba2-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83v8kt5\" (UID: \"264cbeb9-2733-4026-9d3a-f348b5af2ba2\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83v8kt5" Dec 09 12:22:16 crc kubenswrapper[4970]: I1209 12:22:16.092311 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/264cbeb9-2733-4026-9d3a-f348b5af2ba2-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83v8kt5\" (UID: \"264cbeb9-2733-4026-9d3a-f348b5af2ba2\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83v8kt5" Dec 09 12:22:16 crc kubenswrapper[4970]: I1209 12:22:16.113968 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfw8b\" (UniqueName: \"kubernetes.io/projected/264cbeb9-2733-4026-9d3a-f348b5af2ba2-kube-api-access-nfw8b\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83v8kt5\" (UID: \"264cbeb9-2733-4026-9d3a-f348b5af2ba2\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83v8kt5" Dec 09 12:22:16 crc kubenswrapper[4970]: I1209 12:22:16.183665 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83v8kt5" Dec 09 12:22:16 crc kubenswrapper[4970]: I1209 12:22:16.718876 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83v8kt5"] Dec 09 12:22:17 crc kubenswrapper[4970]: I1209 12:22:17.478100 4970 generic.go:334] "Generic (PLEG): container finished" podID="264cbeb9-2733-4026-9d3a-f348b5af2ba2" containerID="4831d0bbbdbb957abb0b7814612417dbbf05ab9ff5122d455b99ef0f65461d14" exitCode=0 Dec 09 12:22:17 crc kubenswrapper[4970]: I1209 12:22:17.478147 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83v8kt5" event={"ID":"264cbeb9-2733-4026-9d3a-f348b5af2ba2","Type":"ContainerDied","Data":"4831d0bbbdbb957abb0b7814612417dbbf05ab9ff5122d455b99ef0f65461d14"} Dec 09 12:22:17 crc kubenswrapper[4970]: I1209 12:22:17.478196 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83v8kt5" event={"ID":"264cbeb9-2733-4026-9d3a-f348b5af2ba2","Type":"ContainerStarted","Data":"812cd2ca6ff7f568813748991c9285e638cb0a2b9f140e85b4adeabeaab61572"} Dec 09 12:22:19 crc kubenswrapper[4970]: I1209 12:22:19.494843 4970 generic.go:334] "Generic (PLEG): container finished" podID="264cbeb9-2733-4026-9d3a-f348b5af2ba2" containerID="8fa26532ffb706aed6f52330f4251f7f7d7c9b3c133c42c767ee97645e58d971" exitCode=0 Dec 09 12:22:19 crc kubenswrapper[4970]: I1209 12:22:19.494911 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83v8kt5" event={"ID":"264cbeb9-2733-4026-9d3a-f348b5af2ba2","Type":"ContainerDied","Data":"8fa26532ffb706aed6f52330f4251f7f7d7c9b3c133c42c767ee97645e58d971"} Dec 09 12:22:20 crc kubenswrapper[4970]: I1209 12:22:20.505019 4970 generic.go:334] "Generic (PLEG): container finished" podID="264cbeb9-2733-4026-9d3a-f348b5af2ba2" containerID="4bed3b9626aadddb4aff950af995bc647053df662cf46e3929a5106ad410b3c6" exitCode=0 Dec 09 12:22:20 crc kubenswrapper[4970]: I1209 12:22:20.505994 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83v8kt5" event={"ID":"264cbeb9-2733-4026-9d3a-f348b5af2ba2","Type":"ContainerDied","Data":"4bed3b9626aadddb4aff950af995bc647053df662cf46e3929a5106ad410b3c6"} Dec 09 12:22:21 crc kubenswrapper[4970]: I1209 12:22:21.808342 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83v8kt5" Dec 09 12:22:21 crc kubenswrapper[4970]: I1209 12:22:21.995168 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfw8b\" (UniqueName: \"kubernetes.io/projected/264cbeb9-2733-4026-9d3a-f348b5af2ba2-kube-api-access-nfw8b\") pod \"264cbeb9-2733-4026-9d3a-f348b5af2ba2\" (UID: \"264cbeb9-2733-4026-9d3a-f348b5af2ba2\") " Dec 09 12:22:21 crc kubenswrapper[4970]: I1209 12:22:21.995530 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/264cbeb9-2733-4026-9d3a-f348b5af2ba2-bundle\") pod \"264cbeb9-2733-4026-9d3a-f348b5af2ba2\" (UID: \"264cbeb9-2733-4026-9d3a-f348b5af2ba2\") " Dec 09 12:22:21 crc kubenswrapper[4970]: I1209 12:22:21.995621 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/264cbeb9-2733-4026-9d3a-f348b5af2ba2-util\") pod \"264cbeb9-2733-4026-9d3a-f348b5af2ba2\" (UID: \"264cbeb9-2733-4026-9d3a-f348b5af2ba2\") " Dec 09 12:22:21 crc kubenswrapper[4970]: I1209 12:22:21.996443 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/264cbeb9-2733-4026-9d3a-f348b5af2ba2-bundle" (OuterVolumeSpecName: "bundle") pod "264cbeb9-2733-4026-9d3a-f348b5af2ba2" (UID: "264cbeb9-2733-4026-9d3a-f348b5af2ba2"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:22:22 crc kubenswrapper[4970]: I1209 12:22:22.004432 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/264cbeb9-2733-4026-9d3a-f348b5af2ba2-kube-api-access-nfw8b" (OuterVolumeSpecName: "kube-api-access-nfw8b") pod "264cbeb9-2733-4026-9d3a-f348b5af2ba2" (UID: "264cbeb9-2733-4026-9d3a-f348b5af2ba2"). InnerVolumeSpecName "kube-api-access-nfw8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:22:22 crc kubenswrapper[4970]: I1209 12:22:22.006886 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfw8b\" (UniqueName: \"kubernetes.io/projected/264cbeb9-2733-4026-9d3a-f348b5af2ba2-kube-api-access-nfw8b\") on node \"crc\" DevicePath \"\"" Dec 09 12:22:22 crc kubenswrapper[4970]: I1209 12:22:22.007194 4970 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/264cbeb9-2733-4026-9d3a-f348b5af2ba2-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:22:22 crc kubenswrapper[4970]: I1209 12:22:22.013758 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/264cbeb9-2733-4026-9d3a-f348b5af2ba2-util" (OuterVolumeSpecName: "util") pod "264cbeb9-2733-4026-9d3a-f348b5af2ba2" (UID: "264cbeb9-2733-4026-9d3a-f348b5af2ba2"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:22:22 crc kubenswrapper[4970]: I1209 12:22:22.108687 4970 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/264cbeb9-2733-4026-9d3a-f348b5af2ba2-util\") on node \"crc\" DevicePath \"\"" Dec 09 12:22:22 crc kubenswrapper[4970]: I1209 12:22:22.527168 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83v8kt5" event={"ID":"264cbeb9-2733-4026-9d3a-f348b5af2ba2","Type":"ContainerDied","Data":"812cd2ca6ff7f568813748991c9285e638cb0a2b9f140e85b4adeabeaab61572"} Dec 09 12:22:22 crc kubenswrapper[4970]: I1209 12:22:22.527220 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="812cd2ca6ff7f568813748991c9285e638cb0a2b9f140e85b4adeabeaab61572" Dec 09 12:22:22 crc kubenswrapper[4970]: I1209 12:22:22.527321 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83v8kt5" Dec 09 12:22:29 crc kubenswrapper[4970]: I1209 12:22:29.987433 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5cc866bf98-fwgtr"] Dec 09 12:22:29 crc kubenswrapper[4970]: E1209 12:22:29.988371 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="264cbeb9-2733-4026-9d3a-f348b5af2ba2" containerName="util" Dec 09 12:22:29 crc kubenswrapper[4970]: I1209 12:22:29.988387 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="264cbeb9-2733-4026-9d3a-f348b5af2ba2" containerName="util" Dec 09 12:22:29 crc kubenswrapper[4970]: E1209 12:22:29.988407 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="264cbeb9-2733-4026-9d3a-f348b5af2ba2" containerName="extract" Dec 09 12:22:29 crc kubenswrapper[4970]: I1209 12:22:29.988416 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="264cbeb9-2733-4026-9d3a-f348b5af2ba2" containerName="extract" Dec 09 12:22:29 crc kubenswrapper[4970]: E1209 12:22:29.988441 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="264cbeb9-2733-4026-9d3a-f348b5af2ba2" containerName="pull" Dec 09 12:22:29 crc kubenswrapper[4970]: I1209 12:22:29.988449 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="264cbeb9-2733-4026-9d3a-f348b5af2ba2" containerName="pull" Dec 09 12:22:29 crc kubenswrapper[4970]: I1209 12:22:29.988636 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="264cbeb9-2733-4026-9d3a-f348b5af2ba2" containerName="extract" Dec 09 12:22:29 crc kubenswrapper[4970]: I1209 12:22:29.989321 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5cc866bf98-fwgtr" Dec 09 12:22:29 crc kubenswrapper[4970]: I1209 12:22:29.992709 4970 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-xpwk7" Dec 09 12:22:29 crc kubenswrapper[4970]: I1209 12:22:29.992713 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Dec 09 12:22:29 crc kubenswrapper[4970]: I1209 12:22:29.993089 4970 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Dec 09 12:22:29 crc kubenswrapper[4970]: I1209 12:22:29.993239 4970 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 09 12:22:29 crc kubenswrapper[4970]: I1209 12:22:29.993603 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 09 12:22:30 crc kubenswrapper[4970]: I1209 12:22:30.005905 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5cc866bf98-fwgtr"] Dec 09 12:22:30 crc kubenswrapper[4970]: I1209 12:22:30.061237 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2phzm\" (UniqueName: \"kubernetes.io/projected/2f6777ac-ebe1-4757-9d6f-4bdced219b19-kube-api-access-2phzm\") pod \"metallb-operator-controller-manager-5cc866bf98-fwgtr\" (UID: \"2f6777ac-ebe1-4757-9d6f-4bdced219b19\") " pod="metallb-system/metallb-operator-controller-manager-5cc866bf98-fwgtr" Dec 09 12:22:30 crc kubenswrapper[4970]: I1209 12:22:30.061332 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2f6777ac-ebe1-4757-9d6f-4bdced219b19-apiservice-cert\") pod \"metallb-operator-controller-manager-5cc866bf98-fwgtr\" (UID: \"2f6777ac-ebe1-4757-9d6f-4bdced219b19\") " pod="metallb-system/metallb-operator-controller-manager-5cc866bf98-fwgtr" Dec 09 12:22:30 crc kubenswrapper[4970]: I1209 12:22:30.061438 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2f6777ac-ebe1-4757-9d6f-4bdced219b19-webhook-cert\") pod \"metallb-operator-controller-manager-5cc866bf98-fwgtr\" (UID: \"2f6777ac-ebe1-4757-9d6f-4bdced219b19\") " pod="metallb-system/metallb-operator-controller-manager-5cc866bf98-fwgtr" Dec 09 12:22:30 crc kubenswrapper[4970]: I1209 12:22:30.162419 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2f6777ac-ebe1-4757-9d6f-4bdced219b19-webhook-cert\") pod \"metallb-operator-controller-manager-5cc866bf98-fwgtr\" (UID: \"2f6777ac-ebe1-4757-9d6f-4bdced219b19\") " pod="metallb-system/metallb-operator-controller-manager-5cc866bf98-fwgtr" Dec 09 12:22:30 crc kubenswrapper[4970]: I1209 12:22:30.162497 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2phzm\" (UniqueName: \"kubernetes.io/projected/2f6777ac-ebe1-4757-9d6f-4bdced219b19-kube-api-access-2phzm\") pod \"metallb-operator-controller-manager-5cc866bf98-fwgtr\" (UID: \"2f6777ac-ebe1-4757-9d6f-4bdced219b19\") " pod="metallb-system/metallb-operator-controller-manager-5cc866bf98-fwgtr" Dec 09 12:22:30 crc kubenswrapper[4970]: I1209 12:22:30.162523 4970 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2f6777ac-ebe1-4757-9d6f-4bdced219b19-apiservice-cert\") pod \"metallb-operator-controller-manager-5cc866bf98-fwgtr\" (UID: \"2f6777ac-ebe1-4757-9d6f-4bdced219b19\") " pod="metallb-system/metallb-operator-controller-manager-5cc866bf98-fwgtr" Dec 09 12:22:30 crc kubenswrapper[4970]: I1209 12:22:30.170278 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2f6777ac-ebe1-4757-9d6f-4bdced219b19-webhook-cert\") pod \"metallb-operator-controller-manager-5cc866bf98-fwgtr\" (UID: \"2f6777ac-ebe1-4757-9d6f-4bdced219b19\") " pod="metallb-system/metallb-operator-controller-manager-5cc866bf98-fwgtr" Dec 09 12:22:30 crc kubenswrapper[4970]: I1209 12:22:30.170736 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2f6777ac-ebe1-4757-9d6f-4bdced219b19-apiservice-cert\") pod \"metallb-operator-controller-manager-5cc866bf98-fwgtr\" (UID: \"2f6777ac-ebe1-4757-9d6f-4bdced219b19\") " pod="metallb-system/metallb-operator-controller-manager-5cc866bf98-fwgtr" Dec 09 12:22:30 crc kubenswrapper[4970]: I1209 12:22:30.182839 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2phzm\" (UniqueName: \"kubernetes.io/projected/2f6777ac-ebe1-4757-9d6f-4bdced219b19-kube-api-access-2phzm\") pod \"metallb-operator-controller-manager-5cc866bf98-fwgtr\" (UID: \"2f6777ac-ebe1-4757-9d6f-4bdced219b19\") " pod="metallb-system/metallb-operator-controller-manager-5cc866bf98-fwgtr" Dec 09 12:22:30 crc kubenswrapper[4970]: I1209 12:22:30.252963 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7845f8588c-zdsl7"] Dec 09 12:22:30 crc kubenswrapper[4970]: I1209 12:22:30.254038 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7845f8588c-zdsl7" Dec 09 12:22:30 crc kubenswrapper[4970]: I1209 12:22:30.256718 4970 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 09 12:22:30 crc kubenswrapper[4970]: I1209 12:22:30.257083 4970 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-wdmvg" Dec 09 12:22:30 crc kubenswrapper[4970]: I1209 12:22:30.272574 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7845f8588c-zdsl7"] Dec 09 12:22:30 crc kubenswrapper[4970]: I1209 12:22:30.283659 4970 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Dec 09 12:22:30 crc kubenswrapper[4970]: I1209 12:22:30.307695 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5cc866bf98-fwgtr" Dec 09 12:22:30 crc kubenswrapper[4970]: I1209 12:22:30.368151 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wssh2\" (UniqueName: \"kubernetes.io/projected/5e09bbff-e2ce-423d-98e0-ee5305d47cce-kube-api-access-wssh2\") pod \"metallb-operator-webhook-server-7845f8588c-zdsl7\" (UID: \"5e09bbff-e2ce-423d-98e0-ee5305d47cce\") " pod="metallb-system/metallb-operator-webhook-server-7845f8588c-zdsl7" Dec 09 12:22:30 crc kubenswrapper[4970]: I1209 12:22:30.368294 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5e09bbff-e2ce-423d-98e0-ee5305d47cce-apiservice-cert\") pod \"metallb-operator-webhook-server-7845f8588c-zdsl7\" (UID: \"5e09bbff-e2ce-423d-98e0-ee5305d47cce\") " pod="metallb-system/metallb-operator-webhook-server-7845f8588c-zdsl7" Dec 09 12:22:30 crc kubenswrapper[4970]: I1209 12:22:30.368323 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5e09bbff-e2ce-423d-98e0-ee5305d47cce-webhook-cert\") pod \"metallb-operator-webhook-server-7845f8588c-zdsl7\" (UID: \"5e09bbff-e2ce-423d-98e0-ee5305d47cce\") " pod="metallb-system/metallb-operator-webhook-server-7845f8588c-zdsl7" Dec 09 12:22:30 crc kubenswrapper[4970]: I1209 12:22:30.469657 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5e09bbff-e2ce-423d-98e0-ee5305d47cce-apiservice-cert\") pod \"metallb-operator-webhook-server-7845f8588c-zdsl7\" (UID: \"5e09bbff-e2ce-423d-98e0-ee5305d47cce\") " pod="metallb-system/metallb-operator-webhook-server-7845f8588c-zdsl7" Dec 09 12:22:30 crc kubenswrapper[4970]: I1209 12:22:30.469713 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5e09bbff-e2ce-423d-98e0-ee5305d47cce-webhook-cert\") pod \"metallb-operator-webhook-server-7845f8588c-zdsl7\" (UID: \"5e09bbff-e2ce-423d-98e0-ee5305d47cce\") " pod="metallb-system/metallb-operator-webhook-server-7845f8588c-zdsl7" Dec 09 12:22:30 crc kubenswrapper[4970]: I1209 12:22:30.469790 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wssh2\" (UniqueName: \"kubernetes.io/projected/5e09bbff-e2ce-423d-98e0-ee5305d47cce-kube-api-access-wssh2\") pod \"metallb-operator-webhook-server-7845f8588c-zdsl7\" (UID: \"5e09bbff-e2ce-423d-98e0-ee5305d47cce\") " pod="metallb-system/metallb-operator-webhook-server-7845f8588c-zdsl7" Dec 09 12:22:30 crc kubenswrapper[4970]: I1209 12:22:30.478122 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5e09bbff-e2ce-423d-98e0-ee5305d47cce-webhook-cert\") pod \"metallb-operator-webhook-server-7845f8588c-zdsl7\" (UID: \"5e09bbff-e2ce-423d-98e0-ee5305d47cce\") " pod="metallb-system/metallb-operator-webhook-server-7845f8588c-zdsl7" Dec 09 12:22:30 crc kubenswrapper[4970]: I1209 12:22:30.487843 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5e09bbff-e2ce-423d-98e0-ee5305d47cce-apiservice-cert\") pod \"metallb-operator-webhook-server-7845f8588c-zdsl7\" (UID: \"5e09bbff-e2ce-423d-98e0-ee5305d47cce\") " 
pod="metallb-system/metallb-operator-webhook-server-7845f8588c-zdsl7" Dec 09 12:22:30 crc kubenswrapper[4970]: I1209 12:22:30.501850 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wssh2\" (UniqueName: \"kubernetes.io/projected/5e09bbff-e2ce-423d-98e0-ee5305d47cce-kube-api-access-wssh2\") pod \"metallb-operator-webhook-server-7845f8588c-zdsl7\" (UID: \"5e09bbff-e2ce-423d-98e0-ee5305d47cce\") " pod="metallb-system/metallb-operator-webhook-server-7845f8588c-zdsl7" Dec 09 12:22:30 crc kubenswrapper[4970]: I1209 12:22:30.570731 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7845f8588c-zdsl7" Dec 09 12:22:30 crc kubenswrapper[4970]: I1209 12:22:30.926869 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5cc866bf98-fwgtr"] Dec 09 12:22:31 crc kubenswrapper[4970]: I1209 12:22:31.070524 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7845f8588c-zdsl7"] Dec 09 12:22:31 crc kubenswrapper[4970]: W1209 12:22:31.071071 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e09bbff_e2ce_423d_98e0_ee5305d47cce.slice/crio-4479b7fd2a7c764944724c84f8e16439f1a77301d86417e886babf806fbf1e5d WatchSource:0}: Error finding container 4479b7fd2a7c764944724c84f8e16439f1a77301d86417e886babf806fbf1e5d: Status 404 returned error can't find the container with id 4479b7fd2a7c764944724c84f8e16439f1a77301d86417e886babf806fbf1e5d Dec 09 12:22:31 crc kubenswrapper[4970]: I1209 12:22:31.588615 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5cc866bf98-fwgtr" event={"ID":"2f6777ac-ebe1-4757-9d6f-4bdced219b19","Type":"ContainerStarted","Data":"6ac81fdf079e3c10965ea0cae035a6b6a0642dbb07384db02fe8957d91ddef97"} Dec 09 12:22:31 crc kubenswrapper[4970]: I1209 12:22:31.589612 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7845f8588c-zdsl7" event={"ID":"5e09bbff-e2ce-423d-98e0-ee5305d47cce","Type":"ContainerStarted","Data":"4479b7fd2a7c764944724c84f8e16439f1a77301d86417e886babf806fbf1e5d"} Dec 09 12:22:37 crc kubenswrapper[4970]: I1209 12:22:37.641840 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7845f8588c-zdsl7" event={"ID":"5e09bbff-e2ce-423d-98e0-ee5305d47cce","Type":"ContainerStarted","Data":"54031587e12204343550ddadb81a42d5b00502ad9d67663d540ce325fbb59579"} Dec 09 12:22:37 crc kubenswrapper[4970]: I1209 12:22:37.643689 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7845f8588c-zdsl7" Dec 09 12:22:37 crc kubenswrapper[4970]: I1209 12:22:37.644998 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5cc866bf98-fwgtr" event={"ID":"2f6777ac-ebe1-4757-9d6f-4bdced219b19","Type":"ContainerStarted","Data":"b78fcc4ec1984bc6d59c2b75b915256a4a6a432973e7de9481d489db6637e0f8"} Dec 09 12:22:37 crc kubenswrapper[4970]: I1209 12:22:37.645149 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5cc866bf98-fwgtr" Dec 09 12:22:37 crc kubenswrapper[4970]: I1209 12:22:37.665096 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/metallb-operator-webhook-server-7845f8588c-zdsl7" podStartSLOduration=1.986441513 podStartE2EDuration="7.665069544s" podCreationTimestamp="2025-12-09 12:22:30 +0000 UTC" firstStartedPulling="2025-12-09 12:22:31.074363144 +0000 UTC m=+963.634844195" lastFinishedPulling="2025-12-09 12:22:36.752991175 +0000 UTC m=+969.313472226" observedRunningTime="2025-12-09 12:22:37.659326609 +0000 UTC m=+970.219807680" watchObservedRunningTime="2025-12-09 12:22:37.665069544 +0000 UTC m=+970.225550615" Dec 09 12:22:37 crc kubenswrapper[4970]: I1209 12:22:37.686591 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5cc866bf98-fwgtr" podStartSLOduration=2.88894327 podStartE2EDuration="8.686569654s" podCreationTimestamp="2025-12-09 12:22:29 +0000 UTC" firstStartedPulling="2025-12-09 12:22:30.932329889 +0000 UTC m=+963.492810950" lastFinishedPulling="2025-12-09 12:22:36.729956283 +0000 UTC m=+969.290437334" observedRunningTime="2025-12-09 12:22:37.678005233 +0000 UTC m=+970.238486294" watchObservedRunningTime="2025-12-09 12:22:37.686569654 +0000 UTC m=+970.247050715" Dec 09 12:22:46 crc kubenswrapper[4970]: I1209 12:22:46.010563 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:22:46 crc kubenswrapper[4970]: I1209 12:22:46.011118 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:22:46 crc kubenswrapper[4970]: I1209 12:22:46.011164 4970 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" Dec 09 12:22:46 crc kubenswrapper[4970]: I1209 12:22:46.011886 4970 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cba24c2dd5398483042c9e88f615ca653704b38e348947e32056cf594c3cf93e"} pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 12:22:46 crc kubenswrapper[4970]: I1209 12:22:46.011944 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" containerID="cri-o://cba24c2dd5398483042c9e88f615ca653704b38e348947e32056cf594c3cf93e" gracePeriod=600 Dec 09 12:22:46 crc kubenswrapper[4970]: I1209 12:22:46.709669 4970 generic.go:334] "Generic (PLEG): container finished" podID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerID="cba24c2dd5398483042c9e88f615ca653704b38e348947e32056cf594c3cf93e" exitCode=0 Dec 09 12:22:46 crc kubenswrapper[4970]: I1209 12:22:46.710002 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerDied","Data":"cba24c2dd5398483042c9e88f615ca653704b38e348947e32056cf594c3cf93e"} Dec 09 12:22:46 crc 
kubenswrapper[4970]: I1209 12:22:46.710028 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerStarted","Data":"956b314977002e8d06761bbcdccd0bb4775a0aa2c665b4316e98475f27106ef3"} Dec 09 12:22:46 crc kubenswrapper[4970]: I1209 12:22:46.710044 4970 scope.go:117] "RemoveContainer" containerID="8a43c19be68e1c39db2fa2acbdc1174785af9071889cf2a0715c26d8f86ac8be" Dec 09 12:22:50 crc kubenswrapper[4970]: I1209 12:22:50.577002 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7845f8588c-zdsl7" Dec 09 12:23:10 crc kubenswrapper[4970]: I1209 12:23:10.310883 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5cc866bf98-fwgtr" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.100462 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-m4p9q"] Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.101855 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-m4p9q" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.104671 4970 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-rw572" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.105095 4970 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.123448 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-m4p9q"] Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.135036 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-l4f9b"] Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.146185 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9321dbf0-1bca-4851-a191-4b1edaa50a77-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-m4p9q\" (UID: \"9321dbf0-1bca-4851-a191-4b1edaa50a77\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-m4p9q" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.146354 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pcb4\" (UniqueName: \"kubernetes.io/projected/9321dbf0-1bca-4851-a191-4b1edaa50a77-kube-api-access-6pcb4\") pod \"frr-k8s-webhook-server-7fcb986d4-m4p9q\" (UID: \"9321dbf0-1bca-4851-a191-4b1edaa50a77\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-m4p9q" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.158227 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-l4f9b" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.162628 4970 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.163014 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.222689 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-jhtq7"] Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.223975 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-jhtq7" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.227135 4970 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.227347 4970 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-ldskr" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.227576 4970 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.227799 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.229806 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-jz7r5"] Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.231692 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-jz7r5" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.233795 4970 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.237142 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-jz7r5"] Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.247348 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9321dbf0-1bca-4851-a191-4b1edaa50a77-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-m4p9q\" (UID: \"9321dbf0-1bca-4851-a191-4b1edaa50a77\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-m4p9q" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.247409 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/34bec0ed-386a-4792-b315-133688468971-frr-startup\") pod \"frr-k8s-l4f9b\" (UID: \"34bec0ed-386a-4792-b315-133688468971\") " pod="metallb-system/frr-k8s-l4f9b" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.247448 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/34bec0ed-386a-4792-b315-133688468971-metrics-certs\") pod \"frr-k8s-l4f9b\" (UID: \"34bec0ed-386a-4792-b315-133688468971\") " pod="metallb-system/frr-k8s-l4f9b" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.247502 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pcb4\" (UniqueName: \"kubernetes.io/projected/9321dbf0-1bca-4851-a191-4b1edaa50a77-kube-api-access-6pcb4\") pod \"frr-k8s-webhook-server-7fcb986d4-m4p9q\" 
(UID: \"9321dbf0-1bca-4851-a191-4b1edaa50a77\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-m4p9q" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.247522 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/34bec0ed-386a-4792-b315-133688468971-frr-sockets\") pod \"frr-k8s-l4f9b\" (UID: \"34bec0ed-386a-4792-b315-133688468971\") " pod="metallb-system/frr-k8s-l4f9b" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.247536 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/34bec0ed-386a-4792-b315-133688468971-frr-conf\") pod \"frr-k8s-l4f9b\" (UID: \"34bec0ed-386a-4792-b315-133688468971\") " pod="metallb-system/frr-k8s-l4f9b" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.247560 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7c8c\" (UniqueName: \"kubernetes.io/projected/34bec0ed-386a-4792-b315-133688468971-kube-api-access-j7c8c\") pod \"frr-k8s-l4f9b\" (UID: \"34bec0ed-386a-4792-b315-133688468971\") " pod="metallb-system/frr-k8s-l4f9b" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.247577 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/34bec0ed-386a-4792-b315-133688468971-reloader\") pod \"frr-k8s-l4f9b\" (UID: \"34bec0ed-386a-4792-b315-133688468971\") " pod="metallb-system/frr-k8s-l4f9b" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.247593 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/34bec0ed-386a-4792-b315-133688468971-metrics\") pod \"frr-k8s-l4f9b\" (UID: \"34bec0ed-386a-4792-b315-133688468971\") " pod="metallb-system/frr-k8s-l4f9b" Dec 09 12:23:11 crc kubenswrapper[4970]: E1209 12:23:11.247706 4970 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Dec 09 12:23:11 crc kubenswrapper[4970]: E1209 12:23:11.247746 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9321dbf0-1bca-4851-a191-4b1edaa50a77-cert podName:9321dbf0-1bca-4851-a191-4b1edaa50a77 nodeName:}" failed. No retries permitted until 2025-12-09 12:23:11.747731279 +0000 UTC m=+1004.308212330 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9321dbf0-1bca-4851-a191-4b1edaa50a77-cert") pod "frr-k8s-webhook-server-7fcb986d4-m4p9q" (UID: "9321dbf0-1bca-4851-a191-4b1edaa50a77") : secret "frr-k8s-webhook-server-cert" not found Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.277608 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pcb4\" (UniqueName: \"kubernetes.io/projected/9321dbf0-1bca-4851-a191-4b1edaa50a77-kube-api-access-6pcb4\") pod \"frr-k8s-webhook-server-7fcb986d4-m4p9q\" (UID: \"9321dbf0-1bca-4851-a191-4b1edaa50a77\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-m4p9q" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.348954 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/34bec0ed-386a-4792-b315-133688468971-metrics-certs\") pod \"frr-k8s-l4f9b\" (UID: \"34bec0ed-386a-4792-b315-133688468971\") " pod="metallb-system/frr-k8s-l4f9b" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.349058 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f892cb08-861f-422e-a771-d6f55d6d5756-cert\") pod \"controller-f8648f98b-jz7r5\" (UID: \"f892cb08-861f-422e-a771-d6f55d6d5756\") " pod="metallb-system/controller-f8648f98b-jz7r5" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.349094 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/70a13e7f-528a-42d6-aca8-c2b7ef94a8f7-metallb-excludel2\") pod \"speaker-jhtq7\" (UID: \"70a13e7f-528a-42d6-aca8-c2b7ef94a8f7\") " pod="metallb-system/speaker-jhtq7" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.349120 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/34bec0ed-386a-4792-b315-133688468971-frr-sockets\") pod \"frr-k8s-l4f9b\" (UID: \"34bec0ed-386a-4792-b315-133688468971\") " pod="metallb-system/frr-k8s-l4f9b" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.349143 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/34bec0ed-386a-4792-b315-133688468971-frr-conf\") pod \"frr-k8s-l4f9b\" (UID: \"34bec0ed-386a-4792-b315-133688468971\") " pod="metallb-system/frr-k8s-l4f9b" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.349171 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7c8c\" (UniqueName: \"kubernetes.io/projected/34bec0ed-386a-4792-b315-133688468971-kube-api-access-j7c8c\") pod \"frr-k8s-l4f9b\" (UID: \"34bec0ed-386a-4792-b315-133688468971\") " pod="metallb-system/frr-k8s-l4f9b" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.349194 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/34bec0ed-386a-4792-b315-133688468971-reloader\") pod \"frr-k8s-l4f9b\" (UID: \"34bec0ed-386a-4792-b315-133688468971\") " pod="metallb-system/frr-k8s-l4f9b" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.349216 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/34bec0ed-386a-4792-b315-133688468971-metrics\") pod \"frr-k8s-l4f9b\" (UID: 
\"34bec0ed-386a-4792-b315-133688468971\") " pod="metallb-system/frr-k8s-l4f9b" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.349277 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dlgl\" (UniqueName: \"kubernetes.io/projected/70a13e7f-528a-42d6-aca8-c2b7ef94a8f7-kube-api-access-9dlgl\") pod \"speaker-jhtq7\" (UID: \"70a13e7f-528a-42d6-aca8-c2b7ef94a8f7\") " pod="metallb-system/speaker-jhtq7" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.349299 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/70a13e7f-528a-42d6-aca8-c2b7ef94a8f7-memberlist\") pod \"speaker-jhtq7\" (UID: \"70a13e7f-528a-42d6-aca8-c2b7ef94a8f7\") " pod="metallb-system/speaker-jhtq7" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.349368 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/70a13e7f-528a-42d6-aca8-c2b7ef94a8f7-metrics-certs\") pod \"speaker-jhtq7\" (UID: \"70a13e7f-528a-42d6-aca8-c2b7ef94a8f7\") " pod="metallb-system/speaker-jhtq7" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.349393 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/34bec0ed-386a-4792-b315-133688468971-frr-startup\") pod \"frr-k8s-l4f9b\" (UID: \"34bec0ed-386a-4792-b315-133688468971\") " pod="metallb-system/frr-k8s-l4f9b" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.349430 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvcrx\" (UniqueName: \"kubernetes.io/projected/f892cb08-861f-422e-a771-d6f55d6d5756-kube-api-access-cvcrx\") pod \"controller-f8648f98b-jz7r5\" (UID: \"f892cb08-861f-422e-a771-d6f55d6d5756\") " pod="metallb-system/controller-f8648f98b-jz7r5" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.349468 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f892cb08-861f-422e-a771-d6f55d6d5756-metrics-certs\") pod \"controller-f8648f98b-jz7r5\" (UID: \"f892cb08-861f-422e-a771-d6f55d6d5756\") " pod="metallb-system/controller-f8648f98b-jz7r5" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.350455 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/34bec0ed-386a-4792-b315-133688468971-reloader\") pod \"frr-k8s-l4f9b\" (UID: \"34bec0ed-386a-4792-b315-133688468971\") " pod="metallb-system/frr-k8s-l4f9b" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.350766 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/34bec0ed-386a-4792-b315-133688468971-frr-conf\") pod \"frr-k8s-l4f9b\" (UID: \"34bec0ed-386a-4792-b315-133688468971\") " pod="metallb-system/frr-k8s-l4f9b" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.352067 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/34bec0ed-386a-4792-b315-133688468971-frr-startup\") pod \"frr-k8s-l4f9b\" (UID: \"34bec0ed-386a-4792-b315-133688468971\") " pod="metallb-system/frr-k8s-l4f9b" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.353623 4970 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/34bec0ed-386a-4792-b315-133688468971-metrics-certs\") pod \"frr-k8s-l4f9b\" (UID: \"34bec0ed-386a-4792-b315-133688468971\") " pod="metallb-system/frr-k8s-l4f9b" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.354382 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/34bec0ed-386a-4792-b315-133688468971-metrics\") pod \"frr-k8s-l4f9b\" (UID: \"34bec0ed-386a-4792-b315-133688468971\") " pod="metallb-system/frr-k8s-l4f9b" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.354491 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/34bec0ed-386a-4792-b315-133688468971-frr-sockets\") pod \"frr-k8s-l4f9b\" (UID: \"34bec0ed-386a-4792-b315-133688468971\") " pod="metallb-system/frr-k8s-l4f9b" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.367790 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7c8c\" (UniqueName: \"kubernetes.io/projected/34bec0ed-386a-4792-b315-133688468971-kube-api-access-j7c8c\") pod \"frr-k8s-l4f9b\" (UID: \"34bec0ed-386a-4792-b315-133688468971\") " pod="metallb-system/frr-k8s-l4f9b" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.450724 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dlgl\" (UniqueName: \"kubernetes.io/projected/70a13e7f-528a-42d6-aca8-c2b7ef94a8f7-kube-api-access-9dlgl\") pod \"speaker-jhtq7\" (UID: \"70a13e7f-528a-42d6-aca8-c2b7ef94a8f7\") " pod="metallb-system/speaker-jhtq7" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.451098 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/70a13e7f-528a-42d6-aca8-c2b7ef94a8f7-memberlist\") pod \"speaker-jhtq7\" (UID: \"70a13e7f-528a-42d6-aca8-c2b7ef94a8f7\") " pod="metallb-system/speaker-jhtq7" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.451312 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/70a13e7f-528a-42d6-aca8-c2b7ef94a8f7-metrics-certs\") pod \"speaker-jhtq7\" (UID: \"70a13e7f-528a-42d6-aca8-c2b7ef94a8f7\") " pod="metallb-system/speaker-jhtq7" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.451458 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvcrx\" (UniqueName: \"kubernetes.io/projected/f892cb08-861f-422e-a771-d6f55d6d5756-kube-api-access-cvcrx\") pod \"controller-f8648f98b-jz7r5\" (UID: \"f892cb08-861f-422e-a771-d6f55d6d5756\") " pod="metallb-system/controller-f8648f98b-jz7r5" Dec 09 12:23:11 crc kubenswrapper[4970]: E1209 12:23:11.451332 4970 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 09 12:23:11 crc kubenswrapper[4970]: E1209 12:23:11.451774 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70a13e7f-528a-42d6-aca8-c2b7ef94a8f7-memberlist podName:70a13e7f-528a-42d6-aca8-c2b7ef94a8f7 nodeName:}" failed. No retries permitted until 2025-12-09 12:23:11.951751618 +0000 UTC m=+1004.512232669 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/70a13e7f-528a-42d6-aca8-c2b7ef94a8f7-memberlist") pod "speaker-jhtq7" (UID: "70a13e7f-528a-42d6-aca8-c2b7ef94a8f7") : secret "metallb-memberlist" not found Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.451610 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f892cb08-861f-422e-a771-d6f55d6d5756-metrics-certs\") pod \"controller-f8648f98b-jz7r5\" (UID: \"f892cb08-861f-422e-a771-d6f55d6d5756\") " pod="metallb-system/controller-f8648f98b-jz7r5" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.452060 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f892cb08-861f-422e-a771-d6f55d6d5756-cert\") pod \"controller-f8648f98b-jz7r5\" (UID: \"f892cb08-861f-422e-a771-d6f55d6d5756\") " pod="metallb-system/controller-f8648f98b-jz7r5" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.452208 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/70a13e7f-528a-42d6-aca8-c2b7ef94a8f7-metallb-excludel2\") pod \"speaker-jhtq7\" (UID: \"70a13e7f-528a-42d6-aca8-c2b7ef94a8f7\") " pod="metallb-system/speaker-jhtq7" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.453062 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/70a13e7f-528a-42d6-aca8-c2b7ef94a8f7-metallb-excludel2\") pod \"speaker-jhtq7\" (UID: \"70a13e7f-528a-42d6-aca8-c2b7ef94a8f7\") " pod="metallb-system/speaker-jhtq7" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.456410 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f892cb08-861f-422e-a771-d6f55d6d5756-cert\") pod \"controller-f8648f98b-jz7r5\" (UID: \"f892cb08-861f-422e-a771-d6f55d6d5756\") " pod="metallb-system/controller-f8648f98b-jz7r5" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.458314 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/70a13e7f-528a-42d6-aca8-c2b7ef94a8f7-metrics-certs\") pod \"speaker-jhtq7\" (UID: \"70a13e7f-528a-42d6-aca8-c2b7ef94a8f7\") " pod="metallb-system/speaker-jhtq7" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.461178 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f892cb08-861f-422e-a771-d6f55d6d5756-metrics-certs\") pod \"controller-f8648f98b-jz7r5\" (UID: \"f892cb08-861f-422e-a771-d6f55d6d5756\") " pod="metallb-system/controller-f8648f98b-jz7r5" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.479165 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dlgl\" (UniqueName: \"kubernetes.io/projected/70a13e7f-528a-42d6-aca8-c2b7ef94a8f7-kube-api-access-9dlgl\") pod \"speaker-jhtq7\" (UID: \"70a13e7f-528a-42d6-aca8-c2b7ef94a8f7\") " pod="metallb-system/speaker-jhtq7" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.483064 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvcrx\" (UniqueName: \"kubernetes.io/projected/f892cb08-861f-422e-a771-d6f55d6d5756-kube-api-access-cvcrx\") pod \"controller-f8648f98b-jz7r5\" (UID: \"f892cb08-861f-422e-a771-d6f55d6d5756\") " 
pod="metallb-system/controller-f8648f98b-jz7r5" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.491374 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-l4f9b" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.550715 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-jz7r5" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.756383 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9321dbf0-1bca-4851-a191-4b1edaa50a77-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-m4p9q\" (UID: \"9321dbf0-1bca-4851-a191-4b1edaa50a77\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-m4p9q" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.761355 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9321dbf0-1bca-4851-a191-4b1edaa50a77-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-m4p9q\" (UID: \"9321dbf0-1bca-4851-a191-4b1edaa50a77\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-m4p9q" Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.895236 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l4f9b" event={"ID":"34bec0ed-386a-4792-b315-133688468971","Type":"ContainerStarted","Data":"d2b4a47fe8e9bd590d9fb1b3fd81bcc20160d6ce1be35a69752b628caa335a15"} Dec 09 12:23:11 crc kubenswrapper[4970]: I1209 12:23:11.960524 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/70a13e7f-528a-42d6-aca8-c2b7ef94a8f7-memberlist\") pod \"speaker-jhtq7\" (UID: \"70a13e7f-528a-42d6-aca8-c2b7ef94a8f7\") " pod="metallb-system/speaker-jhtq7" Dec 09 12:23:11 crc kubenswrapper[4970]: E1209 12:23:11.960759 4970 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 09 12:23:11 crc kubenswrapper[4970]: E1209 12:23:11.960844 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70a13e7f-528a-42d6-aca8-c2b7ef94a8f7-memberlist podName:70a13e7f-528a-42d6-aca8-c2b7ef94a8f7 nodeName:}" failed. No retries permitted until 2025-12-09 12:23:12.960820695 +0000 UTC m=+1005.521301756 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/70a13e7f-528a-42d6-aca8-c2b7ef94a8f7-memberlist") pod "speaker-jhtq7" (UID: "70a13e7f-528a-42d6-aca8-c2b7ef94a8f7") : secret "metallb-memberlist" not found Dec 09 12:23:12 crc kubenswrapper[4970]: I1209 12:23:12.005644 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-jz7r5"] Dec 09 12:23:12 crc kubenswrapper[4970]: W1209 12:23:12.010461 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf892cb08_861f_422e_a771_d6f55d6d5756.slice/crio-e7af6b2e4d2ad6566d6d5e555ebc5eea20ccf96acc8c0b489ae073133ea1ade5 WatchSource:0}: Error finding container e7af6b2e4d2ad6566d6d5e555ebc5eea20ccf96acc8c0b489ae073133ea1ade5: Status 404 returned error can't find the container with id e7af6b2e4d2ad6566d6d5e555ebc5eea20ccf96acc8c0b489ae073133ea1ade5 Dec 09 12:23:12 crc kubenswrapper[4970]: I1209 12:23:12.024595 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-m4p9q" Dec 09 12:23:12 crc kubenswrapper[4970]: I1209 12:23:12.481176 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-m4p9q"] Dec 09 12:23:12 crc kubenswrapper[4970]: W1209 12:23:12.490457 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9321dbf0_1bca_4851_a191_4b1edaa50a77.slice/crio-9cb0061e6cc8b7fe89150e95901fc270aa7140ec48727e5a80098c3ba5bd04f0 WatchSource:0}: Error finding container 9cb0061e6cc8b7fe89150e95901fc270aa7140ec48727e5a80098c3ba5bd04f0: Status 404 returned error can't find the container with id 9cb0061e6cc8b7fe89150e95901fc270aa7140ec48727e5a80098c3ba5bd04f0 Dec 09 12:23:12 crc kubenswrapper[4970]: I1209 12:23:12.902998 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-m4p9q" event={"ID":"9321dbf0-1bca-4851-a191-4b1edaa50a77","Type":"ContainerStarted","Data":"9cb0061e6cc8b7fe89150e95901fc270aa7140ec48727e5a80098c3ba5bd04f0"} Dec 09 12:23:12 crc kubenswrapper[4970]: I1209 12:23:12.905103 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-jz7r5" event={"ID":"f892cb08-861f-422e-a771-d6f55d6d5756","Type":"ContainerStarted","Data":"ee791a0885ffd59b12a2427f9f5dfa83bc51205356657f10d2ea174399b0a9a5"} Dec 09 12:23:12 crc kubenswrapper[4970]: I1209 12:23:12.905147 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-jz7r5" event={"ID":"f892cb08-861f-422e-a771-d6f55d6d5756","Type":"ContainerStarted","Data":"4eba5184b7f53fcf2f2564d0d4a9eb7b0c4e07c59eabc129dc4d04a4d1f842fc"} Dec 09 12:23:12 crc kubenswrapper[4970]: I1209 12:23:12.905156 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-jz7r5" event={"ID":"f892cb08-861f-422e-a771-d6f55d6d5756","Type":"ContainerStarted","Data":"e7af6b2e4d2ad6566d6d5e555ebc5eea20ccf96acc8c0b489ae073133ea1ade5"} Dec 09 12:23:12 crc kubenswrapper[4970]: I1209 12:23:12.906185 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-jz7r5" Dec 09 12:23:12 crc kubenswrapper[4970]: I1209 12:23:12.939649 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-jz7r5" podStartSLOduration=1.939627066 podStartE2EDuration="1.939627066s" podCreationTimestamp="2025-12-09 12:23:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:23:12.936657226 +0000 UTC m=+1005.497138277" watchObservedRunningTime="2025-12-09 12:23:12.939627066 +0000 UTC m=+1005.500108117" Dec 09 12:23:12 crc kubenswrapper[4970]: I1209 12:23:12.979255 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/70a13e7f-528a-42d6-aca8-c2b7ef94a8f7-memberlist\") pod \"speaker-jhtq7\" (UID: \"70a13e7f-528a-42d6-aca8-c2b7ef94a8f7\") " pod="metallb-system/speaker-jhtq7" Dec 09 12:23:12 crc kubenswrapper[4970]: I1209 12:23:12.985781 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/70a13e7f-528a-42d6-aca8-c2b7ef94a8f7-memberlist\") pod \"speaker-jhtq7\" (UID: \"70a13e7f-528a-42d6-aca8-c2b7ef94a8f7\") " pod="metallb-system/speaker-jhtq7" Dec 09 
12:23:13 crc kubenswrapper[4970]: I1209 12:23:13.044963 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-jhtq7" Dec 09 12:23:13 crc kubenswrapper[4970]: W1209 12:23:13.071516 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70a13e7f_528a_42d6_aca8_c2b7ef94a8f7.slice/crio-bc56fbb376063c7a88895f492acb5cc54676684f3924f1c3b374ebd57c12fb92 WatchSource:0}: Error finding container bc56fbb376063c7a88895f492acb5cc54676684f3924f1c3b374ebd57c12fb92: Status 404 returned error can't find the container with id bc56fbb376063c7a88895f492acb5cc54676684f3924f1c3b374ebd57c12fb92 Dec 09 12:23:13 crc kubenswrapper[4970]: I1209 12:23:13.927986 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jhtq7" event={"ID":"70a13e7f-528a-42d6-aca8-c2b7ef94a8f7","Type":"ContainerStarted","Data":"b063ea1ec7fd21e8d9da7bfe22424b8ff6f51a1afa0e045f88d60ee58e8d65b3"} Dec 09 12:23:13 crc kubenswrapper[4970]: I1209 12:23:13.928346 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jhtq7" event={"ID":"70a13e7f-528a-42d6-aca8-c2b7ef94a8f7","Type":"ContainerStarted","Data":"916d26e5a827ac1ffad202b9600287442e14f24f2074f296fb3cc8db4b5db536"} Dec 09 12:23:13 crc kubenswrapper[4970]: I1209 12:23:13.928364 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jhtq7" event={"ID":"70a13e7f-528a-42d6-aca8-c2b7ef94a8f7","Type":"ContainerStarted","Data":"bc56fbb376063c7a88895f492acb5cc54676684f3924f1c3b374ebd57c12fb92"} Dec 09 12:23:13 crc kubenswrapper[4970]: I1209 12:23:13.929120 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-jhtq7" Dec 09 12:23:13 crc kubenswrapper[4970]: I1209 12:23:13.960945 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-jhtq7" podStartSLOduration=2.960921914 podStartE2EDuration="2.960921914s" podCreationTimestamp="2025-12-09 12:23:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:23:13.95521693 +0000 UTC m=+1006.515697981" watchObservedRunningTime="2025-12-09 12:23:13.960921914 +0000 UTC m=+1006.521402965" Dec 09 12:23:23 crc kubenswrapper[4970]: I1209 12:23:23.048811 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-jhtq7" Dec 09 12:23:25 crc kubenswrapper[4970]: I1209 12:23:25.035949 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-m4p9q" event={"ID":"9321dbf0-1bca-4851-a191-4b1edaa50a77","Type":"ContainerStarted","Data":"cc8978558ad8464d45b9304c90fc8ed6069f6404cdea2c45cec0f6246c72a0d5"} Dec 09 12:23:25 crc kubenswrapper[4970]: I1209 12:23:25.036567 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-m4p9q" Dec 09 12:23:25 crc kubenswrapper[4970]: I1209 12:23:25.038136 4970 generic.go:334] "Generic (PLEG): container finished" podID="34bec0ed-386a-4792-b315-133688468971" containerID="210b09dc5750d9aed444328136acafaea1450a287b9a506a19f1b5f4ffe87411" exitCode=0 Dec 09 12:23:25 crc kubenswrapper[4970]: I1209 12:23:25.038177 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l4f9b" 
event={"ID":"34bec0ed-386a-4792-b315-133688468971","Type":"ContainerDied","Data":"210b09dc5750d9aed444328136acafaea1450a287b9a506a19f1b5f4ffe87411"} Dec 09 12:23:25 crc kubenswrapper[4970]: I1209 12:23:25.056605 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-m4p9q" podStartSLOduration=3.226513465 podStartE2EDuration="14.056584991s" podCreationTimestamp="2025-12-09 12:23:11 +0000 UTC" firstStartedPulling="2025-12-09 12:23:12.492208534 +0000 UTC m=+1005.052689585" lastFinishedPulling="2025-12-09 12:23:23.32228005 +0000 UTC m=+1015.882761111" observedRunningTime="2025-12-09 12:23:25.052433629 +0000 UTC m=+1017.612914680" watchObservedRunningTime="2025-12-09 12:23:25.056584991 +0000 UTC m=+1017.617066042" Dec 09 12:23:26 crc kubenswrapper[4970]: I1209 12:23:26.046873 4970 generic.go:334] "Generic (PLEG): container finished" podID="34bec0ed-386a-4792-b315-133688468971" containerID="ebff6e517f073e50e281562f9e6c295a0d6f37cf8abe3a417296ba28c4e55e85" exitCode=0 Dec 09 12:23:26 crc kubenswrapper[4970]: I1209 12:23:26.046946 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l4f9b" event={"ID":"34bec0ed-386a-4792-b315-133688468971","Type":"ContainerDied","Data":"ebff6e517f073e50e281562f9e6c295a0d6f37cf8abe3a417296ba28c4e55e85"} Dec 09 12:23:27 crc kubenswrapper[4970]: I1209 12:23:27.056629 4970 generic.go:334] "Generic (PLEG): container finished" podID="34bec0ed-386a-4792-b315-133688468971" containerID="8b64606bbab023d55341f546fc348b08293e493d33628635fa55c95a52912606" exitCode=0 Dec 09 12:23:27 crc kubenswrapper[4970]: I1209 12:23:27.056689 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l4f9b" event={"ID":"34bec0ed-386a-4792-b315-133688468971","Type":"ContainerDied","Data":"8b64606bbab023d55341f546fc348b08293e493d33628635fa55c95a52912606"} Dec 09 12:23:28 crc kubenswrapper[4970]: I1209 12:23:28.065828 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l4f9b" event={"ID":"34bec0ed-386a-4792-b315-133688468971","Type":"ContainerStarted","Data":"1d54662092e3f706690bd2fb8170b3b167fec9dc025a35b9d3b4bf28d1cdc3af"} Dec 09 12:23:29 crc kubenswrapper[4970]: I1209 12:23:29.075588 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l4f9b" event={"ID":"34bec0ed-386a-4792-b315-133688468971","Type":"ContainerStarted","Data":"1b95cfe9a538324b50220859d6642e213cfb488108f46ac1a16561e7f143e8f5"} Dec 09 12:23:29 crc kubenswrapper[4970]: I1209 12:23:29.468057 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-fdgwx"] Dec 09 12:23:29 crc kubenswrapper[4970]: I1209 12:23:29.472600 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-fdgwx" Dec 09 12:23:29 crc kubenswrapper[4970]: I1209 12:23:29.476315 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-5xxmk" Dec 09 12:23:29 crc kubenswrapper[4970]: I1209 12:23:29.476395 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 09 12:23:29 crc kubenswrapper[4970]: I1209 12:23:29.476593 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 09 12:23:29 crc kubenswrapper[4970]: I1209 12:23:29.479484 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-fdgwx"] Dec 09 12:23:29 crc kubenswrapper[4970]: I1209 12:23:29.621789 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6d7g\" (UniqueName: \"kubernetes.io/projected/c72f6ed1-544f-42d0-b188-70905a3ea004-kube-api-access-x6d7g\") pod \"openstack-operator-index-fdgwx\" (UID: \"c72f6ed1-544f-42d0-b188-70905a3ea004\") " pod="openstack-operators/openstack-operator-index-fdgwx" Dec 09 12:23:29 crc kubenswrapper[4970]: I1209 12:23:29.723848 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6d7g\" (UniqueName: \"kubernetes.io/projected/c72f6ed1-544f-42d0-b188-70905a3ea004-kube-api-access-x6d7g\") pod \"openstack-operator-index-fdgwx\" (UID: \"c72f6ed1-544f-42d0-b188-70905a3ea004\") " pod="openstack-operators/openstack-operator-index-fdgwx" Dec 09 12:23:29 crc kubenswrapper[4970]: I1209 12:23:29.745392 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6d7g\" (UniqueName: \"kubernetes.io/projected/c72f6ed1-544f-42d0-b188-70905a3ea004-kube-api-access-x6d7g\") pod \"openstack-operator-index-fdgwx\" (UID: \"c72f6ed1-544f-42d0-b188-70905a3ea004\") " pod="openstack-operators/openstack-operator-index-fdgwx" Dec 09 12:23:29 crc kubenswrapper[4970]: I1209 12:23:29.791916 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-fdgwx"
Dec 09 12:23:30 crc kubenswrapper[4970]: I1209 12:23:30.087315 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l4f9b" event={"ID":"34bec0ed-386a-4792-b315-133688468971","Type":"ContainerStarted","Data":"72b28362c52158281e1ed89e22f6ff28b7e1f2cb0c39449da2c25fd1fd7e9459"}
Dec 09 12:23:30 crc kubenswrapper[4970]: I1209 12:23:30.299677 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-fdgwx"]
Dec 09 12:23:31 crc kubenswrapper[4970]: I1209 12:23:31.097849 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fdgwx" event={"ID":"c72f6ed1-544f-42d0-b188-70905a3ea004","Type":"ContainerStarted","Data":"d51c2fe1fe17e1592f8af7f2805f5a65db72fd3ccf6e7a363dc6e6684a7e29ba"}
Dec 09 12:23:31 crc kubenswrapper[4970]: I1209 12:23:31.102608 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l4f9b" event={"ID":"34bec0ed-386a-4792-b315-133688468971","Type":"ContainerStarted","Data":"fdf4fe430ef66ceaf804cde0aac53b49e559e2dff5c7f0d18fe2c3bccf458f80"}
Dec 09 12:23:31 crc kubenswrapper[4970]: I1209 12:23:31.102660 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l4f9b" event={"ID":"34bec0ed-386a-4792-b315-133688468971","Type":"ContainerStarted","Data":"03ba7722044e994570ec8d962ead45628026597e49d5f20a4af7ab4eda3b2dc2"}
Dec 09 12:23:31 crc kubenswrapper[4970]: I1209 12:23:31.102671 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-l4f9b" event={"ID":"34bec0ed-386a-4792-b315-133688468971","Type":"ContainerStarted","Data":"9e1b843ad08c6cec5a0cb4cc43adc606121e1c38e2cd79b37a8f550f40557c38"}
Dec 09 12:23:31 crc kubenswrapper[4970]: I1209 12:23:31.102779 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-l4f9b"
Dec 09 12:23:31 crc kubenswrapper[4970]: I1209 12:23:31.126355 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-l4f9b" podStartSLOduration=8.46941515 podStartE2EDuration="20.126338634s" podCreationTimestamp="2025-12-09 12:23:11 +0000 UTC" firstStartedPulling="2025-12-09 12:23:11.630125425 +0000 UTC m=+1004.190606476" lastFinishedPulling="2025-12-09 12:23:23.287048889 +0000 UTC m=+1015.847529960" observedRunningTime="2025-12-09 12:23:31.125691097 +0000 UTC m=+1023.686172148" watchObservedRunningTime="2025-12-09 12:23:31.126338634 +0000 UTC m=+1023.686819685"
Dec 09 12:23:31 crc kubenswrapper[4970]: I1209 12:23:31.492680 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-l4f9b"
Dec 09 12:23:31 crc kubenswrapper[4970]: I1209 12:23:31.539054 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-l4f9b"
Dec 09 12:23:31 crc kubenswrapper[4970]: I1209 12:23:31.558117 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-jz7r5"
Dec 09 12:23:34 crc kubenswrapper[4970]: I1209 12:23:34.663757 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-fdgwx"]
Dec 09 12:23:35 crc kubenswrapper[4970]: I1209 12:23:35.273896 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-pjlbc"]
Dec 09 12:23:35 crc kubenswrapper[4970]: I1209 12:23:35.275106 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-pjlbc"
Dec 09 12:23:35 crc kubenswrapper[4970]: I1209 12:23:35.290709 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-pjlbc"]
Dec 09 12:23:35 crc kubenswrapper[4970]: I1209 12:23:35.421746 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9569\" (UniqueName: \"kubernetes.io/projected/7013b411-c4c4-4706-b5f3-da18ffccb4e5-kube-api-access-k9569\") pod \"openstack-operator-index-pjlbc\" (UID: \"7013b411-c4c4-4706-b5f3-da18ffccb4e5\") " pod="openstack-operators/openstack-operator-index-pjlbc"
Dec 09 12:23:35 crc kubenswrapper[4970]: I1209 12:23:35.524292 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9569\" (UniqueName: \"kubernetes.io/projected/7013b411-c4c4-4706-b5f3-da18ffccb4e5-kube-api-access-k9569\") pod \"openstack-operator-index-pjlbc\" (UID: \"7013b411-c4c4-4706-b5f3-da18ffccb4e5\") " pod="openstack-operators/openstack-operator-index-pjlbc"
Dec 09 12:23:35 crc kubenswrapper[4970]: I1209 12:23:35.547079 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9569\" (UniqueName: \"kubernetes.io/projected/7013b411-c4c4-4706-b5f3-da18ffccb4e5-kube-api-access-k9569\") pod \"openstack-operator-index-pjlbc\" (UID: \"7013b411-c4c4-4706-b5f3-da18ffccb4e5\") " pod="openstack-operators/openstack-operator-index-pjlbc"
Dec 09 12:23:35 crc kubenswrapper[4970]: I1209 12:23:35.614166 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-pjlbc"
Dec 09 12:23:37 crc kubenswrapper[4970]: I1209 12:23:37.352454 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-pjlbc"]
Dec 09 12:23:41 crc kubenswrapper[4970]: I1209 12:23:41.184905 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pjlbc" event={"ID":"7013b411-c4c4-4706-b5f3-da18ffccb4e5","Type":"ContainerStarted","Data":"4e130339d5f8ac30f08b073c5dab13cde2e739a27519f19186e5a83f89c151f7"}
Dec 09 12:23:41 crc kubenswrapper[4970]: I1209 12:23:41.498229 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-l4f9b"
Dec 09 12:23:42 crc kubenswrapper[4970]: I1209 12:23:42.036170 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-m4p9q"
Dec 09 12:23:42 crc kubenswrapper[4970]: I1209 12:23:42.192120 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-pjlbc" event={"ID":"7013b411-c4c4-4706-b5f3-da18ffccb4e5","Type":"ContainerStarted","Data":"aee102382c7eeb0928ebd33a9a7c226ed7015beb3f72e165339b36455bf8b307"}
Dec 09 12:23:42 crc kubenswrapper[4970]: I1209 12:23:42.194980 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fdgwx" event={"ID":"c72f6ed1-544f-42d0-b188-70905a3ea004","Type":"ContainerStarted","Data":"75ae50d89cc5e5944c20c51288f2794e07c94a9a208c7f146935ba30202c7319"}
Dec 09 12:23:42 crc kubenswrapper[4970]: I1209 12:23:42.195071 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-fdgwx" podUID="c72f6ed1-544f-42d0-b188-70905a3ea004" containerName="registry-server" containerID="cri-o://75ae50d89cc5e5944c20c51288f2794e07c94a9a208c7f146935ba30202c7319" gracePeriod=2
Dec 09 12:23:42 crc kubenswrapper[4970]: I1209 12:23:42.211887 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-pjlbc" podStartSLOduration=7.060978835 podStartE2EDuration="7.21186737s" podCreationTimestamp="2025-12-09 12:23:35 +0000 UTC" firstStartedPulling="2025-12-09 12:23:40.903599242 +0000 UTC m=+1033.464080293" lastFinishedPulling="2025-12-09 12:23:41.054487777 +0000 UTC m=+1033.614968828" observedRunningTime="2025-12-09 12:23:42.208810327 +0000 UTC m=+1034.769291378" watchObservedRunningTime="2025-12-09 12:23:42.21186737 +0000 UTC m=+1034.772348421"
Dec 09 12:23:42 crc kubenswrapper[4970]: I1209 12:23:42.229508 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-fdgwx" podStartSLOduration=2.552867632 podStartE2EDuration="13.229489016s" podCreationTimestamp="2025-12-09 12:23:29 +0000 UTC" firstStartedPulling="2025-12-09 12:23:30.388636254 +0000 UTC m=+1022.949117305" lastFinishedPulling="2025-12-09 12:23:41.065257638 +0000 UTC m=+1033.625738689" observedRunningTime="2025-12-09 12:23:42.226883365 +0000 UTC m=+1034.787364426" watchObservedRunningTime="2025-12-09 12:23:42.229489016 +0000 UTC m=+1034.789970067"
Dec 09 12:23:42 crc kubenswrapper[4970]: I1209 12:23:42.691782 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-fdgwx"
Dec 09 12:23:42 crc kubenswrapper[4970]: I1209 12:23:42.872914 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6d7g\" (UniqueName: \"kubernetes.io/projected/c72f6ed1-544f-42d0-b188-70905a3ea004-kube-api-access-x6d7g\") pod \"c72f6ed1-544f-42d0-b188-70905a3ea004\" (UID: \"c72f6ed1-544f-42d0-b188-70905a3ea004\") "
Dec 09 12:23:42 crc kubenswrapper[4970]: I1209 12:23:42.880446 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c72f6ed1-544f-42d0-b188-70905a3ea004-kube-api-access-x6d7g" (OuterVolumeSpecName: "kube-api-access-x6d7g") pod "c72f6ed1-544f-42d0-b188-70905a3ea004" (UID: "c72f6ed1-544f-42d0-b188-70905a3ea004"). InnerVolumeSpecName "kube-api-access-x6d7g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:23:42 crc kubenswrapper[4970]: I1209 12:23:42.975306 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6d7g\" (UniqueName: \"kubernetes.io/projected/c72f6ed1-544f-42d0-b188-70905a3ea004-kube-api-access-x6d7g\") on node \"crc\" DevicePath \"\""
Dec 09 12:23:43 crc kubenswrapper[4970]: I1209 12:23:43.205332 4970 generic.go:334] "Generic (PLEG): container finished" podID="c72f6ed1-544f-42d0-b188-70905a3ea004" containerID="75ae50d89cc5e5944c20c51288f2794e07c94a9a208c7f146935ba30202c7319" exitCode=0
Dec 09 12:23:43 crc kubenswrapper[4970]: I1209 12:23:43.205427 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fdgwx" event={"ID":"c72f6ed1-544f-42d0-b188-70905a3ea004","Type":"ContainerDied","Data":"75ae50d89cc5e5944c20c51288f2794e07c94a9a208c7f146935ba30202c7319"}
Dec 09 12:23:43 crc kubenswrapper[4970]: I1209 12:23:43.205489 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fdgwx" event={"ID":"c72f6ed1-544f-42d0-b188-70905a3ea004","Type":"ContainerDied","Data":"d51c2fe1fe17e1592f8af7f2805f5a65db72fd3ccf6e7a363dc6e6684a7e29ba"}
Dec 09 12:23:43 crc kubenswrapper[4970]: I1209 12:23:43.205513 4970 scope.go:117] "RemoveContainer" containerID="75ae50d89cc5e5944c20c51288f2794e07c94a9a208c7f146935ba30202c7319"
Dec 09 12:23:43 crc kubenswrapper[4970]: I1209 12:23:43.205447 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-fdgwx"
Dec 09 12:23:43 crc kubenswrapper[4970]: I1209 12:23:43.230434 4970 scope.go:117] "RemoveContainer" containerID="75ae50d89cc5e5944c20c51288f2794e07c94a9a208c7f146935ba30202c7319"
Dec 09 12:23:43 crc kubenswrapper[4970]: E1209 12:23:43.232635 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75ae50d89cc5e5944c20c51288f2794e07c94a9a208c7f146935ba30202c7319\": container with ID starting with 75ae50d89cc5e5944c20c51288f2794e07c94a9a208c7f146935ba30202c7319 not found: ID does not exist" containerID="75ae50d89cc5e5944c20c51288f2794e07c94a9a208c7f146935ba30202c7319"
Dec 09 12:23:43 crc kubenswrapper[4970]: I1209 12:23:43.232689 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75ae50d89cc5e5944c20c51288f2794e07c94a9a208c7f146935ba30202c7319"} err="failed to get container status \"75ae50d89cc5e5944c20c51288f2794e07c94a9a208c7f146935ba30202c7319\": rpc error: code = NotFound desc = could not find container \"75ae50d89cc5e5944c20c51288f2794e07c94a9a208c7f146935ba30202c7319\": container with ID starting with 75ae50d89cc5e5944c20c51288f2794e07c94a9a208c7f146935ba30202c7319 not found: ID does not exist"
Dec 09 12:23:43 crc kubenswrapper[4970]: I1209 12:23:43.240356 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-fdgwx"]
Dec 09 12:23:43 crc kubenswrapper[4970]: I1209 12:23:43.246959 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-fdgwx"]
Dec 09 12:23:43 crc kubenswrapper[4970]: I1209 12:23:43.823866 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c72f6ed1-544f-42d0-b188-70905a3ea004" path="/var/lib/kubelet/pods/c72f6ed1-544f-42d0-b188-70905a3ea004/volumes"
Dec 09 12:23:45 crc kubenswrapper[4970]: I1209 12:23:45.615306 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-pjlbc"
Dec 09 12:23:45 crc kubenswrapper[4970]: I1209 12:23:45.615633 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-pjlbc"
Dec 09 12:23:45 crc kubenswrapper[4970]: I1209 12:23:45.652035 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-pjlbc"
Dec 09 12:23:46 crc kubenswrapper[4970]: I1209 12:23:46.272601 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-pjlbc"
Dec 09 12:24:00 crc kubenswrapper[4970]: I1209 12:24:00.120436 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38cs8h2"]
Dec 09 12:24:00 crc kubenswrapper[4970]: E1209 12:24:00.121215 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c72f6ed1-544f-42d0-b188-70905a3ea004" containerName="registry-server"
Dec 09 12:24:00 crc kubenswrapper[4970]: I1209 12:24:00.121229 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="c72f6ed1-544f-42d0-b188-70905a3ea004" containerName="registry-server"
Dec 09 12:24:00 crc kubenswrapper[4970]: I1209 12:24:00.121460 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="c72f6ed1-544f-42d0-b188-70905a3ea004" containerName="registry-server"
Dec 09 12:24:00 crc kubenswrapper[4970]: I1209 12:24:00.122804 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38cs8h2"
Dec 09 12:24:00 crc kubenswrapper[4970]: I1209 12:24:00.131423 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38cs8h2"]
Dec 09 12:24:00 crc kubenswrapper[4970]: I1209 12:24:00.183121 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-jh8x7"
Dec 09 12:24:00 crc kubenswrapper[4970]: I1209 12:24:00.290602 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3ea53e6e-c8b4-430b-a285-fe671716853a-util\") pod \"9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38cs8h2\" (UID: \"3ea53e6e-c8b4-430b-a285-fe671716853a\") " pod="openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38cs8h2"
Dec 09 12:24:00 crc kubenswrapper[4970]: I1209 12:24:00.290693 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3ea53e6e-c8b4-430b-a285-fe671716853a-bundle\") pod \"9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38cs8h2\" (UID: \"3ea53e6e-c8b4-430b-a285-fe671716853a\") " pod="openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38cs8h2"
Dec 09 12:24:00 crc kubenswrapper[4970]: I1209 12:24:00.290752 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf7dz\" (UniqueName: \"kubernetes.io/projected/3ea53e6e-c8b4-430b-a285-fe671716853a-kube-api-access-lf7dz\") pod \"9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38cs8h2\" (UID: \"3ea53e6e-c8b4-430b-a285-fe671716853a\") " pod="openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38cs8h2"
Dec 09 12:24:00 crc kubenswrapper[4970]: I1209 12:24:00.392884 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3ea53e6e-c8b4-430b-a285-fe671716853a-util\") pod \"9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38cs8h2\" (UID: \"3ea53e6e-c8b4-430b-a285-fe671716853a\") " pod="openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38cs8h2" Dec 09 12:24:00 crc kubenswrapper[4970]: I1209 12:24:00.393002 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3ea53e6e-c8b4-430b-a285-fe671716853a-bundle\") pod \"9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38cs8h2\" (UID: \"3ea53e6e-c8b4-430b-a285-fe671716853a\") " pod="openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38cs8h2" Dec 09 12:24:00 crc kubenswrapper[4970]: I1209 12:24:00.393061 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf7dz\" (UniqueName: \"kubernetes.io/projected/3ea53e6e-c8b4-430b-a285-fe671716853a-kube-api-access-lf7dz\") pod \"9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38cs8h2\" (UID: \"3ea53e6e-c8b4-430b-a285-fe671716853a\") " pod="openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38cs8h2" Dec 09 12:24:00 crc kubenswrapper[4970]: I1209 12:24:00.393474 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3ea53e6e-c8b4-430b-a285-fe671716853a-util\") pod \"9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38cs8h2\" (UID: \"3ea53e6e-c8b4-430b-a285-fe671716853a\") " pod="openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38cs8h2" Dec 09 12:24:00 crc kubenswrapper[4970]: I1209 12:24:00.393502 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3ea53e6e-c8b4-430b-a285-fe671716853a-bundle\") pod \"9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38cs8h2\" (UID: \"3ea53e6e-c8b4-430b-a285-fe671716853a\") " pod="openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38cs8h2" Dec 09 12:24:00 crc kubenswrapper[4970]: I1209 12:24:00.420651 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf7dz\" (UniqueName: \"kubernetes.io/projected/3ea53e6e-c8b4-430b-a285-fe671716853a-kube-api-access-lf7dz\") pod \"9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38cs8h2\" (UID: \"3ea53e6e-c8b4-430b-a285-fe671716853a\") " pod="openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38cs8h2" Dec 09 12:24:00 crc kubenswrapper[4970]: I1209 12:24:00.505484 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38cs8h2" Dec 09 12:24:00 crc kubenswrapper[4970]: I1209 12:24:00.980864 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38cs8h2"] Dec 09 12:24:01 crc kubenswrapper[4970]: I1209 12:24:01.379086 4970 generic.go:334] "Generic (PLEG): container finished" podID="3ea53e6e-c8b4-430b-a285-fe671716853a" containerID="5461992ef3d01a35676300acf4d7cda4735f979ffeef1b50476bde5dac480f5c" exitCode=0 Dec 09 12:24:01 crc kubenswrapper[4970]: I1209 12:24:01.379135 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38cs8h2" event={"ID":"3ea53e6e-c8b4-430b-a285-fe671716853a","Type":"ContainerDied","Data":"5461992ef3d01a35676300acf4d7cda4735f979ffeef1b50476bde5dac480f5c"} Dec 09 12:24:01 crc kubenswrapper[4970]: I1209 12:24:01.379166 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38cs8h2" event={"ID":"3ea53e6e-c8b4-430b-a285-fe671716853a","Type":"ContainerStarted","Data":"7263e1b7ddf2c48d27dc8637a9ffb7af76a1d988f07a249ceac193f53b25684d"} Dec 09 12:24:02 crc kubenswrapper[4970]: I1209 12:24:02.389461 4970 generic.go:334] "Generic (PLEG): container finished" podID="3ea53e6e-c8b4-430b-a285-fe671716853a" containerID="4c829360efc0e54c3ec379010aa170f0e4e88b93b48d49b2b7e1baa2381163bc" exitCode=0 Dec 09 12:24:02 crc kubenswrapper[4970]: I1209 12:24:02.389598 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38cs8h2" event={"ID":"3ea53e6e-c8b4-430b-a285-fe671716853a","Type":"ContainerDied","Data":"4c829360efc0e54c3ec379010aa170f0e4e88b93b48d49b2b7e1baa2381163bc"} Dec 09 12:24:03 crc kubenswrapper[4970]: I1209 12:24:03.405387 4970 generic.go:334] "Generic (PLEG): container finished" podID="3ea53e6e-c8b4-430b-a285-fe671716853a" containerID="b3dec01eccf2f63ca7d2a7b4c73d8d6b847482c918eb89ac877be98a966c36e8" exitCode=0 Dec 09 12:24:03 crc kubenswrapper[4970]: I1209 12:24:03.405482 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38cs8h2" event={"ID":"3ea53e6e-c8b4-430b-a285-fe671716853a","Type":"ContainerDied","Data":"b3dec01eccf2f63ca7d2a7b4c73d8d6b847482c918eb89ac877be98a966c36e8"} Dec 09 12:24:04 crc kubenswrapper[4970]: I1209 12:24:04.798895 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38cs8h2" Dec 09 12:24:04 crc kubenswrapper[4970]: I1209 12:24:04.886831 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3ea53e6e-c8b4-430b-a285-fe671716853a-bundle\") pod \"3ea53e6e-c8b4-430b-a285-fe671716853a\" (UID: \"3ea53e6e-c8b4-430b-a285-fe671716853a\") " Dec 09 12:24:04 crc kubenswrapper[4970]: I1209 12:24:04.886916 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3ea53e6e-c8b4-430b-a285-fe671716853a-util\") pod \"3ea53e6e-c8b4-430b-a285-fe671716853a\" (UID: \"3ea53e6e-c8b4-430b-a285-fe671716853a\") " Dec 09 12:24:04 crc kubenswrapper[4970]: I1209 12:24:04.887032 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lf7dz\" (UniqueName: \"kubernetes.io/projected/3ea53e6e-c8b4-430b-a285-fe671716853a-kube-api-access-lf7dz\") pod \"3ea53e6e-c8b4-430b-a285-fe671716853a\" (UID: \"3ea53e6e-c8b4-430b-a285-fe671716853a\") " Dec 09 12:24:04 crc kubenswrapper[4970]: I1209 12:24:04.887842 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ea53e6e-c8b4-430b-a285-fe671716853a-bundle" (OuterVolumeSpecName: "bundle") pod "3ea53e6e-c8b4-430b-a285-fe671716853a" (UID: "3ea53e6e-c8b4-430b-a285-fe671716853a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:24:04 crc kubenswrapper[4970]: I1209 12:24:04.899461 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ea53e6e-c8b4-430b-a285-fe671716853a-kube-api-access-lf7dz" (OuterVolumeSpecName: "kube-api-access-lf7dz") pod "3ea53e6e-c8b4-430b-a285-fe671716853a" (UID: "3ea53e6e-c8b4-430b-a285-fe671716853a"). InnerVolumeSpecName "kube-api-access-lf7dz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:24:04 crc kubenswrapper[4970]: I1209 12:24:04.900941 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ea53e6e-c8b4-430b-a285-fe671716853a-util" (OuterVolumeSpecName: "util") pod "3ea53e6e-c8b4-430b-a285-fe671716853a" (UID: "3ea53e6e-c8b4-430b-a285-fe671716853a"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:24:04 crc kubenswrapper[4970]: I1209 12:24:04.988946 4970 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3ea53e6e-c8b4-430b-a285-fe671716853a-util\") on node \"crc\" DevicePath \"\"" Dec 09 12:24:04 crc kubenswrapper[4970]: I1209 12:24:04.989002 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lf7dz\" (UniqueName: \"kubernetes.io/projected/3ea53e6e-c8b4-430b-a285-fe671716853a-kube-api-access-lf7dz\") on node \"crc\" DevicePath \"\"" Dec 09 12:24:04 crc kubenswrapper[4970]: I1209 12:24:04.989024 4970 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3ea53e6e-c8b4-430b-a285-fe671716853a-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:24:05 crc kubenswrapper[4970]: I1209 12:24:05.430519 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38cs8h2" event={"ID":"3ea53e6e-c8b4-430b-a285-fe671716853a","Type":"ContainerDied","Data":"7263e1b7ddf2c48d27dc8637a9ffb7af76a1d988f07a249ceac193f53b25684d"} Dec 09 12:24:05 crc kubenswrapper[4970]: I1209 12:24:05.430586 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7263e1b7ddf2c48d27dc8637a9ffb7af76a1d988f07a249ceac193f53b25684d" Dec 09 12:24:05 crc kubenswrapper[4970]: I1209 12:24:05.430706 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38cs8h2" Dec 09 12:24:09 crc kubenswrapper[4970]: I1209 12:24:09.245649 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6979fbd8bc-8wfhf"] Dec 09 12:24:09 crc kubenswrapper[4970]: E1209 12:24:09.246532 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ea53e6e-c8b4-430b-a285-fe671716853a" containerName="util" Dec 09 12:24:09 crc kubenswrapper[4970]: I1209 12:24:09.246549 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea53e6e-c8b4-430b-a285-fe671716853a" containerName="util" Dec 09 12:24:09 crc kubenswrapper[4970]: E1209 12:24:09.246569 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ea53e6e-c8b4-430b-a285-fe671716853a" containerName="extract" Dec 09 12:24:09 crc kubenswrapper[4970]: I1209 12:24:09.246575 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea53e6e-c8b4-430b-a285-fe671716853a" containerName="extract" Dec 09 12:24:09 crc kubenswrapper[4970]: E1209 12:24:09.246599 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ea53e6e-c8b4-430b-a285-fe671716853a" containerName="pull" Dec 09 12:24:09 crc kubenswrapper[4970]: I1209 12:24:09.246606 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea53e6e-c8b4-430b-a285-fe671716853a" containerName="pull" Dec 09 12:24:09 crc kubenswrapper[4970]: I1209 12:24:09.246742 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ea53e6e-c8b4-430b-a285-fe671716853a" containerName="extract" Dec 09 12:24:09 crc kubenswrapper[4970]: I1209 12:24:09.247357 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-6979fbd8bc-8wfhf" Dec 09 12:24:09 crc kubenswrapper[4970]: I1209 12:24:09.249794 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-jcj5g" Dec 09 12:24:09 crc kubenswrapper[4970]: I1209 12:24:09.276484 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6979fbd8bc-8wfhf"] Dec 09 12:24:09 crc kubenswrapper[4970]: I1209 12:24:09.357966 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvrm8\" (UniqueName: \"kubernetes.io/projected/b972f2ed-23f7-46f1-85c1-ddc586fee0a6-kube-api-access-jvrm8\") pod \"openstack-operator-controller-operator-6979fbd8bc-8wfhf\" (UID: \"b972f2ed-23f7-46f1-85c1-ddc586fee0a6\") " pod="openstack-operators/openstack-operator-controller-operator-6979fbd8bc-8wfhf" Dec 09 12:24:09 crc kubenswrapper[4970]: I1209 12:24:09.459188 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvrm8\" (UniqueName: \"kubernetes.io/projected/b972f2ed-23f7-46f1-85c1-ddc586fee0a6-kube-api-access-jvrm8\") pod \"openstack-operator-controller-operator-6979fbd8bc-8wfhf\" (UID: \"b972f2ed-23f7-46f1-85c1-ddc586fee0a6\") " pod="openstack-operators/openstack-operator-controller-operator-6979fbd8bc-8wfhf" Dec 09 12:24:09 crc kubenswrapper[4970]: I1209 12:24:09.482423 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvrm8\" (UniqueName: \"kubernetes.io/projected/b972f2ed-23f7-46f1-85c1-ddc586fee0a6-kube-api-access-jvrm8\") pod \"openstack-operator-controller-operator-6979fbd8bc-8wfhf\" (UID: \"b972f2ed-23f7-46f1-85c1-ddc586fee0a6\") " pod="openstack-operators/openstack-operator-controller-operator-6979fbd8bc-8wfhf" Dec 09 12:24:09 crc kubenswrapper[4970]: I1209 12:24:09.564142 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-6979fbd8bc-8wfhf" Dec 09 12:24:10 crc kubenswrapper[4970]: I1209 12:24:10.112205 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-6979fbd8bc-8wfhf"] Dec 09 12:24:10 crc kubenswrapper[4970]: I1209 12:24:10.474156 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6979fbd8bc-8wfhf" event={"ID":"b972f2ed-23f7-46f1-85c1-ddc586fee0a6","Type":"ContainerStarted","Data":"d751476010eeab93b1672ba1f751b6fca315d177b4cb36b64381608fa51e5a5e"} Dec 09 12:24:17 crc kubenswrapper[4970]: I1209 12:24:17.537700 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-6979fbd8bc-8wfhf" event={"ID":"b972f2ed-23f7-46f1-85c1-ddc586fee0a6","Type":"ContainerStarted","Data":"acbd064886bc008714f9483c03d34b779bed57f492f29aea815b9d0fa47c709b"} Dec 09 12:24:17 crc kubenswrapper[4970]: I1209 12:24:17.538350 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-6979fbd8bc-8wfhf" Dec 09 12:24:17 crc kubenswrapper[4970]: I1209 12:24:17.570385 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-6979fbd8bc-8wfhf" podStartSLOduration=1.5786848949999999 podStartE2EDuration="8.570369487s" podCreationTimestamp="2025-12-09 12:24:09 +0000 UTC" firstStartedPulling="2025-12-09 12:24:10.113101674 +0000 UTC m=+1062.673582725" lastFinishedPulling="2025-12-09 12:24:17.104786266 +0000 UTC m=+1069.665267317" observedRunningTime="2025-12-09 12:24:17.564667601 +0000 UTC m=+1070.125148652" watchObservedRunningTime="2025-12-09 12:24:17.570369487 +0000 UTC m=+1070.130850538" Dec 09 12:24:29 crc kubenswrapper[4970]: I1209 12:24:29.566938 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-6979fbd8bc-8wfhf" Dec 09 12:24:46 crc kubenswrapper[4970]: I1209 12:24:46.011373 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:24:46 crc kubenswrapper[4970]: I1209 12:24:46.013030 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.350959 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-jxd72"] Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.353351 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-jxd72" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.356687 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-z7xkf" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.358926 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-hgtz4"] Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.360928 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-hgtz4" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.363359 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-tthzw" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.368630 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-t9b58"] Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.370587 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-t9b58" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.374930 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-bbbs7" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.380638 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-jxd72"] Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.400737 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-t9b58"] Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.436803 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-hgtz4"] Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.456315 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9q7rj"] Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.457994 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9q7rj" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.464809 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-kvm2t" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.465000 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-rcr7g"] Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.467458 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-rcr7g" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.473562 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-spkhx" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.475784 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-vf6rm"] Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.480691 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-vf6rm" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.481715 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9q7rj"] Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.486401 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-4n6w7" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.488023 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-vf6rm"] Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.493808 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-smd4b"] Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.496670 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-smd4b" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.504783 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-rcr7g"] Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.510569 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.510890 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-4qtrx" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.511200 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-smd4b"] Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.528956 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw29m\" (UniqueName: \"kubernetes.io/projected/ccc49957-0ec2-4fa0-b2c0-fd86af0aa27e-kube-api-access-rw29m\") pod \"horizon-operator-controller-manager-68c6d99b8f-vf6rm\" (UID: \"ccc49957-0ec2-4fa0-b2c0-fd86af0aa27e\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-vf6rm" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.529009 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjmzx\" (UniqueName: \"kubernetes.io/projected/9a6da1be-c547-49c2-839c-aa549a5bb32b-kube-api-access-jjmzx\") pod \"barbican-operator-controller-manager-7d9dfd778-hgtz4\" (UID: \"9a6da1be-c547-49c2-839c-aa549a5bb32b\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-hgtz4" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.529042 4970 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdfj6\" (UniqueName: \"kubernetes.io/projected/58aea9ad-c500-4d8b-ae24-72d3b76e2c93-kube-api-access-zdfj6\") pod \"infra-operator-controller-manager-78d48bff9d-smd4b\" (UID: \"58aea9ad-c500-4d8b-ae24-72d3b76e2c93\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-smd4b" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.529071 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffpxr\" (UniqueName: \"kubernetes.io/projected/c60ee2ea-0489-422a-b829-e20040144965-kube-api-access-ffpxr\") pod \"glance-operator-controller-manager-5697bb5779-rcr7g\" (UID: \"c60ee2ea-0489-422a-b829-e20040144965\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-rcr7g" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.529132 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58aea9ad-c500-4d8b-ae24-72d3b76e2c93-cert\") pod \"infra-operator-controller-manager-78d48bff9d-smd4b\" (UID: \"58aea9ad-c500-4d8b-ae24-72d3b76e2c93\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-smd4b" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.529173 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfkmr\" (UniqueName: \"kubernetes.io/projected/774ea159-6ac5-4997-8630-db954e22ac28-kube-api-access-gfkmr\") pod \"designate-operator-controller-manager-697fb699cf-t9b58\" (UID: \"774ea159-6ac5-4997-8630-db954e22ac28\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-t9b58" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.529239 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfrzd\" (UniqueName: \"kubernetes.io/projected/4190e9a5-5bac-4645-a59d-5b4d6308f751-kube-api-access-mfrzd\") pod \"cinder-operator-controller-manager-6c677c69b-jxd72\" (UID: \"4190e9a5-5bac-4645-a59d-5b4d6308f751\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-jxd72" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.529301 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9fnr\" (UniqueName: \"kubernetes.io/projected/0b0a360a-e011-471c-abfa-6b72d7bf3074-kube-api-access-f9fnr\") pod \"heat-operator-controller-manager-5f64f6f8bb-9q7rj\" (UID: \"0b0a360a-e011-471c-abfa-6b72d7bf3074\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9q7rj" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.529989 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-rcmhf"] Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.538181 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-rcmhf" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.554463 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-qcks4" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.560300 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-dh6nb"] Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.621941 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-967d97867-dh6nb" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.630837 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-468mv\" (UniqueName: \"kubernetes.io/projected/e0b99715-61ea-4c11-b1df-814886d310a2-kube-api-access-468mv\") pod \"keystone-operator-controller-manager-7765d96ddf-rcmhf\" (UID: \"e0b99715-61ea-4c11-b1df-814886d310a2\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-rcmhf" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.630930 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rw29m\" (UniqueName: \"kubernetes.io/projected/ccc49957-0ec2-4fa0-b2c0-fd86af0aa27e-kube-api-access-rw29m\") pod \"horizon-operator-controller-manager-68c6d99b8f-vf6rm\" (UID: \"ccc49957-0ec2-4fa0-b2c0-fd86af0aa27e\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-vf6rm" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.630950 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjmzx\" (UniqueName: \"kubernetes.io/projected/9a6da1be-c547-49c2-839c-aa549a5bb32b-kube-api-access-jjmzx\") pod \"barbican-operator-controller-manager-7d9dfd778-hgtz4\" (UID: \"9a6da1be-c547-49c2-839c-aa549a5bb32b\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-hgtz4" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.630974 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdfj6\" (UniqueName: \"kubernetes.io/projected/58aea9ad-c500-4d8b-ae24-72d3b76e2c93-kube-api-access-zdfj6\") pod \"infra-operator-controller-manager-78d48bff9d-smd4b\" (UID: \"58aea9ad-c500-4d8b-ae24-72d3b76e2c93\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-smd4b" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.630999 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffpxr\" (UniqueName: \"kubernetes.io/projected/c60ee2ea-0489-422a-b829-e20040144965-kube-api-access-ffpxr\") pod \"glance-operator-controller-manager-5697bb5779-rcr7g\" (UID: \"c60ee2ea-0489-422a-b829-e20040144965\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-rcr7g" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.631047 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58aea9ad-c500-4d8b-ae24-72d3b76e2c93-cert\") pod \"infra-operator-controller-manager-78d48bff9d-smd4b\" (UID: \"58aea9ad-c500-4d8b-ae24-72d3b76e2c93\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-smd4b" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.631075 4970 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gfkmr\" (UniqueName: \"kubernetes.io/projected/774ea159-6ac5-4997-8630-db954e22ac28-kube-api-access-gfkmr\") pod \"designate-operator-controller-manager-697fb699cf-t9b58\" (UID: \"774ea159-6ac5-4997-8630-db954e22ac28\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-t9b58" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.631111 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hdsn\" (UniqueName: \"kubernetes.io/projected/c0d580f7-424c-482e-a3c5-47ef1b9a6b79-kube-api-access-8hdsn\") pod \"ironic-operator-controller-manager-967d97867-dh6nb\" (UID: \"c0d580f7-424c-482e-a3c5-47ef1b9a6b79\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-dh6nb" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.631144 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfrzd\" (UniqueName: \"kubernetes.io/projected/4190e9a5-5bac-4645-a59d-5b4d6308f751-kube-api-access-mfrzd\") pod \"cinder-operator-controller-manager-6c677c69b-jxd72\" (UID: \"4190e9a5-5bac-4645-a59d-5b4d6308f751\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-jxd72" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.631174 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9fnr\" (UniqueName: \"kubernetes.io/projected/0b0a360a-e011-471c-abfa-6b72d7bf3074-kube-api-access-f9fnr\") pod \"heat-operator-controller-manager-5f64f6f8bb-9q7rj\" (UID: \"0b0a360a-e011-471c-abfa-6b72d7bf3074\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9q7rj" Dec 09 12:24:49 crc kubenswrapper[4970]: E1209 12:24:49.635362 4970 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 09 12:24:49 crc kubenswrapper[4970]: E1209 12:24:49.635471 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58aea9ad-c500-4d8b-ae24-72d3b76e2c93-cert podName:58aea9ad-c500-4d8b-ae24-72d3b76e2c93 nodeName:}" failed. No retries permitted until 2025-12-09 12:24:50.135449794 +0000 UTC m=+1102.695930845 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58aea9ad-c500-4d8b-ae24-72d3b76e2c93-cert") pod "infra-operator-controller-manager-78d48bff9d-smd4b" (UID: "58aea9ad-c500-4d8b-ae24-72d3b76e2c93") : secret "infra-operator-webhook-server-cert" not found Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.640361 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-6mpjd" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.696692 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffpxr\" (UniqueName: \"kubernetes.io/projected/c60ee2ea-0489-422a-b829-e20040144965-kube-api-access-ffpxr\") pod \"glance-operator-controller-manager-5697bb5779-rcr7g\" (UID: \"c60ee2ea-0489-422a-b829-e20040144965\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-rcr7g" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.705707 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjmzx\" (UniqueName: \"kubernetes.io/projected/9a6da1be-c547-49c2-839c-aa549a5bb32b-kube-api-access-jjmzx\") pod \"barbican-operator-controller-manager-7d9dfd778-hgtz4\" (UID: \"9a6da1be-c547-49c2-839c-aa549a5bb32b\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-hgtz4" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.706597 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw29m\" (UniqueName: \"kubernetes.io/projected/ccc49957-0ec2-4fa0-b2c0-fd86af0aa27e-kube-api-access-rw29m\") pod \"horizon-operator-controller-manager-68c6d99b8f-vf6rm\" (UID: \"ccc49957-0ec2-4fa0-b2c0-fd86af0aa27e\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-vf6rm" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.709623 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9fnr\" (UniqueName: \"kubernetes.io/projected/0b0a360a-e011-471c-abfa-6b72d7bf3074-kube-api-access-f9fnr\") pod \"heat-operator-controller-manager-5f64f6f8bb-9q7rj\" (UID: \"0b0a360a-e011-471c-abfa-6b72d7bf3074\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9q7rj" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.714359 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-hgtz4" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.718134 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfkmr\" (UniqueName: \"kubernetes.io/projected/774ea159-6ac5-4997-8630-db954e22ac28-kube-api-access-gfkmr\") pod \"designate-operator-controller-manager-697fb699cf-t9b58\" (UID: \"774ea159-6ac5-4997-8630-db954e22ac28\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-t9b58" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.718442 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdfj6\" (UniqueName: \"kubernetes.io/projected/58aea9ad-c500-4d8b-ae24-72d3b76e2c93-kube-api-access-zdfj6\") pod \"infra-operator-controller-manager-78d48bff9d-smd4b\" (UID: \"58aea9ad-c500-4d8b-ae24-72d3b76e2c93\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-smd4b" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.719200 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfrzd\" (UniqueName: \"kubernetes.io/projected/4190e9a5-5bac-4645-a59d-5b4d6308f751-kube-api-access-mfrzd\") pod \"cinder-operator-controller-manager-6c677c69b-jxd72\" (UID: \"4190e9a5-5bac-4645-a59d-5b4d6308f751\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-jxd72" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.721778 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-rcmhf"] Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.732342 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hdsn\" (UniqueName: \"kubernetes.io/projected/c0d580f7-424c-482e-a3c5-47ef1b9a6b79-kube-api-access-8hdsn\") pod \"ironic-operator-controller-manager-967d97867-dh6nb\" (UID: \"c0d580f7-424c-482e-a3c5-47ef1b9a6b79\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-dh6nb" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.732438 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-468mv\" (UniqueName: \"kubernetes.io/projected/e0b99715-61ea-4c11-b1df-814886d310a2-kube-api-access-468mv\") pod \"keystone-operator-controller-manager-7765d96ddf-rcmhf\" (UID: \"e0b99715-61ea-4c11-b1df-814886d310a2\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-rcmhf" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.732755 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-t9b58" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.753526 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5sz7k"] Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.755206 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5sz7k" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.761002 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-x7nhk" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.768289 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-dh6nb"] Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.769743 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-468mv\" (UniqueName: \"kubernetes.io/projected/e0b99715-61ea-4c11-b1df-814886d310a2-kube-api-access-468mv\") pod \"keystone-operator-controller-manager-7765d96ddf-rcmhf\" (UID: \"e0b99715-61ea-4c11-b1df-814886d310a2\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-rcmhf" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.797274 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9q7rj" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.817796 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-rcr7g" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.829507 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-vf6rm" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.872621 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5sz7k"] Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.872674 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vb4rz"] Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.887885 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-rt8fj"] Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.889054 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vb4rz" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.891070 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-9w2ts" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.893304 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-rt8fj"] Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.893808 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-rt8fj" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.895943 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-k8ppn" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.914408 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-4w8vj"] Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.920793 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-4w8vj" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.925133 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vb4rz"] Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.933756 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-rcmhf" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.941748 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-2sf28" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.943667 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-4w8vj"] Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.944921 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk87c\" (UniqueName: \"kubernetes.io/projected/3c4587c1-58af-46c3-b886-59f5e44220fb-kube-api-access-wk87c\") pod \"manila-operator-controller-manager-5b5fd79c9c-5sz7k\" (UID: \"3c4587c1-58af-46c3-b886-59f5e44220fb\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5sz7k" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.954282 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-pvc2v"] Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.956300 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-pvc2v" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.962726 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hdsn\" (UniqueName: \"kubernetes.io/projected/c0d580f7-424c-482e-a3c5-47ef1b9a6b79-kube-api-access-8hdsn\") pod \"ironic-operator-controller-manager-967d97867-dh6nb\" (UID: \"c0d580f7-424c-482e-a3c5-47ef1b9a6b79\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-dh6nb" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.967501 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-dgwlp" Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.983684 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-pvc2v"] Dec 09 12:24:49 crc kubenswrapper[4970]: I1209 12:24:49.995644 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-jxd72" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.003782 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8r88z"] Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.005449 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8r88z" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.008179 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.014990 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-4z5dh" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.015469 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-j6znq"] Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.019281 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-j6znq" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.021562 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-pvp5t" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.035419 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-zhvtf"] Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.037239 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-zhvtf" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.042712 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-9djlh" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.044337 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-j6znq"] Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.045955 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nv59\" (UniqueName: \"kubernetes.io/projected/ddb7e093-4817-4ec2-9f81-9779ea2dddc9-kube-api-access-4nv59\") pod \"mariadb-operator-controller-manager-79c8c4686c-rt8fj\" (UID: \"ddb7e093-4817-4ec2-9f81-9779ea2dddc9\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-rt8fj" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.045987 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twzsw\" (UniqueName: \"kubernetes.io/projected/9b238f80-d845-4791-a567-08f03974f612-kube-api-access-twzsw\") pod \"nova-operator-controller-manager-697bc559fc-4w8vj\" (UID: \"9b238f80-d845-4791-a567-08f03974f612\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-4w8vj" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.046059 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk87c\" (UniqueName: \"kubernetes.io/projected/3c4587c1-58af-46c3-b886-59f5e44220fb-kube-api-access-wk87c\") pod \"manila-operator-controller-manager-5b5fd79c9c-5sz7k\" (UID: \"3c4587c1-58af-46c3-b886-59f5e44220fb\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5sz7k" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.046144 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-b4mmm\" (UniqueName: \"kubernetes.io/projected/13a1643a-f17b-435a-8ce6-60f253571bf2-kube-api-access-b4mmm\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-vb4rz\" (UID: \"13a1643a-f17b-435a-8ce6-60f253571bf2\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vb4rz" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.046178 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v5lc\" (UniqueName: \"kubernetes.io/projected/d0573bd9-628b-42e9-a46c-5c8b47bd977f-kube-api-access-2v5lc\") pod \"octavia-operator-controller-manager-998648c74-pvc2v\" (UID: \"d0573bd9-628b-42e9-a46c-5c8b47bd977f\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-pvc2v" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.052922 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8r88z"] Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.069187 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-zhvtf"] Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.076800 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-797ff5dd46-9dm7h"] Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.078193 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-797ff5dd46-9dm7h" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.086486 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-v6bl6" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.103539 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk87c\" (UniqueName: \"kubernetes.io/projected/3c4587c1-58af-46c3-b886-59f5e44220fb-kube-api-access-wk87c\") pod \"manila-operator-controller-manager-5b5fd79c9c-5sz7k\" (UID: \"3c4587c1-58af-46c3-b886-59f5e44220fb\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5sz7k" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.103600 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-kg572"] Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.104900 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-kg572" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.109771 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-967d97867-dh6nb" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.119622 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-jd86g" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.148305 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58aea9ad-c500-4d8b-ae24-72d3b76e2c93-cert\") pod \"infra-operator-controller-manager-78d48bff9d-smd4b\" (UID: \"58aea9ad-c500-4d8b-ae24-72d3b76e2c93\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-smd4b" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.148360 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/64b64284-dc99-424f-959f-2ed95a4ff4be-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f8r88z\" (UID: \"64b64284-dc99-424f-959f-2ed95a4ff4be\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8r88z" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.148405 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cjc6\" (UniqueName: \"kubernetes.io/projected/64b64284-dc99-424f-959f-2ed95a4ff4be-kube-api-access-5cjc6\") pod \"openstack-baremetal-operator-controller-manager-84b575879f8r88z\" (UID: \"64b64284-dc99-424f-959f-2ed95a4ff4be\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8r88z" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.148444 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7jdp\" (UniqueName: \"kubernetes.io/projected/c0d05cf0-6d0f-423b-84c7-6ec1ef1cacc7-kube-api-access-l7jdp\") pod \"placement-operator-controller-manager-78f8948974-zhvtf\" (UID: \"c0d05cf0-6d0f-423b-84c7-6ec1ef1cacc7\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-zhvtf" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.148471 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4mmm\" (UniqueName: \"kubernetes.io/projected/13a1643a-f17b-435a-8ce6-60f253571bf2-kube-api-access-b4mmm\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-vb4rz\" (UID: \"13a1643a-f17b-435a-8ce6-60f253571bf2\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vb4rz" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.148495 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dzdd\" (UniqueName: \"kubernetes.io/projected/661d70f4-459d-44fe-874b-e24f33654af6-kube-api-access-6dzdd\") pod \"ovn-operator-controller-manager-b6456fdb6-j6znq\" (UID: \"661d70f4-459d-44fe-874b-e24f33654af6\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-j6znq" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.148525 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v5lc\" (UniqueName: \"kubernetes.io/projected/d0573bd9-628b-42e9-a46c-5c8b47bd977f-kube-api-access-2v5lc\") pod \"octavia-operator-controller-manager-998648c74-pvc2v\" (UID: \"d0573bd9-628b-42e9-a46c-5c8b47bd977f\") " 
pod="openstack-operators/octavia-operator-controller-manager-998648c74-pvc2v" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.148548 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nv59\" (UniqueName: \"kubernetes.io/projected/ddb7e093-4817-4ec2-9f81-9779ea2dddc9-kube-api-access-4nv59\") pod \"mariadb-operator-controller-manager-79c8c4686c-rt8fj\" (UID: \"ddb7e093-4817-4ec2-9f81-9779ea2dddc9\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-rt8fj" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.148566 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twzsw\" (UniqueName: \"kubernetes.io/projected/9b238f80-d845-4791-a567-08f03974f612-kube-api-access-twzsw\") pod \"nova-operator-controller-manager-697bc559fc-4w8vj\" (UID: \"9b238f80-d845-4791-a567-08f03974f612\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-4w8vj" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.149639 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5sz7k" Dec 09 12:24:50 crc kubenswrapper[4970]: E1209 12:24:50.150745 4970 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 09 12:24:50 crc kubenswrapper[4970]: E1209 12:24:50.150788 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58aea9ad-c500-4d8b-ae24-72d3b76e2c93-cert podName:58aea9ad-c500-4d8b-ae24-72d3b76e2c93 nodeName:}" failed. No retries permitted until 2025-12-09 12:24:51.150771523 +0000 UTC m=+1103.711252574 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58aea9ad-c500-4d8b-ae24-72d3b76e2c93-cert") pod "infra-operator-controller-manager-78d48bff9d-smd4b" (UID: "58aea9ad-c500-4d8b-ae24-72d3b76e2c93") : secret "infra-operator-webhook-server-cert" not found Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.182998 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-797ff5dd46-9dm7h"] Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.206365 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-kg572"] Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.232371 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-mcq2h"] Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.234861 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-mcq2h" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.234894 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-mcq2h"] Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.237182 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-6282f" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.237639 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4mmm\" (UniqueName: \"kubernetes.io/projected/13a1643a-f17b-435a-8ce6-60f253571bf2-kube-api-access-b4mmm\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-vb4rz\" (UID: \"13a1643a-f17b-435a-8ce6-60f253571bf2\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vb4rz" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.250019 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nv59\" (UniqueName: \"kubernetes.io/projected/ddb7e093-4817-4ec2-9f81-9779ea2dddc9-kube-api-access-4nv59\") pod \"mariadb-operator-controller-manager-79c8c4686c-rt8fj\" (UID: \"ddb7e093-4817-4ec2-9f81-9779ea2dddc9\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-rt8fj" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.250998 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cbqq\" (UniqueName: \"kubernetes.io/projected/8c8f9bbb-5933-4c81-a7ff-db3f5f74835e-kube-api-access-8cbqq\") pod \"telemetry-operator-controller-manager-797ff5dd46-9dm7h\" (UID: \"8c8f9bbb-5933-4c81-a7ff-db3f5f74835e\") " pod="openstack-operators/telemetry-operator-controller-manager-797ff5dd46-9dm7h" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.251038 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/64b64284-dc99-424f-959f-2ed95a4ff4be-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f8r88z\" (UID: \"64b64284-dc99-424f-959f-2ed95a4ff4be\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8r88z" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.251113 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cjc6\" (UniqueName: \"kubernetes.io/projected/64b64284-dc99-424f-959f-2ed95a4ff4be-kube-api-access-5cjc6\") pod \"openstack-baremetal-operator-controller-manager-84b575879f8r88z\" (UID: \"64b64284-dc99-424f-959f-2ed95a4ff4be\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8r88z" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.251150 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7jdp\" (UniqueName: \"kubernetes.io/projected/c0d05cf0-6d0f-423b-84c7-6ec1ef1cacc7-kube-api-access-l7jdp\") pod \"placement-operator-controller-manager-78f8948974-zhvtf\" (UID: \"c0d05cf0-6d0f-423b-84c7-6ec1ef1cacc7\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-zhvtf" Dec 09 12:24:50 crc kubenswrapper[4970]: E1209 12:24:50.251174 4970 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 12:24:50 crc 
Dec 09 12:24:50 crc kubenswrapper[4970]: E1209 12:24:50.251221 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64b64284-dc99-424f-959f-2ed95a4ff4be-cert podName:64b64284-dc99-424f-959f-2ed95a4ff4be nodeName:}" failed. No retries permitted until 2025-12-09 12:24:50.751208587 +0000 UTC m=+1103.311689638 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/64b64284-dc99-424f-959f-2ed95a4ff4be-cert") pod "openstack-baremetal-operator-controller-manager-84b575879f8r88z" (UID: "64b64284-dc99-424f-959f-2ed95a4ff4be") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.251177 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dzdd\" (UniqueName: \"kubernetes.io/projected/661d70f4-459d-44fe-874b-e24f33654af6-kube-api-access-6dzdd\") pod \"ovn-operator-controller-manager-b6456fdb6-j6znq\" (UID: \"661d70f4-459d-44fe-874b-e24f33654af6\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-j6znq"
Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.251577 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgr7d\" (UniqueName: \"kubernetes.io/projected/5a6d30e8-9d6b-47ef-9c17-351179967d04-kube-api-access-wgr7d\") pod \"swift-operator-controller-manager-9d58d64bc-kg572\" (UID: \"5a6d30e8-9d6b-47ef-9c17-351179967d04\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-kg572"
Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.260881 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v5lc\" (UniqueName: \"kubernetes.io/projected/d0573bd9-628b-42e9-a46c-5c8b47bd977f-kube-api-access-2v5lc\") pod \"octavia-operator-controller-manager-998648c74-pvc2v\" (UID: \"d0573bd9-628b-42e9-a46c-5c8b47bd977f\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-pvc2v"
Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.261704 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twzsw\" (UniqueName: \"kubernetes.io/projected/9b238f80-d845-4791-a567-08f03974f612-kube-api-access-twzsw\") pod \"nova-operator-controller-manager-697bc559fc-4w8vj\" (UID: \"9b238f80-d845-4791-a567-08f03974f612\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-4w8vj"
Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.382515 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dzdd\" (UniqueName: \"kubernetes.io/projected/661d70f4-459d-44fe-874b-e24f33654af6-kube-api-access-6dzdd\") pod \"ovn-operator-controller-manager-b6456fdb6-j6znq\" (UID: \"661d70f4-459d-44fe-874b-e24f33654af6\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-j6znq"
Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.394116 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqf6p\" (UniqueName: \"kubernetes.io/projected/a8455557-ebef-4da8-af18-aff995d6c3c3-kube-api-access-sqf6p\") pod \"test-operator-controller-manager-5854674fcc-mcq2h\" (UID: \"a8455557-ebef-4da8-af18-aff995d6c3c3\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-mcq2h"
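Each secret.go:188 / nestedpendingoperations.go:348 pair above is a single failure: MountVolume.SetUp for a secret-backed volume cannot proceed because the webhook-cert Secrets (infra-operator-webhook-server-cert, openstack-baremetal-operator-webhook-server-cert) do not exist yet, so the kubelet parks the operation and retries; presumably another component publishes those Secrets once the operator webhooks come up. A sketch of that dependency in Go, with a plain map standing in for the API server (everything here is illustrative):

// Sketch of a secret volume's SetUp: it can only materialize once the
// Secret exists. The map stands in for the API server; names are taken
// from the log, the logic is an assumption-level simplification.
package main

import "fmt"

var store = map[string]map[string][]byte{} // "namespace/name" -> data

func setUpSecretVolume(ns, name string) error {
	data, ok := store[ns+"/"+name]
	if !ok {
		// Mirrors: secret.go:188] Couldn't get secret ...: secret "..." not found
		return fmt.Errorf("couldn't get secret %s/%s: secret %q not found", ns, name, name)
	}
	for key := range data {
		fmt.Printf("projected key %q into the volume\n", key)
	}
	return nil
}

func main() {
	ns, name := "openstack-operators", "infra-operator-webhook-server-cert"
	if err := setUpSecretVolume(ns, name); err != nil {
		fmt.Println("MountVolume.SetUp failed:", err) // kubelet re-queues with backoff
	}
	// Once the cert publisher creates the Secret, the next retry succeeds.
	store[ns+"/"+name] = map[string][]byte{"tls.crt": {}, "tls.key": {}}
	if err := setUpSecretVolume(ns, name); err == nil {
		fmt.Println("MountVolume.SetUp succeeded")
	}
}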
\"kube-api-access-wgr7d\" (UniqueName: \"kubernetes.io/projected/5a6d30e8-9d6b-47ef-9c17-351179967d04-kube-api-access-wgr7d\") pod \"swift-operator-controller-manager-9d58d64bc-kg572\" (UID: \"5a6d30e8-9d6b-47ef-9c17-351179967d04\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-kg572" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.394433 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cbqq\" (UniqueName: \"kubernetes.io/projected/8c8f9bbb-5933-4c81-a7ff-db3f5f74835e-kube-api-access-8cbqq\") pod \"telemetry-operator-controller-manager-797ff5dd46-9dm7h\" (UID: \"8c8f9bbb-5933-4c81-a7ff-db3f5f74835e\") " pod="openstack-operators/telemetry-operator-controller-manager-797ff5dd46-9dm7h" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.403064 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-4w8vj" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.412839 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-rt8fj" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.414420 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7jdp\" (UniqueName: \"kubernetes.io/projected/c0d05cf0-6d0f-423b-84c7-6ec1ef1cacc7-kube-api-access-l7jdp\") pod \"placement-operator-controller-manager-78f8948974-zhvtf\" (UID: \"c0d05cf0-6d0f-423b-84c7-6ec1ef1cacc7\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-zhvtf" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.440922 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-pvc2v" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.456235 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cjc6\" (UniqueName: \"kubernetes.io/projected/64b64284-dc99-424f-959f-2ed95a4ff4be-kube-api-access-5cjc6\") pod \"openstack-baremetal-operator-controller-manager-84b575879f8r88z\" (UID: \"64b64284-dc99-424f-959f-2ed95a4ff4be\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8r88z" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.472917 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cbqq\" (UniqueName: \"kubernetes.io/projected/8c8f9bbb-5933-4c81-a7ff-db3f5f74835e-kube-api-access-8cbqq\") pod \"telemetry-operator-controller-manager-797ff5dd46-9dm7h\" (UID: \"8c8f9bbb-5933-4c81-a7ff-db3f5f74835e\") " pod="openstack-operators/telemetry-operator-controller-manager-797ff5dd46-9dm7h" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.516702 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgr7d\" (UniqueName: \"kubernetes.io/projected/5a6d30e8-9d6b-47ef-9c17-351179967d04-kube-api-access-wgr7d\") pod \"swift-operator-controller-manager-9d58d64bc-kg572\" (UID: \"5a6d30e8-9d6b-47ef-9c17-351179967d04\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-kg572" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.523581 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-667bd8d554-mcv95"] Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.527630 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-mcv95" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.530561 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-q2dpp" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.534154 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqf6p\" (UniqueName: \"kubernetes.io/projected/a8455557-ebef-4da8-af18-aff995d6c3c3-kube-api-access-sqf6p\") pod \"test-operator-controller-manager-5854674fcc-mcq2h\" (UID: \"a8455557-ebef-4da8-af18-aff995d6c3c3\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-mcq2h" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.535379 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vb4rz" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.549223 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-667bd8d554-mcv95"] Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.602597 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqf6p\" (UniqueName: \"kubernetes.io/projected/a8455557-ebef-4da8-af18-aff995d6c3c3-kube-api-access-sqf6p\") pod \"test-operator-controller-manager-5854674fcc-mcq2h\" (UID: \"a8455557-ebef-4da8-af18-aff995d6c3c3\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-mcq2h" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.618680 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-586c894b5-qkxjg"] Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.620159 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-586c894b5-qkxjg" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.626076 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-tdbrz" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.626676 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.627103 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.630084 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-586c894b5-qkxjg"] Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.631957 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2pdw2"] Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.633616 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2pdw2" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.642891 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-sg7gs" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.645287 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbmvz\" (UniqueName: \"kubernetes.io/projected/567ca4c8-3a7b-40c8-98a9-fe162fd7a7b4-kube-api-access-hbmvz\") pod \"watcher-operator-controller-manager-667bd8d554-mcv95\" (UID: \"567ca4c8-3a7b-40c8-98a9-fe162fd7a7b4\") " pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-mcv95" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.671472 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-j6znq" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.672183 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2pdw2"] Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.749377 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b6b980c-c0f6-4a0f-a484-63e90086ba35-metrics-certs\") pod \"openstack-operator-controller-manager-586c894b5-qkxjg\" (UID: \"2b6b980c-c0f6-4a0f-a484-63e90086ba35\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-qkxjg" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.749493 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbmvz\" (UniqueName: \"kubernetes.io/projected/567ca4c8-3a7b-40c8-98a9-fe162fd7a7b4-kube-api-access-hbmvz\") pod \"watcher-operator-controller-manager-667bd8d554-mcv95\" (UID: \"567ca4c8-3a7b-40c8-98a9-fe162fd7a7b4\") " pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-mcv95" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.749575 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2b6b980c-c0f6-4a0f-a484-63e90086ba35-webhook-certs\") pod \"openstack-operator-controller-manager-586c894b5-qkxjg\" (UID: \"2b6b980c-c0f6-4a0f-a484-63e90086ba35\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-qkxjg" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.749601 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcjjr\" (UniqueName: \"kubernetes.io/projected/40c6e40b-d51a-482f-b1b7-585a064c9d00-kube-api-access-zcjjr\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2pdw2\" (UID: \"40c6e40b-d51a-482f-b1b7-585a064c9d00\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2pdw2" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.749659 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdtbg\" (UniqueName: \"kubernetes.io/projected/2b6b980c-c0f6-4a0f-a484-63e90086ba35-kube-api-access-hdtbg\") pod \"openstack-operator-controller-manager-586c894b5-qkxjg\" (UID: \"2b6b980c-c0f6-4a0f-a484-63e90086ba35\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-qkxjg" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.842558 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbmvz\" (UniqueName: \"kubernetes.io/projected/567ca4c8-3a7b-40c8-98a9-fe162fd7a7b4-kube-api-access-hbmvz\") pod \"watcher-operator-controller-manager-667bd8d554-mcv95\" (UID: \"567ca4c8-3a7b-40c8-98a9-fe162fd7a7b4\") " pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-mcv95" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.853362 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b6b980c-c0f6-4a0f-a484-63e90086ba35-metrics-certs\") pod \"openstack-operator-controller-manager-586c894b5-qkxjg\" (UID: \"2b6b980c-c0f6-4a0f-a484-63e90086ba35\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-qkxjg" Dec 09 12:24:50 
Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.853507 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcjjr\" (UniqueName: \"kubernetes.io/projected/40c6e40b-d51a-482f-b1b7-585a064c9d00-kube-api-access-zcjjr\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2pdw2\" (UID: \"40c6e40b-d51a-482f-b1b7-585a064c9d00\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2pdw2"
Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.853525 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2b6b980c-c0f6-4a0f-a484-63e90086ba35-webhook-certs\") pod \"openstack-operator-controller-manager-586c894b5-qkxjg\" (UID: \"2b6b980c-c0f6-4a0f-a484-63e90086ba35\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-qkxjg"
Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.853544 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/64b64284-dc99-424f-959f-2ed95a4ff4be-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f8r88z\" (UID: \"64b64284-dc99-424f-959f-2ed95a4ff4be\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8r88z"
Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.853584 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdtbg\" (UniqueName: \"kubernetes.io/projected/2b6b980c-c0f6-4a0f-a484-63e90086ba35-kube-api-access-hdtbg\") pod \"openstack-operator-controller-manager-586c894b5-qkxjg\" (UID: \"2b6b980c-c0f6-4a0f-a484-63e90086ba35\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-qkxjg"
Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.853891 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-zhvtf"
Dec 09 12:24:50 crc kubenswrapper[4970]: E1209 12:24:50.854343 4970 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Dec 09 12:24:50 crc kubenswrapper[4970]: E1209 12:24:50.854384 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b6b980c-c0f6-4a0f-a484-63e90086ba35-metrics-certs podName:2b6b980c-c0f6-4a0f-a484-63e90086ba35 nodeName:}" failed. No retries permitted until 2025-12-09 12:24:51.354369966 +0000 UTC m=+1103.914851017 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2b6b980c-c0f6-4a0f-a484-63e90086ba35-metrics-certs") pod "openstack-operator-controller-manager-586c894b5-qkxjg" (UID: "2b6b980c-c0f6-4a0f-a484-63e90086ba35") : secret "metrics-server-cert" not found
Dec 09 12:24:50 crc kubenswrapper[4970]: E1209 12:24:50.854626 4970 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/2b6b980c-c0f6-4a0f-a484-63e90086ba35-webhook-certs") pod "openstack-operator-controller-manager-586c894b5-qkxjg" (UID: "2b6b980c-c0f6-4a0f-a484-63e90086ba35") : secret "webhook-server-cert" not found Dec 09 12:24:50 crc kubenswrapper[4970]: E1209 12:24:50.854683 4970 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 12:24:50 crc kubenswrapper[4970]: E1209 12:24:50.854703 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64b64284-dc99-424f-959f-2ed95a4ff4be-cert podName:64b64284-dc99-424f-959f-2ed95a4ff4be nodeName:}" failed. No retries permitted until 2025-12-09 12:24:51.854696725 +0000 UTC m=+1104.415177776 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/64b64284-dc99-424f-959f-2ed95a4ff4be-cert") pod "openstack-baremetal-operator-controller-manager-84b575879f8r88z" (UID: "64b64284-dc99-424f-959f-2ed95a4ff4be") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.901996 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdtbg\" (UniqueName: \"kubernetes.io/projected/2b6b980c-c0f6-4a0f-a484-63e90086ba35-kube-api-access-hdtbg\") pod \"openstack-operator-controller-manager-586c894b5-qkxjg\" (UID: \"2b6b980c-c0f6-4a0f-a484-63e90086ba35\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-qkxjg" Dec 09 12:24:50 crc kubenswrapper[4970]: I1209 12:24:50.936952 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcjjr\" (UniqueName: \"kubernetes.io/projected/40c6e40b-d51a-482f-b1b7-585a064c9d00-kube-api-access-zcjjr\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2pdw2\" (UID: \"40c6e40b-d51a-482f-b1b7-585a064c9d00\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2pdw2" Dec 09 12:24:51 crc kubenswrapper[4970]: I1209 12:24:51.123197 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-797ff5dd46-9dm7h" Dec 09 12:24:51 crc kubenswrapper[4970]: I1209 12:24:51.136200 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-kg572" Dec 09 12:24:51 crc kubenswrapper[4970]: I1209 12:24:51.154076 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-mcq2h" Dec 09 12:24:51 crc kubenswrapper[4970]: I1209 12:24:51.167916 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-mcv95" Dec 09 12:24:51 crc kubenswrapper[4970]: I1209 12:24:51.172326 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58aea9ad-c500-4d8b-ae24-72d3b76e2c93-cert\") pod \"infra-operator-controller-manager-78d48bff9d-smd4b\" (UID: \"58aea9ad-c500-4d8b-ae24-72d3b76e2c93\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-smd4b" Dec 09 12:24:51 crc kubenswrapper[4970]: E1209 12:24:51.173120 4970 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 09 12:24:51 crc kubenswrapper[4970]: E1209 12:24:51.173168 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58aea9ad-c500-4d8b-ae24-72d3b76e2c93-cert podName:58aea9ad-c500-4d8b-ae24-72d3b76e2c93 nodeName:}" failed. No retries permitted until 2025-12-09 12:24:53.173151535 +0000 UTC m=+1105.733632586 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58aea9ad-c500-4d8b-ae24-72d3b76e2c93-cert") pod "infra-operator-controller-manager-78d48bff9d-smd4b" (UID: "58aea9ad-c500-4d8b-ae24-72d3b76e2c93") : secret "infra-operator-webhook-server-cert" not found Dec 09 12:24:51 crc kubenswrapper[4970]: I1209 12:24:51.221671 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2pdw2" Dec 09 12:24:51 crc kubenswrapper[4970]: I1209 12:24:51.290692 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-hgtz4"] Dec 09 12:24:51 crc kubenswrapper[4970]: I1209 12:24:51.376119 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2b6b980c-c0f6-4a0f-a484-63e90086ba35-webhook-certs\") pod \"openstack-operator-controller-manager-586c894b5-qkxjg\" (UID: \"2b6b980c-c0f6-4a0f-a484-63e90086ba35\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-qkxjg" Dec 09 12:24:51 crc kubenswrapper[4970]: I1209 12:24:51.376206 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b6b980c-c0f6-4a0f-a484-63e90086ba35-metrics-certs\") pod \"openstack-operator-controller-manager-586c894b5-qkxjg\" (UID: \"2b6b980c-c0f6-4a0f-a484-63e90086ba35\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-qkxjg" Dec 09 12:24:51 crc kubenswrapper[4970]: E1209 12:24:51.376395 4970 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 09 12:24:51 crc kubenswrapper[4970]: E1209 12:24:51.376440 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b6b980c-c0f6-4a0f-a484-63e90086ba35-metrics-certs podName:2b6b980c-c0f6-4a0f-a484-63e90086ba35 nodeName:}" failed. No retries permitted until 2025-12-09 12:24:52.376428259 +0000 UTC m=+1104.936909310 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2b6b980c-c0f6-4a0f-a484-63e90086ba35-metrics-certs") pod "openstack-operator-controller-manager-586c894b5-qkxjg" (UID: "2b6b980c-c0f6-4a0f-a484-63e90086ba35") : secret "metrics-server-cert" not found Dec 09 12:24:51 crc kubenswrapper[4970]: E1209 12:24:51.376715 4970 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 09 12:24:51 crc kubenswrapper[4970]: E1209 12:24:51.376738 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b6b980c-c0f6-4a0f-a484-63e90086ba35-webhook-certs podName:2b6b980c-c0f6-4a0f-a484-63e90086ba35 nodeName:}" failed. No retries permitted until 2025-12-09 12:24:52.376730557 +0000 UTC m=+1104.937211608 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/2b6b980c-c0f6-4a0f-a484-63e90086ba35-webhook-certs") pod "openstack-operator-controller-manager-586c894b5-qkxjg" (UID: "2b6b980c-c0f6-4a0f-a484-63e90086ba35") : secret "webhook-server-cert" not found Dec 09 12:24:51 crc kubenswrapper[4970]: I1209 12:24:51.902185 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/64b64284-dc99-424f-959f-2ed95a4ff4be-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f8r88z\" (UID: \"64b64284-dc99-424f-959f-2ed95a4ff4be\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8r88z" Dec 09 12:24:51 crc kubenswrapper[4970]: E1209 12:24:51.904020 4970 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 12:24:51 crc kubenswrapper[4970]: E1209 12:24:51.904078 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64b64284-dc99-424f-959f-2ed95a4ff4be-cert podName:64b64284-dc99-424f-959f-2ed95a4ff4be nodeName:}" failed. No retries permitted until 2025-12-09 12:24:53.904051434 +0000 UTC m=+1106.464532585 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/64b64284-dc99-424f-959f-2ed95a4ff4be-cert") pod "openstack-baremetal-operator-controller-manager-84b575879f8r88z" (UID: "64b64284-dc99-424f-959f-2ed95a4ff4be") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 12:24:51 crc kubenswrapper[4970]: I1209 12:24:51.996725 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-hgtz4" event={"ID":"9a6da1be-c547-49c2-839c-aa549a5bb32b","Type":"ContainerStarted","Data":"2274314c5be8abbfbbe27ddb3ddb3dc3be608fa0322e15f95c96563833bd234f"} Dec 09 12:24:52 crc kubenswrapper[4970]: I1209 12:24:52.428161 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b6b980c-c0f6-4a0f-a484-63e90086ba35-metrics-certs\") pod \"openstack-operator-controller-manager-586c894b5-qkxjg\" (UID: \"2b6b980c-c0f6-4a0f-a484-63e90086ba35\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-qkxjg" Dec 09 12:24:52 crc kubenswrapper[4970]: I1209 12:24:52.428659 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2b6b980c-c0f6-4a0f-a484-63e90086ba35-webhook-certs\") pod \"openstack-operator-controller-manager-586c894b5-qkxjg\" (UID: \"2b6b980c-c0f6-4a0f-a484-63e90086ba35\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-qkxjg" Dec 09 12:24:52 crc kubenswrapper[4970]: E1209 12:24:52.428835 4970 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 09 12:24:52 crc kubenswrapper[4970]: E1209 12:24:52.428896 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b6b980c-c0f6-4a0f-a484-63e90086ba35-webhook-certs podName:2b6b980c-c0f6-4a0f-a484-63e90086ba35 nodeName:}" failed. No retries permitted until 2025-12-09 12:24:54.428878093 +0000 UTC m=+1106.989359144 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/2b6b980c-c0f6-4a0f-a484-63e90086ba35-webhook-certs") pod "openstack-operator-controller-manager-586c894b5-qkxjg" (UID: "2b6b980c-c0f6-4a0f-a484-63e90086ba35") : secret "webhook-server-cert" not found Dec 09 12:24:52 crc kubenswrapper[4970]: E1209 12:24:52.429184 4970 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 09 12:24:52 crc kubenswrapper[4970]: E1209 12:24:52.429272 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b6b980c-c0f6-4a0f-a484-63e90086ba35-metrics-certs podName:2b6b980c-c0f6-4a0f-a484-63e90086ba35 nodeName:}" failed. No retries permitted until 2025-12-09 12:24:54.429237433 +0000 UTC m=+1106.989718484 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2b6b980c-c0f6-4a0f-a484-63e90086ba35-metrics-certs") pod "openstack-operator-controller-manager-586c894b5-qkxjg" (UID: "2b6b980c-c0f6-4a0f-a484-63e90086ba35") : secret "metrics-server-cert" not found Dec 09 12:24:52 crc kubenswrapper[4970]: I1209 12:24:52.441791 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9q7rj"] Dec 09 12:24:52 crc kubenswrapper[4970]: I1209 12:24:52.533067 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-rcmhf"] Dec 09 12:24:52 crc kubenswrapper[4970]: I1209 12:24:52.649316 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-t9b58"] Dec 09 12:24:52 crc kubenswrapper[4970]: I1209 12:24:52.669705 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-vf6rm"] Dec 09 12:24:53 crc kubenswrapper[4970]: I1209 12:24:53.049820 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vb4rz"] Dec 09 12:24:53 crc kubenswrapper[4970]: I1209 12:24:53.065096 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-t9b58" event={"ID":"774ea159-6ac5-4997-8630-db954e22ac28","Type":"ContainerStarted","Data":"191922e1b8d4532c0fde321a8bed43c5681dd42f77aea7e37bf2898258b86565"} Dec 09 12:24:53 crc kubenswrapper[4970]: W1209 12:24:53.088803 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13a1643a_f17b_435a_8ce6_60f253571bf2.slice/crio-abbad23a89761ec8fa825c0b4ef36ff12ebcacdd49adbff699c1e66a90ce8b85 WatchSource:0}: Error finding container abbad23a89761ec8fa825c0b4ef36ff12ebcacdd49adbff699c1e66a90ce8b85: Status 404 returned error can't find the container with id abbad23a89761ec8fa825c0b4ef36ff12ebcacdd49adbff699c1e66a90ce8b85 Dec 09 12:24:53 crc kubenswrapper[4970]: I1209 12:24:53.094802 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-rt8fj"] Dec 09 12:24:53 crc kubenswrapper[4970]: I1209 12:24:53.094841 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-vf6rm" event={"ID":"ccc49957-0ec2-4fa0-b2c0-fd86af0aa27e","Type":"ContainerStarted","Data":"0a9be7329c6d52e5b14c08b7d3bfe427ac07975588836de22d4d742233059f2c"} Dec 09 12:24:53 crc kubenswrapper[4970]: I1209 12:24:53.099834 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-rcmhf" event={"ID":"e0b99715-61ea-4c11-b1df-814886d310a2","Type":"ContainerStarted","Data":"697431ae9ad6e1657d604ad8af7bdf6008057a428e1614895ce15488e8d14ec9"} Dec 09 12:24:53 crc kubenswrapper[4970]: I1209 12:24:53.100255 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-dh6nb"] Dec 09 12:24:53 crc kubenswrapper[4970]: W1209 12:24:53.123729 4970 manager.go:1169] Failed to process watch event {EventType:0 
Dec 09 12:24:53 crc kubenswrapper[4970]: W1209 12:24:53.123729 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0d580f7_424c_482e_a3c5_47ef1b9a6b79.slice/crio-e433b294b6c82db208dc625a7c3bc01119980b2cb099ed720368f4ef6e502915 WatchSource:0}: Error finding container e433b294b6c82db208dc625a7c3bc01119980b2cb099ed720368f4ef6e502915: Status 404 returned error can't find the container with id e433b294b6c82db208dc625a7c3bc01119980b2cb099ed720368f4ef6e502915
Dec 09 12:24:53 crc kubenswrapper[4970]: I1209 12:24:53.124875 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9q7rj" event={"ID":"0b0a360a-e011-471c-abfa-6b72d7bf3074","Type":"ContainerStarted","Data":"4239211d44ad1a852e8de8b356a2567976d8b92cc1ff826db48f3abe0117b7fc"}
Dec 09 12:24:53 crc kubenswrapper[4970]: I1209 12:24:53.131037 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5sz7k"]
Dec 09 12:24:53 crc kubenswrapper[4970]: I1209 12:24:53.176842 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58aea9ad-c500-4d8b-ae24-72d3b76e2c93-cert\") pod \"infra-operator-controller-manager-78d48bff9d-smd4b\" (UID: \"58aea9ad-c500-4d8b-ae24-72d3b76e2c93\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-smd4b"
Dec 09 12:24:53 crc kubenswrapper[4970]: E1209 12:24:53.176989 4970 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58aea9ad-c500-4d8b-ae24-72d3b76e2c93-cert") pod "infra-operator-controller-manager-78d48bff9d-smd4b" (UID: "58aea9ad-c500-4d8b-ae24-72d3b76e2c93") : secret "infra-operator-webhook-server-cert" not found Dec 09 12:24:53 crc kubenswrapper[4970]: I1209 12:24:53.181978 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-jxd72"] Dec 09 12:24:53 crc kubenswrapper[4970]: I1209 12:24:53.207068 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-4w8vj"] Dec 09 12:24:53 crc kubenswrapper[4970]: I1209 12:24:53.212775 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-rcr7g"] Dec 09 12:24:53 crc kubenswrapper[4970]: W1209 12:24:53.223538 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b238f80_d845_4791_a567_08f03974f612.slice/crio-e8aa475526241ce1a12c04085c63c36ea9be068fdefa2a3e67539a96016baf6b WatchSource:0}: Error finding container e8aa475526241ce1a12c04085c63c36ea9be068fdefa2a3e67539a96016baf6b: Status 404 returned error can't find the container with id e8aa475526241ce1a12c04085c63c36ea9be068fdefa2a3e67539a96016baf6b Dec 09 12:24:53 crc kubenswrapper[4970]: W1209 12:24:53.233944 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc60ee2ea_0489_422a_b829_e20040144965.slice/crio-52577e74f7d4e0d71e43e29efc8d4fe0a9f3e8189013713f61808d0f93c96834 WatchSource:0}: Error finding container 52577e74f7d4e0d71e43e29efc8d4fe0a9f3e8189013713f61808d0f93c96834: Status 404 returned error can't find the container with id 52577e74f7d4e0d71e43e29efc8d4fe0a9f3e8189013713f61808d0f93c96834 Dec 09 12:24:53 crc kubenswrapper[4970]: I1209 12:24:53.260700 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-j6znq"] Dec 09 12:24:53 crc kubenswrapper[4970]: W1209 12:24:53.316425 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod661d70f4_459d_44fe_874b_e24f33654af6.slice/crio-803248d535a71c160bc6d55e4a5c73ac8fe64e2c920a7f47f9999fbf8483958f WatchSource:0}: Error finding container 803248d535a71c160bc6d55e4a5c73ac8fe64e2c920a7f47f9999fbf8483958f: Status 404 returned error can't find the container with id 803248d535a71c160bc6d55e4a5c73ac8fe64e2c920a7f47f9999fbf8483958f Dec 09 12:24:53 crc kubenswrapper[4970]: I1209 12:24:53.734157 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-mcq2h"] Dec 09 12:24:53 crc kubenswrapper[4970]: I1209 12:24:53.748289 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-kg572"] Dec 09 12:24:53 crc kubenswrapper[4970]: I1209 12:24:53.757127 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-zhvtf"] Dec 09 12:24:53 crc kubenswrapper[4970]: I1209 12:24:53.768627 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2pdw2"] Dec 09 12:24:53 crc kubenswrapper[4970]: W1209 12:24:53.769971 4970 
Dec 09 12:24:53 crc kubenswrapper[4970]: W1209 12:24:53.769971 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a6d30e8_9d6b_47ef_9c17_351179967d04.slice/crio-9375623166b9056f2e38122716189eb1cdd0c2830cb402b76163e85f362ec9e5 WatchSource:0}: Error finding container 9375623166b9056f2e38122716189eb1cdd0c2830cb402b76163e85f362ec9e5: Status 404 returned error can't find the container with id 9375623166b9056f2e38122716189eb1cdd0c2830cb402b76163e85f362ec9e5
Dec 09 12:24:53 crc kubenswrapper[4970]: I1209 12:24:53.776101 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-797ff5dd46-9dm7h"]
Dec 09 12:24:53 crc kubenswrapper[4970]: E1209 12:24:53.797201 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zcjjr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-2pdw2_openstack-operators(40c6e40b-d51a-482f-b1b7-585a064c9d00): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Dec 09 12:24:53 crc kubenswrapper[4970]: E1209 12:24:53.798671 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2pdw2" podUID="40c6e40b-d51a-482f-b1b7-585a064c9d00"
Dec 09 12:24:53 crc kubenswrapper[4970]: I1209 12:24:53.903867 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-pvc2v"]
\"kubernetes.io/secret/64b64284-dc99-424f-959f-2ed95a4ff4be-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f8r88z\" (UID: \"64b64284-dc99-424f-959f-2ed95a4ff4be\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8r88z" Dec 09 12:24:53 crc kubenswrapper[4970]: E1209 12:24:53.923814 4970 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 12:24:53 crc kubenswrapper[4970]: E1209 12:24:53.923906 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64b64284-dc99-424f-959f-2ed95a4ff4be-cert podName:64b64284-dc99-424f-959f-2ed95a4ff4be nodeName:}" failed. No retries permitted until 2025-12-09 12:24:57.923883009 +0000 UTC m=+1110.484364110 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/64b64284-dc99-424f-959f-2ed95a4ff4be-cert") pod "openstack-baremetal-operator-controller-manager-84b575879f8r88z" (UID: "64b64284-dc99-424f-959f-2ed95a4ff4be") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 12:24:53 crc kubenswrapper[4970]: I1209 12:24:53.934217 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-667bd8d554-mcv95"] Dec 09 12:24:53 crc kubenswrapper[4970]: E1209 12:24:53.982468 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:6b3e0302608a2e70f9b5ae9167f6fbf59264f226d9db99d48f70466ab2f216b8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hbmvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-667bd8d554-mcv95_openstack-operators(567ca4c8-3a7b-40c8-98a9-fe162fd7a7b4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 09 12:24:54 crc kubenswrapper[4970]: E1209 12:24:54.004609 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hbmvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-667bd8d554-mcv95_openstack-operators(567ca4c8-3a7b-40c8-98a9-fe162fd7a7b4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 09 12:24:54 crc kubenswrapper[4970]: E1209 12:24:54.008003 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-mcv95" podUID="567ca4c8-3a7b-40c8-98a9-fe162fd7a7b4" Dec 09 12:24:54 crc kubenswrapper[4970]: I1209 12:24:54.149639 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-kg572" event={"ID":"5a6d30e8-9d6b-47ef-9c17-351179967d04","Type":"ContainerStarted","Data":"9375623166b9056f2e38122716189eb1cdd0c2830cb402b76163e85f362ec9e5"} Dec 09 12:24:54 crc kubenswrapper[4970]: I1209 12:24:54.164537 4970 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-rt8fj" event={"ID":"ddb7e093-4817-4ec2-9f81-9779ea2dddc9","Type":"ContainerStarted","Data":"5ddef05de812d01bdca6fc70a70cb19ca772fa54601998138eb57709dc59442f"} Dec 09 12:24:54 crc kubenswrapper[4970]: I1209 12:24:54.170914 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-dh6nb" event={"ID":"c0d580f7-424c-482e-a3c5-47ef1b9a6b79","Type":"ContainerStarted","Data":"e433b294b6c82db208dc625a7c3bc01119980b2cb099ed720368f4ef6e502915"} Dec 09 12:24:54 crc kubenswrapper[4970]: I1209 12:24:54.175932 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2pdw2" event={"ID":"40c6e40b-d51a-482f-b1b7-585a064c9d00","Type":"ContainerStarted","Data":"9eac61e86e3c91c3cff8b76284425e9659262d218bcbd5bb47ef96e77dffc839"} Dec 09 12:24:54 crc kubenswrapper[4970]: E1209 12:24:54.179597 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2pdw2" podUID="40c6e40b-d51a-482f-b1b7-585a064c9d00" Dec 09 12:24:54 crc kubenswrapper[4970]: I1209 12:24:54.194115 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-j6znq" event={"ID":"661d70f4-459d-44fe-874b-e24f33654af6","Type":"ContainerStarted","Data":"803248d535a71c160bc6d55e4a5c73ac8fe64e2c920a7f47f9999fbf8483958f"} Dec 09 12:24:54 crc kubenswrapper[4970]: I1209 12:24:54.200010 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vb4rz" event={"ID":"13a1643a-f17b-435a-8ce6-60f253571bf2","Type":"ContainerStarted","Data":"abbad23a89761ec8fa825c0b4ef36ff12ebcacdd49adbff699c1e66a90ce8b85"} Dec 09 12:24:54 crc kubenswrapper[4970]: I1209 12:24:54.238315 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-jxd72" event={"ID":"4190e9a5-5bac-4645-a59d-5b4d6308f751","Type":"ContainerStarted","Data":"91b4ec6a9d7efdaf985b0d937ab98be32916211f1f61c27f217c726d78563600"} Dec 09 12:24:54 crc kubenswrapper[4970]: I1209 12:24:54.246484 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5sz7k" event={"ID":"3c4587c1-58af-46c3-b886-59f5e44220fb","Type":"ContainerStarted","Data":"1294829c8d325ab9f16217f76c8293cf89c6c68db2aa2d8f04fa2e454122260e"} Dec 09 12:24:54 crc kubenswrapper[4970]: I1209 12:24:54.252679 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-4w8vj" event={"ID":"9b238f80-d845-4791-a567-08f03974f612","Type":"ContainerStarted","Data":"e8aa475526241ce1a12c04085c63c36ea9be068fdefa2a3e67539a96016baf6b"} Dec 09 12:24:54 crc kubenswrapper[4970]: I1209 12:24:54.257024 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-pvc2v" event={"ID":"d0573bd9-628b-42e9-a46c-5c8b47bd977f","Type":"ContainerStarted","Data":"567de549ee5e2861881fb787a926717f7a9e9ef626ce19af6987d6c9e34a08dd"} Dec 09 12:24:54 
crc kubenswrapper[4970]: I1209 12:24:54.273099 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-rcr7g" event={"ID":"c60ee2ea-0489-422a-b829-e20040144965","Type":"ContainerStarted","Data":"52577e74f7d4e0d71e43e29efc8d4fe0a9f3e8189013713f61808d0f93c96834"} Dec 09 12:24:54 crc kubenswrapper[4970]: I1209 12:24:54.276728 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-zhvtf" event={"ID":"c0d05cf0-6d0f-423b-84c7-6ec1ef1cacc7","Type":"ContainerStarted","Data":"07b3f3379d3a04a9cc80dff70cf9ae87a36b42786daaf48f1b708cb24bfa7280"} Dec 09 12:24:54 crc kubenswrapper[4970]: I1209 12:24:54.280231 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-mcv95" event={"ID":"567ca4c8-3a7b-40c8-98a9-fe162fd7a7b4","Type":"ContainerStarted","Data":"4c6a18666810ab222de2d7a70e1d1f2aefd76f303b46be2bd2b209e659d089a8"} Dec 09 12:24:54 crc kubenswrapper[4970]: E1209 12:24:54.284999 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:6b3e0302608a2e70f9b5ae9167f6fbf59264f226d9db99d48f70466ab2f216b8\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-mcv95" podUID="567ca4c8-3a7b-40c8-98a9-fe162fd7a7b4" Dec 09 12:24:54 crc kubenswrapper[4970]: I1209 12:24:54.287575 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-797ff5dd46-9dm7h" event={"ID":"8c8f9bbb-5933-4c81-a7ff-db3f5f74835e","Type":"ContainerStarted","Data":"272d6d7e9de232b92194a37c4f6bfc1d172214121ca37177e953e857c44c6a3d"} Dec 09 12:24:54 crc kubenswrapper[4970]: I1209 12:24:54.304666 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-mcq2h" event={"ID":"a8455557-ebef-4da8-af18-aff995d6c3c3","Type":"ContainerStarted","Data":"c446e430d42d9bdf34e42706117c61b387cd2d3aa40da250939114dd0f9c9a3a"} Dec 09 12:24:54 crc kubenswrapper[4970]: I1209 12:24:54.435866 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b6b980c-c0f6-4a0f-a484-63e90086ba35-metrics-certs\") pod \"openstack-operator-controller-manager-586c894b5-qkxjg\" (UID: \"2b6b980c-c0f6-4a0f-a484-63e90086ba35\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-qkxjg" Dec 09 12:24:54 crc kubenswrapper[4970]: I1209 12:24:54.436042 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2b6b980c-c0f6-4a0f-a484-63e90086ba35-webhook-certs\") pod \"openstack-operator-controller-manager-586c894b5-qkxjg\" (UID: \"2b6b980c-c0f6-4a0f-a484-63e90086ba35\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-qkxjg" Dec 09 12:24:54 crc kubenswrapper[4970]: E1209 12:24:54.436262 4970 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 09 12:24:54 crc kubenswrapper[4970]: E1209 12:24:54.436319 4970 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/secret/2b6b980c-c0f6-4a0f-a484-63e90086ba35-webhook-certs podName:2b6b980c-c0f6-4a0f-a484-63e90086ba35 nodeName:}" failed. No retries permitted until 2025-12-09 12:24:58.436300748 +0000 UTC m=+1110.996781799 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/2b6b980c-c0f6-4a0f-a484-63e90086ba35-webhook-certs") pod "openstack-operator-controller-manager-586c894b5-qkxjg" (UID: "2b6b980c-c0f6-4a0f-a484-63e90086ba35") : secret "webhook-server-cert" not found Dec 09 12:24:54 crc kubenswrapper[4970]: E1209 12:24:54.436712 4970 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 09 12:24:54 crc kubenswrapper[4970]: E1209 12:24:54.436749 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b6b980c-c0f6-4a0f-a484-63e90086ba35-metrics-certs podName:2b6b980c-c0f6-4a0f-a484-63e90086ba35 nodeName:}" failed. No retries permitted until 2025-12-09 12:24:58.43673924 +0000 UTC m=+1110.997220301 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2b6b980c-c0f6-4a0f-a484-63e90086ba35-metrics-certs") pod "openstack-operator-controller-manager-586c894b5-qkxjg" (UID: "2b6b980c-c0f6-4a0f-a484-63e90086ba35") : secret "metrics-server-cert" not found Dec 09 12:24:55 crc kubenswrapper[4970]: E1209 12:24:55.336369 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2pdw2" podUID="40c6e40b-d51a-482f-b1b7-585a064c9d00" Dec 09 12:24:55 crc kubenswrapper[4970]: E1209 12:24:55.337656 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:6b3e0302608a2e70f9b5ae9167f6fbf59264f226d9db99d48f70466ab2f216b8\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-mcv95" podUID="567ca4c8-3a7b-40c8-98a9-fe162fd7a7b4" Dec 09 12:24:57 crc kubenswrapper[4970]: I1209 12:24:57.212012 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58aea9ad-c500-4d8b-ae24-72d3b76e2c93-cert\") pod \"infra-operator-controller-manager-78d48bff9d-smd4b\" (UID: \"58aea9ad-c500-4d8b-ae24-72d3b76e2c93\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-smd4b" Dec 09 12:24:57 crc kubenswrapper[4970]: E1209 12:24:57.212295 4970 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 09 12:24:57 crc kubenswrapper[4970]: E1209 12:24:57.212344 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58aea9ad-c500-4d8b-ae24-72d3b76e2c93-cert podName:58aea9ad-c500-4d8b-ae24-72d3b76e2c93 nodeName:}" failed. No retries permitted until 2025-12-09 12:25:05.212329753 +0000 UTC m=+1117.772810804 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58aea9ad-c500-4d8b-ae24-72d3b76e2c93-cert") pod "infra-operator-controller-manager-78d48bff9d-smd4b" (UID: "58aea9ad-c500-4d8b-ae24-72d3b76e2c93") : secret "infra-operator-webhook-server-cert" not found Dec 09 12:24:57 crc kubenswrapper[4970]: I1209 12:24:57.924714 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/64b64284-dc99-424f-959f-2ed95a4ff4be-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f8r88z\" (UID: \"64b64284-dc99-424f-959f-2ed95a4ff4be\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8r88z" Dec 09 12:24:57 crc kubenswrapper[4970]: E1209 12:24:57.926018 4970 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 12:24:57 crc kubenswrapper[4970]: E1209 12:24:57.926063 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64b64284-dc99-424f-959f-2ed95a4ff4be-cert podName:64b64284-dc99-424f-959f-2ed95a4ff4be nodeName:}" failed. No retries permitted until 2025-12-09 12:25:05.926051423 +0000 UTC m=+1118.486532474 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/64b64284-dc99-424f-959f-2ed95a4ff4be-cert") pod "openstack-baremetal-operator-controller-manager-84b575879f8r88z" (UID: "64b64284-dc99-424f-959f-2ed95a4ff4be") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 12:24:58 crc kubenswrapper[4970]: I1209 12:24:58.536558 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b6b980c-c0f6-4a0f-a484-63e90086ba35-metrics-certs\") pod \"openstack-operator-controller-manager-586c894b5-qkxjg\" (UID: \"2b6b980c-c0f6-4a0f-a484-63e90086ba35\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-qkxjg" Dec 09 12:24:58 crc kubenswrapper[4970]: I1209 12:24:58.536676 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2b6b980c-c0f6-4a0f-a484-63e90086ba35-webhook-certs\") pod \"openstack-operator-controller-manager-586c894b5-qkxjg\" (UID: \"2b6b980c-c0f6-4a0f-a484-63e90086ba35\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-qkxjg" Dec 09 12:24:58 crc kubenswrapper[4970]: E1209 12:24:58.536765 4970 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 09 12:24:58 crc kubenswrapper[4970]: E1209 12:24:58.536807 4970 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 09 12:24:58 crc kubenswrapper[4970]: E1209 12:24:58.536875 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b6b980c-c0f6-4a0f-a484-63e90086ba35-metrics-certs podName:2b6b980c-c0f6-4a0f-a484-63e90086ba35 nodeName:}" failed. No retries permitted until 2025-12-09 12:25:06.53684444 +0000 UTC m=+1119.097325531 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2b6b980c-c0f6-4a0f-a484-63e90086ba35-metrics-certs") pod "openstack-operator-controller-manager-586c894b5-qkxjg" (UID: "2b6b980c-c0f6-4a0f-a484-63e90086ba35") : secret "metrics-server-cert" not found Dec 09 12:24:58 crc kubenswrapper[4970]: E1209 12:24:58.536907 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b6b980c-c0f6-4a0f-a484-63e90086ba35-webhook-certs podName:2b6b980c-c0f6-4a0f-a484-63e90086ba35 nodeName:}" failed. No retries permitted until 2025-12-09 12:25:06.536893851 +0000 UTC m=+1119.097374932 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/2b6b980c-c0f6-4a0f-a484-63e90086ba35-webhook-certs") pod "openstack-operator-controller-manager-586c894b5-qkxjg" (UID: "2b6b980c-c0f6-4a0f-a484-63e90086ba35") : secret "webhook-server-cert" not found Dec 09 12:25:05 crc kubenswrapper[4970]: I1209 12:25:05.267782 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58aea9ad-c500-4d8b-ae24-72d3b76e2c93-cert\") pod \"infra-operator-controller-manager-78d48bff9d-smd4b\" (UID: \"58aea9ad-c500-4d8b-ae24-72d3b76e2c93\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-smd4b" Dec 09 12:25:05 crc kubenswrapper[4970]: E1209 12:25:05.267970 4970 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 09 12:25:05 crc kubenswrapper[4970]: E1209 12:25:05.268557 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58aea9ad-c500-4d8b-ae24-72d3b76e2c93-cert podName:58aea9ad-c500-4d8b-ae24-72d3b76e2c93 nodeName:}" failed. No retries permitted until 2025-12-09 12:25:21.268537357 +0000 UTC m=+1133.829018418 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58aea9ad-c500-4d8b-ae24-72d3b76e2c93-cert") pod "infra-operator-controller-manager-78d48bff9d-smd4b" (UID: "58aea9ad-c500-4d8b-ae24-72d3b76e2c93") : secret "infra-operator-webhook-server-cert" not found Dec 09 12:25:06 crc kubenswrapper[4970]: I1209 12:25:06.022485 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/64b64284-dc99-424f-959f-2ed95a4ff4be-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f8r88z\" (UID: \"64b64284-dc99-424f-959f-2ed95a4ff4be\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8r88z" Dec 09 12:25:06 crc kubenswrapper[4970]: E1209 12:25:06.022736 4970 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 12:25:06 crc kubenswrapper[4970]: E1209 12:25:06.023008 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/64b64284-dc99-424f-959f-2ed95a4ff4be-cert podName:64b64284-dc99-424f-959f-2ed95a4ff4be nodeName:}" failed. No retries permitted until 2025-12-09 12:25:22.02298249 +0000 UTC m=+1134.583463541 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/64b64284-dc99-424f-959f-2ed95a4ff4be-cert") pod "openstack-baremetal-operator-controller-manager-84b575879f8r88z" (UID: "64b64284-dc99-424f-959f-2ed95a4ff4be") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 12:25:06 crc kubenswrapper[4970]: I1209 12:25:06.633519 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2b6b980c-c0f6-4a0f-a484-63e90086ba35-webhook-certs\") pod \"openstack-operator-controller-manager-586c894b5-qkxjg\" (UID: \"2b6b980c-c0f6-4a0f-a484-63e90086ba35\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-qkxjg" Dec 09 12:25:06 crc kubenswrapper[4970]: I1209 12:25:06.633918 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b6b980c-c0f6-4a0f-a484-63e90086ba35-metrics-certs\") pod \"openstack-operator-controller-manager-586c894b5-qkxjg\" (UID: \"2b6b980c-c0f6-4a0f-a484-63e90086ba35\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-qkxjg" Dec 09 12:25:06 crc kubenswrapper[4970]: E1209 12:25:06.634081 4970 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 09 12:25:06 crc kubenswrapper[4970]: E1209 12:25:06.634131 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b6b980c-c0f6-4a0f-a484-63e90086ba35-metrics-certs podName:2b6b980c-c0f6-4a0f-a484-63e90086ba35 nodeName:}" failed. No retries permitted until 2025-12-09 12:25:22.634112877 +0000 UTC m=+1135.194593928 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2b6b980c-c0f6-4a0f-a484-63e90086ba35-metrics-certs") pod "openstack-operator-controller-manager-586c894b5-qkxjg" (UID: "2b6b980c-c0f6-4a0f-a484-63e90086ba35") : secret "metrics-server-cert" not found Dec 09 12:25:06 crc kubenswrapper[4970]: E1209 12:25:06.634461 4970 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 09 12:25:06 crc kubenswrapper[4970]: E1209 12:25:06.634543 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b6b980c-c0f6-4a0f-a484-63e90086ba35-webhook-certs podName:2b6b980c-c0f6-4a0f-a484-63e90086ba35 nodeName:}" failed. No retries permitted until 2025-12-09 12:25:22.634526058 +0000 UTC m=+1135.195007109 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/2b6b980c-c0f6-4a0f-a484-63e90086ba35-webhook-certs") pod "openstack-operator-controller-manager-586c894b5-qkxjg" (UID: "2b6b980c-c0f6-4a0f-a484-63e90086ba35") : secret "webhook-server-cert" not found Dec 09 12:25:08 crc kubenswrapper[4970]: E1209 12:25:08.359824 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557" Dec 09 12:25:08 crc kubenswrapper[4970]: E1209 12:25:08.360294 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b4mmm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-vb4rz_openstack-operators(13a1643a-f17b-435a-8ce6-60f253571bf2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:25:09 crc kubenswrapper[4970]: E1209 12:25:09.300952 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:424da951f13f1fbe9083215dc9f5088f90676dd813f01fdf3c1a8639b61cbaad" Dec 09 12:25:09 crc 
kubenswrapper[4970]: E1209 12:25:09.301183 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:424da951f13f1fbe9083215dc9f5088f90676dd813f01fdf3c1a8639b61cbaad,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4nv59,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-79c8c4686c-rt8fj_openstack-operators(ddb7e093-4817-4ec2-9f81-9779ea2dddc9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:25:10 crc kubenswrapper[4970]: E1209 12:25:10.031621 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59" Dec 09 12:25:10 crc kubenswrapper[4970]: E1209 12:25:10.032064 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6dzdd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-j6znq_openstack-operators(661d70f4-459d-44fe-874b-e24f33654af6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:25:10 crc kubenswrapper[4970]: E1209 12:25:10.779641 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:5370dc4a8e776923eec00bb50cbdb2e390e9dde50be26bdc04a216bd2d6b5027" Dec 09 12:25:10 crc kubenswrapper[4970]: E1209 12:25:10.779808 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:5370dc4a8e776923eec00bb50cbdb2e390e9dde50be26bdc04a216bd2d6b5027,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ffpxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-5697bb5779-rcr7g_openstack-operators(c60ee2ea-0489-422a-b829-e20040144965): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:25:15 crc kubenswrapper[4970]: E1209 12:25:15.650387 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:44126f9c6b1d2bf752ddf989e20a4fc4cc1c07723d4fcb78465ccb2f55da6b3a" Dec 09 12:25:15 crc kubenswrapper[4970]: E1209 12:25:15.651037 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:44126f9c6b1d2bf752ddf989e20a4fc4cc1c07723d4fcb78465ccb2f55da6b3a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wk87c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-5b5fd79c9c-5sz7k_openstack-operators(3c4587c1-58af-46c3-b886-59f5e44220fb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:25:16 crc kubenswrapper[4970]: I1209 12:25:16.011305 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:25:16 crc kubenswrapper[4970]: I1209 12:25:16.011358 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:25:17 crc kubenswrapper[4970]: E1209 12:25:17.449282 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429" Dec 09 12:25:17 crc kubenswrapper[4970]: E1209 12:25:17.449712 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f9fnr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-5f64f6f8bb-9q7rj_openstack-operators(0b0a360a-e011-471c-abfa-6b72d7bf3074): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:25:17 crc kubenswrapper[4970]: E1209 12:25:17.994110 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5" Dec 09 12:25:17 crc kubenswrapper[4970]: E1209 12:25:17.994312 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rw29m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-68c6d99b8f-vf6rm_openstack-operators(ccc49957-0ec2-4fa0-b2c0-fd86af0aa27e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:25:19 crc kubenswrapper[4970]: E1209 12:25:19.156584 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991" Dec 09 12:25:19 crc kubenswrapper[4970]: E1209 12:25:19.156802 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wgr7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-9d58d64bc-kg572_openstack-operators(5a6d30e8-9d6b-47ef-9c17-351179967d04): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:25:19 crc kubenswrapper[4970]: E1209 12:25:19.802792 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94" Dec 09 12:25:19 crc kubenswrapper[4970]: E1209 12:25:19.803352 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sqf6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-mcq2h_openstack-operators(a8455557-ebef-4da8-af18-aff995d6c3c3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:25:20 crc kubenswrapper[4970]: E1209 12:25:20.685614 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168" Dec 09 12:25:20 crc kubenswrapper[4970]: E1209 12:25:20.685805 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2v5lc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-pvc2v_openstack-operators(d0573bd9-628b-42e9-a46c-5c8b47bd977f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:25:21 crc kubenswrapper[4970]: E1209 12:25:21.155758 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f" Dec 09 12:25:21 crc kubenswrapper[4970]: E1209 12:25:21.155946 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l7jdp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-zhvtf_openstack-operators(c0d05cf0-6d0f-423b-84c7-6ec1ef1cacc7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:25:21 crc kubenswrapper[4970]: I1209 12:25:21.316136 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58aea9ad-c500-4d8b-ae24-72d3b76e2c93-cert\") pod \"infra-operator-controller-manager-78d48bff9d-smd4b\" (UID: \"58aea9ad-c500-4d8b-ae24-72d3b76e2c93\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-smd4b" Dec 09 12:25:21 crc kubenswrapper[4970]: I1209 12:25:21.322406 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58aea9ad-c500-4d8b-ae24-72d3b76e2c93-cert\") pod \"infra-operator-controller-manager-78d48bff9d-smd4b\" (UID: \"58aea9ad-c500-4d8b-ae24-72d3b76e2c93\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-smd4b" Dec 09 12:25:21 crc kubenswrapper[4970]: I1209 12:25:21.360190 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-smd4b" Dec 09 12:25:22 crc kubenswrapper[4970]: E1209 12:25:22.024474 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7" Dec 09 12:25:22 crc kubenswrapper[4970]: E1209 12:25:22.024680 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-468mv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-rcmhf_openstack-operators(e0b99715-61ea-4c11-b1df-814886d310a2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:25:22 crc kubenswrapper[4970]: I1209 12:25:22.029186 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/64b64284-dc99-424f-959f-2ed95a4ff4be-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f8r88z\" (UID: \"64b64284-dc99-424f-959f-2ed95a4ff4be\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8r88z" Dec 09 12:25:22 crc kubenswrapper[4970]: I1209 12:25:22.036055 4970 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/64b64284-dc99-424f-959f-2ed95a4ff4be-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f8r88z\" (UID: \"64b64284-dc99-424f-959f-2ed95a4ff4be\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8r88z" Dec 09 12:25:22 crc kubenswrapper[4970]: I1209 12:25:22.267016 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8r88z" Dec 09 12:25:22 crc kubenswrapper[4970]: I1209 12:25:22.643471 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2b6b980c-c0f6-4a0f-a484-63e90086ba35-webhook-certs\") pod \"openstack-operator-controller-manager-586c894b5-qkxjg\" (UID: \"2b6b980c-c0f6-4a0f-a484-63e90086ba35\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-qkxjg" Dec 09 12:25:22 crc kubenswrapper[4970]: I1209 12:25:22.643646 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b6b980c-c0f6-4a0f-a484-63e90086ba35-metrics-certs\") pod \"openstack-operator-controller-manager-586c894b5-qkxjg\" (UID: \"2b6b980c-c0f6-4a0f-a484-63e90086ba35\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-qkxjg" Dec 09 12:25:22 crc kubenswrapper[4970]: I1209 12:25:22.650161 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b6b980c-c0f6-4a0f-a484-63e90086ba35-metrics-certs\") pod \"openstack-operator-controller-manager-586c894b5-qkxjg\" (UID: \"2b6b980c-c0f6-4a0f-a484-63e90086ba35\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-qkxjg" Dec 09 12:25:22 crc kubenswrapper[4970]: I1209 12:25:22.653171 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/2b6b980c-c0f6-4a0f-a484-63e90086ba35-webhook-certs\") pod \"openstack-operator-controller-manager-586c894b5-qkxjg\" (UID: \"2b6b980c-c0f6-4a0f-a484-63e90086ba35\") " pod="openstack-operators/openstack-operator-controller-manager-586c894b5-qkxjg" Dec 09 12:25:22 crc kubenswrapper[4970]: I1209 12:25:22.696807 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-586c894b5-qkxjg" Dec 09 12:25:24 crc kubenswrapper[4970]: E1209 12:25:24.788939 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Dec 09 12:25:24 crc kubenswrapper[4970]: E1209 12:25:24.789519 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-twzsw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-4w8vj_openstack-operators(9b238f80-d845-4791-a567-08f03974f612): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:25:24 crc kubenswrapper[4970]: E1209 12:25:24.863579 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.142:5001/openstack-k8s-operators/telemetry-operator:d3ea47b1122f22fdda4bc30dd95b8db90981973f" Dec 09 12:25:24 crc kubenswrapper[4970]: E1209 12:25:24.863651 4970 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.102.83.142:5001/openstack-k8s-operators/telemetry-operator:d3ea47b1122f22fdda4bc30dd95b8db90981973f" Dec 09 12:25:24 crc kubenswrapper[4970]: E1209 12:25:24.863815 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.142:5001/openstack-k8s-operators/telemetry-operator:d3ea47b1122f22fdda4bc30dd95b8db90981973f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8cbqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-797ff5dd46-9dm7h_openstack-operators(8c8f9bbb-5933-4c81-a7ff-db3f5f74835e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:25:25 crc kubenswrapper[4970]: E1209 12:25:25.288362 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Dec 09 12:25:25 crc kubenswrapper[4970]: E1209 12:25:25.288672 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zcjjr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-2pdw2_openstack-operators(40c6e40b-d51a-482f-b1b7-585a064c9d00): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:25:25 crc kubenswrapper[4970]: E1209 12:25:25.291190 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2pdw2" podUID="40c6e40b-d51a-482f-b1b7-585a064c9d00" Dec 09 12:25:26 crc kubenswrapper[4970]: I1209 12:25:26.689017 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-586c894b5-qkxjg"] Dec 09 12:25:26 crc kubenswrapper[4970]: I1209 12:25:26.710361 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8r88z"] Dec 09 12:25:26 crc kubenswrapper[4970]: I1209 12:25:26.838860 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-smd4b"] Dec 09 12:25:27 crc kubenswrapper[4970]: W1209 12:25:27.143492 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b6b980c_c0f6_4a0f_a484_63e90086ba35.slice/crio-2612451d45eedb7dac4c6bc89e0d97300ba5b7875d3594aa76c49456f5ba1bbe WatchSource:0}: Error finding container 2612451d45eedb7dac4c6bc89e0d97300ba5b7875d3594aa76c49456f5ba1bbe: Status 404 returned error can't find the container with 
id 2612451d45eedb7dac4c6bc89e0d97300ba5b7875d3594aa76c49456f5ba1bbe Dec 09 12:25:27 crc kubenswrapper[4970]: W1209 12:25:27.144313 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64b64284_dc99_424f_959f_2ed95a4ff4be.slice/crio-933133be219eb2756ac69e2a366edf026518f101c0430d24192ae90f148910ed WatchSource:0}: Error finding container 933133be219eb2756ac69e2a366edf026518f101c0430d24192ae90f148910ed: Status 404 returned error can't find the container with id 933133be219eb2756ac69e2a366edf026518f101c0430d24192ae90f148910ed Dec 09 12:25:27 crc kubenswrapper[4970]: W1209 12:25:27.145605 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58aea9ad_c500_4d8b_ae24_72d3b76e2c93.slice/crio-df1d570bb8161cc53cbb9c95416058c2367c487b3fd84c652fa673165c5f8426 WatchSource:0}: Error finding container df1d570bb8161cc53cbb9c95416058c2367c487b3fd84c652fa673165c5f8426: Status 404 returned error can't find the container with id df1d570bb8161cc53cbb9c95416058c2367c487b3fd84c652fa673165c5f8426 Dec 09 12:25:27 crc kubenswrapper[4970]: I1209 12:25:27.664569 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-smd4b" event={"ID":"58aea9ad-c500-4d8b-ae24-72d3b76e2c93","Type":"ContainerStarted","Data":"df1d570bb8161cc53cbb9c95416058c2367c487b3fd84c652fa673165c5f8426"} Dec 09 12:25:27 crc kubenswrapper[4970]: I1209 12:25:27.665869 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8r88z" event={"ID":"64b64284-dc99-424f-959f-2ed95a4ff4be","Type":"ContainerStarted","Data":"933133be219eb2756ac69e2a366edf026518f101c0430d24192ae90f148910ed"} Dec 09 12:25:27 crc kubenswrapper[4970]: I1209 12:25:27.667154 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-586c894b5-qkxjg" event={"ID":"2b6b980c-c0f6-4a0f-a484-63e90086ba35","Type":"ContainerStarted","Data":"2612451d45eedb7dac4c6bc89e0d97300ba5b7875d3594aa76c49456f5ba1bbe"} Dec 09 12:25:31 crc kubenswrapper[4970]: I1209 12:25:31.701871 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-jxd72" event={"ID":"4190e9a5-5bac-4645-a59d-5b4d6308f751","Type":"ContainerStarted","Data":"42db53e92e99b66efed9b626532e4fbe6a88706729773b1ba96087d6c8baffdc"} Dec 09 12:25:33 crc kubenswrapper[4970]: I1209 12:25:33.718153 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-hgtz4" event={"ID":"9a6da1be-c547-49c2-839c-aa549a5bb32b","Type":"ContainerStarted","Data":"ed9e2b106e89fefe59ad63c0c889be3a4c85e9463666b3a3e27ef408d181743b"} Dec 09 12:25:37 crc kubenswrapper[4970]: E1209 12:25:37.820688 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2pdw2" podUID="40c6e40b-d51a-482f-b1b7-585a064c9d00" Dec 09 12:25:38 crc kubenswrapper[4970]: E1209 12:25:38.102472 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: 
context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 09 12:25:38 crc kubenswrapper[4970]: E1209 12:25:38.102674 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l7jdp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-zhvtf_openstack-operators(c0d05cf0-6d0f-423b-84c7-6ec1ef1cacc7): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 09 12:25:38 crc kubenswrapper[4970]: E1209 12:25:38.103914 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-zhvtf" podUID="c0d05cf0-6d0f-423b-84c7-6ec1ef1cacc7" Dec 09 12:25:38 crc kubenswrapper[4970]: I1209 12:25:38.786569 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-mcv95" event={"ID":"567ca4c8-3a7b-40c8-98a9-fe162fd7a7b4","Type":"ContainerStarted","Data":"9b5fe75212d5bb8dcd7a2b32af336ae05e616af91dc39126089b4d43fdb3949b"} Dec 09 12:25:38 crc kubenswrapper[4970]: I1209 12:25:38.798700 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-dh6nb" event={"ID":"c0d580f7-424c-482e-a3c5-47ef1b9a6b79","Type":"ContainerStarted","Data":"8f83b9047929e98aa2d6368bafab4599dc788e178d9b5652d2fb72a464429816"} Dec 09 12:25:38 crc kubenswrapper[4970]: I1209 12:25:38.826283 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-t9b58" event={"ID":"774ea159-6ac5-4997-8630-db954e22ac28","Type":"ContainerStarted","Data":"2a6585be817360a61071bd3e87261e74aaf4e2c28241a3363bd53547b89e7198"} Dec 09 12:25:40 crc kubenswrapper[4970]: I1209 12:25:40.028076 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-controller-manager-586c894b5-qkxjg" event={"ID":"2b6b980c-c0f6-4a0f-a484-63e90086ba35","Type":"ContainerStarted","Data":"a57329b0eca3bb73763864b30728ed53195f8e434c88bc87d8b49343b5caceca"} Dec 09 12:25:40 crc kubenswrapper[4970]: I1209 12:25:40.028945 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-586c894b5-qkxjg" Dec 09 12:25:40 crc kubenswrapper[4970]: I1209 12:25:40.059869 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-586c894b5-qkxjg" podStartSLOduration=50.059851148 podStartE2EDuration="50.059851148s" podCreationTimestamp="2025-12-09 12:24:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:25:40.059783556 +0000 UTC m=+1152.620264607" watchObservedRunningTime="2025-12-09 12:25:40.059851148 +0000 UTC m=+1152.620332199" Dec 09 12:25:40 crc kubenswrapper[4970]: E1209 12:25:40.902517 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 09 12:25:40 crc kubenswrapper[4970]: E1209 12:25:40.902931 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sqf6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-mcq2h_openstack-operators(a8455557-ebef-4da8-af18-aff995d6c3c3): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Dec 09 12:25:40 crc kubenswrapper[4970]: E1209 12:25:40.904114 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-mcq2h" 
podUID="a8455557-ebef-4da8-af18-aff995d6c3c3" Dec 09 12:25:41 crc kubenswrapper[4970]: E1209 12:25:41.632367 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:4fa131a1b726b2d6468d461e7d8867a2157d5671f712461d8abd126155fdf9ce: Get \"https://quay.io/v2/openstack-k8s-operators/kube-rbac-proxy/blobs/sha256:4fa131a1b726b2d6468d461e7d8867a2157d5671f712461d8abd126155fdf9ce\": context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 09 12:25:41 crc kubenswrapper[4970]: E1209 12:25:41.632513 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b4mmm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-vb4rz_openstack-operators(13a1643a-f17b-435a-8ce6-60f253571bf2): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:4fa131a1b726b2d6468d461e7d8867a2157d5671f712461d8abd126155fdf9ce: Get \"https://quay.io/v2/openstack-k8s-operators/kube-rbac-proxy/blobs/sha256:4fa131a1b726b2d6468d461e7d8867a2157d5671f712461d8abd126155fdf9ce\": context canceled" logger="UnhandledError" Dec 09 12:25:41 crc kubenswrapper[4970]: E1209 12:25:41.633649 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:4fa131a1b726b2d6468d461e7d8867a2157d5671f712461d8abd126155fdf9ce: Get \\\"https://quay.io/v2/openstack-k8s-operators/kube-rbac-proxy/blobs/sha256:4fa131a1b726b2d6468d461e7d8867a2157d5671f712461d8abd126155fdf9ce\\\": context canceled\"]" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vb4rz" podUID="13a1643a-f17b-435a-8ce6-60f253571bf2" Dec 09 12:25:43 crc kubenswrapper[4970]: E1209 12:25:43.585191 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 09 12:25:43 crc kubenswrapper[4970]: E1209 
12:25:43.585659 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8cbqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-797ff5dd46-9dm7h_openstack-operators(8c8f9bbb-5933-4c81-a7ff-db3f5f74835e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:25:43 crc kubenswrapper[4970]: E1209 12:25:43.586815 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/telemetry-operator-controller-manager-797ff5dd46-9dm7h" podUID="8c8f9bbb-5933-4c81-a7ff-db3f5f74835e" Dec 09 12:25:44 crc kubenswrapper[4970]: E1209 12:25:44.793646 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-vf6rm" podUID="ccc49957-0ec2-4fa0-b2c0-fd86af0aa27e" Dec 09 12:25:44 crc kubenswrapper[4970]: E1209 12:25:44.872596 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-kg572" podUID="5a6d30e8-9d6b-47ef-9c17-351179967d04" Dec 09 12:25:44 crc kubenswrapper[4970]: E1209 12:25:44.927357 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-rcr7g" podUID="c60ee2ea-0489-422a-b829-e20040144965" Dec 09 12:25:44 crc kubenswrapper[4970]: E1209 12:25:44.951266 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-j6znq" podUID="661d70f4-459d-44fe-874b-e24f33654af6" Dec 09 12:25:45 crc kubenswrapper[4970]: I1209 12:25:45.111612 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-j6znq" event={"ID":"661d70f4-459d-44fe-874b-e24f33654af6","Type":"ContainerStarted","Data":"9aae15f7580840768d8569ed0d1f4e64ab75ac9a64dd9dbc1661840bbfc954b0"} Dec 09 12:25:45 crc kubenswrapper[4970]: I1209 12:25:45.138866 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-rcr7g" event={"ID":"c60ee2ea-0489-422a-b829-e20040144965","Type":"ContainerStarted","Data":"c2bf653da1c406a9cf7bcc93b8697da171e6a00f9cbce38a3b4959feff09d092"} Dec 09 12:25:45 crc kubenswrapper[4970]: I1209 12:25:45.185160 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-hgtz4" event={"ID":"9a6da1be-c547-49c2-839c-aa549a5bb32b","Type":"ContainerStarted","Data":"7924e8ebac8cd27eca74bbb7252222b6148c55cd61c6a667eeb87cf44e6a196f"} Dec 09 12:25:45 crc kubenswrapper[4970]: I1209 12:25:45.186351 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-hgtz4" Dec 09 12:25:45 crc kubenswrapper[4970]: I1209 12:25:45.189155 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-hgtz4" Dec 09 12:25:45 crc kubenswrapper[4970]: I1209 12:25:45.214750 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-mcq2h" event={"ID":"a8455557-ebef-4da8-af18-aff995d6c3c3","Type":"ContainerStarted","Data":"9dd2454ea421d5e6ef8227762644e143e6ecd952617ad1323d65a60f59ef62bf"} Dec 09 12:25:45 crc kubenswrapper[4970]: I1209 12:25:45.244229 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-vf6rm" event={"ID":"ccc49957-0ec2-4fa0-b2c0-fd86af0aa27e","Type":"ContainerStarted","Data":"2361fd03c2fee7b12f01314cefdb977eee7c527c5e8d3710cb4e47f636781276"} Dec 09 12:25:45 crc kubenswrapper[4970]: I1209 12:25:45.263134 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-kg572" event={"ID":"5a6d30e8-9d6b-47ef-9c17-351179967d04","Type":"ContainerStarted","Data":"840db226894ff0b3616dda58c7f50f712ec5274514a35c6ae2683e492ff17f1a"} Dec 09 12:25:45 crc kubenswrapper[4970]: I1209 12:25:45.266541 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vb4rz" event={"ID":"13a1643a-f17b-435a-8ce6-60f253571bf2","Type":"ContainerStarted","Data":"3d5a044478c67c5932aa760235cb76ae3dccfe27dc3eb32424116b014f8166c1"} Dec 09 12:25:45 crc kubenswrapper[4970]: I1209 12:25:45.289302 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-hgtz4" podStartSLOduration=3.3363849549999998 podStartE2EDuration="56.289284381s" podCreationTimestamp="2025-12-09 12:24:49 +0000 UTC" firstStartedPulling="2025-12-09 12:24:51.358414867 +0000 UTC m=+1103.918895918" 
lastFinishedPulling="2025-12-09 12:25:44.311314293 +0000 UTC m=+1156.871795344" observedRunningTime="2025-12-09 12:25:45.288580672 +0000 UTC m=+1157.849061723" watchObservedRunningTime="2025-12-09 12:25:45.289284381 +0000 UTC m=+1157.849765432" Dec 09 12:25:45 crc kubenswrapper[4970]: I1209 12:25:45.301274 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-zhvtf" event={"ID":"c0d05cf0-6d0f-423b-84c7-6ec1ef1cacc7","Type":"ContainerStarted","Data":"9511bde7fd2f7a9aeab93a241bea7c08f377256dd4a74270b5ad5fc84b3fc5f0"} Dec 09 12:25:45 crc kubenswrapper[4970]: I1209 12:25:45.353530 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-jxd72" event={"ID":"4190e9a5-5bac-4645-a59d-5b4d6308f751","Type":"ContainerStarted","Data":"fe61a9dc1d1917a287fc2d553cbe82e73278e41ad0b66cbb1a5b92dc6a5d8ffd"} Dec 09 12:25:45 crc kubenswrapper[4970]: I1209 12:25:45.354551 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-jxd72" Dec 09 12:25:45 crc kubenswrapper[4970]: I1209 12:25:45.369162 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-jxd72" Dec 09 12:25:45 crc kubenswrapper[4970]: I1209 12:25:45.382624 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-smd4b" event={"ID":"58aea9ad-c500-4d8b-ae24-72d3b76e2c93","Type":"ContainerStarted","Data":"e8143836c71c3319b0fd2b1517cdd34af950df91134add7729f1154e4dd7bdb1"} Dec 09 12:25:45 crc kubenswrapper[4970]: I1209 12:25:45.486787 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-jxd72" podStartSLOduration=5.402926996 podStartE2EDuration="56.486760407s" podCreationTimestamp="2025-12-09 12:24:49 +0000 UTC" firstStartedPulling="2025-12-09 12:24:53.225905049 +0000 UTC m=+1105.786386100" lastFinishedPulling="2025-12-09 12:25:44.30973846 +0000 UTC m=+1156.870219511" observedRunningTime="2025-12-09 12:25:45.406592447 +0000 UTC m=+1157.967073498" watchObservedRunningTime="2025-12-09 12:25:45.486760407 +0000 UTC m=+1158.047241468" Dec 09 12:25:45 crc kubenswrapper[4970]: E1209 12:25:45.913663 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-rt8fj" podUID="ddb7e093-4817-4ec2-9f81-9779ea2dddc9" Dec 09 12:25:45 crc kubenswrapper[4970]: E1209 12:25:45.926966 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-998648c74-pvc2v" podUID="d0573bd9-628b-42e9-a46c-5c8b47bd977f" Dec 09 12:25:45 crc kubenswrapper[4970]: E1209 12:25:45.927068 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5sz7k" podUID="3c4587c1-58af-46c3-b886-59f5e44220fb" Dec 09 12:25:45 crc 
kubenswrapper[4970]: E1209 12:25:45.938451 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9q7rj" podUID="0b0a360a-e011-471c-abfa-6b72d7bf3074" Dec 09 12:25:45 crc kubenswrapper[4970]: E1209 12:25:45.939367 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-4w8vj" podUID="9b238f80-d845-4791-a567-08f03974f612" Dec 09 12:25:46 crc kubenswrapper[4970]: I1209 12:25:46.010554 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:25:46 crc kubenswrapper[4970]: I1209 12:25:46.010629 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:25:46 crc kubenswrapper[4970]: I1209 12:25:46.010678 4970 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" Dec 09 12:25:46 crc kubenswrapper[4970]: I1209 12:25:46.011428 4970 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"956b314977002e8d06761bbcdccd0bb4775a0aa2c665b4316e98475f27106ef3"} pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 12:25:46 crc kubenswrapper[4970]: I1209 12:25:46.011485 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" containerID="cri-o://956b314977002e8d06761bbcdccd0bb4775a0aa2c665b4316e98475f27106ef3" gracePeriod=600 Dec 09 12:25:46 crc kubenswrapper[4970]: E1209 12:25:46.037542 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-rcmhf" podUID="e0b99715-61ea-4c11-b1df-814886d310a2" Dec 09 12:25:46 crc kubenswrapper[4970]: I1209 12:25:46.423649 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-smd4b" event={"ID":"58aea9ad-c500-4d8b-ae24-72d3b76e2c93","Type":"ContainerStarted","Data":"8778dc10d678fe632e5a72dcd01a34a577a860a0735443790bb506816bc937f2"} Dec 09 12:25:46 crc kubenswrapper[4970]: I1209 12:25:46.423912 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-smd4b" Dec 09 12:25:46 crc kubenswrapper[4970]: I1209 12:25:46.433267 4970 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5sz7k" event={"ID":"3c4587c1-58af-46c3-b886-59f5e44220fb","Type":"ContainerStarted","Data":"17e4fc5a2fdf6faa71e207c66d73659bea28e5fff78fe91bdf2bbbcdf8758820"} Dec 09 12:25:46 crc kubenswrapper[4970]: I1209 12:25:46.448804 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-dh6nb" event={"ID":"c0d580f7-424c-482e-a3c5-47ef1b9a6b79","Type":"ContainerStarted","Data":"8276d2ac591cb2ec54de776e4392f63566993a67c660608dfd59c3e5c20278a9"} Dec 09 12:25:46 crc kubenswrapper[4970]: I1209 12:25:46.449173 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-967d97867-dh6nb" Dec 09 12:25:46 crc kubenswrapper[4970]: I1209 12:25:46.450831 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-967d97867-dh6nb" Dec 09 12:25:46 crc kubenswrapper[4970]: I1209 12:25:46.461659 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-797ff5dd46-9dm7h" event={"ID":"8c8f9bbb-5933-4c81-a7ff-db3f5f74835e","Type":"ContainerStarted","Data":"1c45d6e2a56c16b026accb42165ae6af7e63cff57f6a1cbd6679d5af36ac763a"} Dec 09 12:25:46 crc kubenswrapper[4970]: I1209 12:25:46.471099 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-smd4b" podStartSLOduration=40.549755019 podStartE2EDuration="57.47107991s" podCreationTimestamp="2025-12-09 12:24:49 +0000 UTC" firstStartedPulling="2025-12-09 12:25:27.148599297 +0000 UTC m=+1139.709080358" lastFinishedPulling="2025-12-09 12:25:44.069924198 +0000 UTC m=+1156.630405249" observedRunningTime="2025-12-09 12:25:46.453995873 +0000 UTC m=+1159.014476944" watchObservedRunningTime="2025-12-09 12:25:46.47107991 +0000 UTC m=+1159.031560961" Dec 09 12:25:46 crc kubenswrapper[4970]: I1209 12:25:46.482907 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-mcq2h" event={"ID":"a8455557-ebef-4da8-af18-aff995d6c3c3","Type":"ContainerStarted","Data":"3a7ad6d144e86d27179b34bbf2485de078a221ec0748ad4060725f7f00da5fa8"} Dec 09 12:25:46 crc kubenswrapper[4970]: I1209 12:25:46.483707 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-mcq2h" Dec 09 12:25:46 crc kubenswrapper[4970]: I1209 12:25:46.507155 4970 generic.go:334] "Generic (PLEG): container finished" podID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerID="956b314977002e8d06761bbcdccd0bb4775a0aa2c665b4316e98475f27106ef3" exitCode=0 Dec 09 12:25:46 crc kubenswrapper[4970]: I1209 12:25:46.507290 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerDied","Data":"956b314977002e8d06761bbcdccd0bb4775a0aa2c665b4316e98475f27106ef3"} Dec 09 12:25:46 crc kubenswrapper[4970]: I1209 12:25:46.507358 4970 scope.go:117] "RemoveContainer" containerID="cba24c2dd5398483042c9e88f615ca653704b38e348947e32056cf594c3cf93e" Dec 09 12:25:46 crc kubenswrapper[4970]: I1209 12:25:46.533729 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-rt8fj" event={"ID":"ddb7e093-4817-4ec2-9f81-9779ea2dddc9","Type":"ContainerStarted","Data":"f3facd83edc6c0d951d43e644b0e2b843c179ccda3d1d081b7caa875f7bc4df3"} Dec 09 12:25:46 crc kubenswrapper[4970]: I1209 12:25:46.539173 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-967d97867-dh6nb" podStartSLOduration=6.359527443 podStartE2EDuration="57.53915363s" podCreationTimestamp="2025-12-09 12:24:49 +0000 UTC" firstStartedPulling="2025-12-09 12:24:53.128524169 +0000 UTC m=+1105.689005220" lastFinishedPulling="2025-12-09 12:25:44.308150356 +0000 UTC m=+1156.868631407" observedRunningTime="2025-12-09 12:25:46.49854165 +0000 UTC m=+1159.059022701" watchObservedRunningTime="2025-12-09 12:25:46.53915363 +0000 UTC m=+1159.099634671" Dec 09 12:25:46 crc kubenswrapper[4970]: I1209 12:25:46.569546 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-zhvtf" event={"ID":"c0d05cf0-6d0f-423b-84c7-6ec1ef1cacc7","Type":"ContainerStarted","Data":"727bd124017b612a8a361a423ae906c7e6ec1f36740155160eaa4fbf9e2cf48d"} Dec 09 12:25:46 crc kubenswrapper[4970]: I1209 12:25:46.570512 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-zhvtf" Dec 09 12:25:46 crc kubenswrapper[4970]: I1209 12:25:46.595991 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-mcq2h" podStartSLOduration=7.209832783 podStartE2EDuration="57.595962472s" podCreationTimestamp="2025-12-09 12:24:49 +0000 UTC" firstStartedPulling="2025-12-09 12:24:53.786164036 +0000 UTC m=+1106.346645087" lastFinishedPulling="2025-12-09 12:25:44.172293725 +0000 UTC m=+1156.732774776" observedRunningTime="2025-12-09 12:25:46.58674035 +0000 UTC m=+1159.147221391" watchObservedRunningTime="2025-12-09 12:25:46.595962472 +0000 UTC m=+1159.156443523" Dec 09 12:25:46 crc kubenswrapper[4970]: I1209 12:25:46.611508 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-t9b58" event={"ID":"774ea159-6ac5-4997-8630-db954e22ac28","Type":"ContainerStarted","Data":"07205d27b7741d9ab9d7282a939588dd34522ab31f0608fbb378db42b575d96a"} Dec 09 12:25:46 crc kubenswrapper[4970]: I1209 12:25:46.612568 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-t9b58" Dec 09 12:25:46 crc kubenswrapper[4970]: I1209 12:25:46.638519 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-t9b58" Dec 09 12:25:46 crc kubenswrapper[4970]: I1209 12:25:46.643908 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-4w8vj" event={"ID":"9b238f80-d845-4791-a567-08f03974f612","Type":"ContainerStarted","Data":"5f581c0f86f8fe2e9297983c4cb1ec72bfe3e4ec0acc1d2e24534f9eaac5f753"} Dec 09 12:25:46 crc kubenswrapper[4970]: I1209 12:25:46.674597 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-pvc2v" 
event={"ID":"d0573bd9-628b-42e9-a46c-5c8b47bd977f","Type":"ContainerStarted","Data":"24ccd2d9daee10c7f82d32e173df2b33941de8b129e967697ffbc01f9c0f6a93"} Dec 09 12:25:46 crc kubenswrapper[4970]: I1209 12:25:46.675642 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-t9b58" podStartSLOduration=6.020056806 podStartE2EDuration="57.675632318s" podCreationTimestamp="2025-12-09 12:24:49 +0000 UTC" firstStartedPulling="2025-12-09 12:24:52.67620033 +0000 UTC m=+1105.236681381" lastFinishedPulling="2025-12-09 12:25:44.331775842 +0000 UTC m=+1156.892256893" observedRunningTime="2025-12-09 12:25:46.675163386 +0000 UTC m=+1159.235644437" watchObservedRunningTime="2025-12-09 12:25:46.675632318 +0000 UTC m=+1159.236113369" Dec 09 12:25:46 crc kubenswrapper[4970]: I1209 12:25:46.710523 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8r88z" event={"ID":"64b64284-dc99-424f-959f-2ed95a4ff4be","Type":"ContainerStarted","Data":"54f1ac8f1e1a2fa73cdfaf42e7d9df8abba9d982a4e603932b2d757606baea0d"} Dec 09 12:25:46 crc kubenswrapper[4970]: I1209 12:25:46.731051 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8r88z" Dec 09 12:25:46 crc kubenswrapper[4970]: I1209 12:25:46.736529 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-zhvtf" podStartSLOduration=7.187566245 podStartE2EDuration="57.736506502s" podCreationTimestamp="2025-12-09 12:24:49 +0000 UTC" firstStartedPulling="2025-12-09 12:24:53.784878571 +0000 UTC m=+1106.345359622" lastFinishedPulling="2025-12-09 12:25:44.333818828 +0000 UTC m=+1156.894299879" observedRunningTime="2025-12-09 12:25:46.732693948 +0000 UTC m=+1159.293174999" watchObservedRunningTime="2025-12-09 12:25:46.736506502 +0000 UTC m=+1159.296987553" Dec 09 12:25:46 crc kubenswrapper[4970]: I1209 12:25:46.795521 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-rcmhf" event={"ID":"e0b99715-61ea-4c11-b1df-814886d310a2","Type":"ContainerStarted","Data":"7c6e1ae47accc7922138439d85391e89064ed6c168327e94905a1833e6e4c6c7"} Dec 09 12:25:46 crc kubenswrapper[4970]: I1209 12:25:46.839143 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-mcv95" event={"ID":"567ca4c8-3a7b-40c8-98a9-fe162fd7a7b4","Type":"ContainerStarted","Data":"3d399a22aeed55bacffdb2a725a3a7c4445000a536ab0209279d40e73e8971ae"} Dec 09 12:25:46 crc kubenswrapper[4970]: I1209 12:25:46.840155 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-mcv95" Dec 09 12:25:46 crc kubenswrapper[4970]: I1209 12:25:46.847907 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-mcv95" Dec 09 12:25:46 crc kubenswrapper[4970]: I1209 12:25:46.850542 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9q7rj" event={"ID":"0b0a360a-e011-471c-abfa-6b72d7bf3074","Type":"ContainerStarted","Data":"90ef3e4bed81901416ef9ff0cd2803fdab8bf88d0dd5d0f3ea7d70d9c7b9964a"} Dec 09 12:25:46 crc 
kubenswrapper[4970]: I1209 12:25:46.933806 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8r88z" podStartSLOduration=40.972648323 podStartE2EDuration="57.933783282s" podCreationTimestamp="2025-12-09 12:24:49 +0000 UTC" firstStartedPulling="2025-12-09 12:25:27.148575476 +0000 UTC m=+1139.709056527" lastFinishedPulling="2025-12-09 12:25:44.109710435 +0000 UTC m=+1156.670191486" observedRunningTime="2025-12-09 12:25:46.924550259 +0000 UTC m=+1159.485031310" watchObservedRunningTime="2025-12-09 12:25:46.933783282 +0000 UTC m=+1159.494264333" Dec 09 12:25:47 crc kubenswrapper[4970]: I1209 12:25:47.034071 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-mcv95" podStartSLOduration=7.743492393 podStartE2EDuration="58.034051481s" podCreationTimestamp="2025-12-09 12:24:49 +0000 UTC" firstStartedPulling="2025-12-09 12:24:53.982310695 +0000 UTC m=+1106.542791746" lastFinishedPulling="2025-12-09 12:25:44.272869783 +0000 UTC m=+1156.833350834" observedRunningTime="2025-12-09 12:25:47.033807944 +0000 UTC m=+1159.594288995" watchObservedRunningTime="2025-12-09 12:25:47.034051481 +0000 UTC m=+1159.594532532" Dec 09 12:25:47 crc kubenswrapper[4970]: I1209 12:25:47.858171 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-797ff5dd46-9dm7h" event={"ID":"8c8f9bbb-5933-4c81-a7ff-db3f5f74835e","Type":"ContainerStarted","Data":"a1315e955a725ae61c0a7fec3827606d875532baa641b1c7167c7275bdc294f6"} Dec 09 12:25:47 crc kubenswrapper[4970]: I1209 12:25:47.858303 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-797ff5dd46-9dm7h" Dec 09 12:25:47 crc kubenswrapper[4970]: I1209 12:25:47.860035 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-j6znq" event={"ID":"661d70f4-459d-44fe-874b-e24f33654af6","Type":"ContainerStarted","Data":"8f117b4d18a2e49bfe3457a2572a4aef315776ad2c16b01570052a0f589054e2"} Dec 09 12:25:47 crc kubenswrapper[4970]: I1209 12:25:47.860088 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-j6znq" Dec 09 12:25:47 crc kubenswrapper[4970]: I1209 12:25:47.862643 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-rcr7g" event={"ID":"c60ee2ea-0489-422a-b829-e20040144965","Type":"ContainerStarted","Data":"b7fc095fa60809ca693edd464a61859e18f0a6ada7b23a64c6cfc253994b8239"} Dec 09 12:25:47 crc kubenswrapper[4970]: I1209 12:25:47.864713 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vb4rz" event={"ID":"13a1643a-f17b-435a-8ce6-60f253571bf2","Type":"ContainerStarted","Data":"d864bdb320a2d9c279cfd784d16aa7211e27eb7885e7943545ac95222f7ef2b0"} Dec 09 12:25:47 crc kubenswrapper[4970]: I1209 12:25:47.865090 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vb4rz" Dec 09 12:25:47 crc kubenswrapper[4970]: I1209 12:25:47.867363 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8r88z" 
event={"ID":"64b64284-dc99-424f-959f-2ed95a4ff4be","Type":"ContainerStarted","Data":"fc819d23b1482c7323a67727006458e3d6f61a6075ef298080068f32e5bda05f"} Dec 09 12:25:47 crc kubenswrapper[4970]: I1209 12:25:47.870351 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerStarted","Data":"cb0f9b4763d3228bb2f722a577d4a09c1556ae0fa1f243c7931b87527311654a"} Dec 09 12:25:47 crc kubenswrapper[4970]: I1209 12:25:47.884481 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-797ff5dd46-9dm7h" podStartSLOduration=8.336868335 podStartE2EDuration="58.884455435s" podCreationTimestamp="2025-12-09 12:24:49 +0000 UTC" firstStartedPulling="2025-12-09 12:24:53.789492097 +0000 UTC m=+1106.349973148" lastFinishedPulling="2025-12-09 12:25:44.337079197 +0000 UTC m=+1156.897560248" observedRunningTime="2025-12-09 12:25:47.880449076 +0000 UTC m=+1160.440930127" watchObservedRunningTime="2025-12-09 12:25:47.884455435 +0000 UTC m=+1160.444936496" Dec 09 12:25:47 crc kubenswrapper[4970]: I1209 12:25:47.908594 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vb4rz" podStartSLOduration=7.791949248 podStartE2EDuration="58.908575654s" podCreationTimestamp="2025-12-09 12:24:49 +0000 UTC" firstStartedPulling="2025-12-09 12:24:53.101908182 +0000 UTC m=+1105.662389223" lastFinishedPulling="2025-12-09 12:25:44.218534578 +0000 UTC m=+1156.779015629" observedRunningTime="2025-12-09 12:25:47.901903352 +0000 UTC m=+1160.462384403" watchObservedRunningTime="2025-12-09 12:25:47.908575654 +0000 UTC m=+1160.469056705" Dec 09 12:25:47 crc kubenswrapper[4970]: I1209 12:25:47.943973 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-j6znq" podStartSLOduration=6.068095221 podStartE2EDuration="58.943954381s" podCreationTimestamp="2025-12-09 12:24:49 +0000 UTC" firstStartedPulling="2025-12-09 12:24:53.321339317 +0000 UTC m=+1105.881820368" lastFinishedPulling="2025-12-09 12:25:46.197198487 +0000 UTC m=+1158.757679528" observedRunningTime="2025-12-09 12:25:47.940408054 +0000 UTC m=+1160.500889105" watchObservedRunningTime="2025-12-09 12:25:47.943954381 +0000 UTC m=+1160.504435432" Dec 09 12:25:48 crc kubenswrapper[4970]: I1209 12:25:48.905345 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-vf6rm" event={"ID":"ccc49957-0ec2-4fa0-b2c0-fd86af0aa27e","Type":"ContainerStarted","Data":"aff86b78388255c23679566031935a1737e6624fa01979a1c36e82714dda086d"} Dec 09 12:25:48 crc kubenswrapper[4970]: I1209 12:25:48.912353 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-kg572" event={"ID":"5a6d30e8-9d6b-47ef-9c17-351179967d04","Type":"ContainerStarted","Data":"a8d017eb237345f6841e70a2a9e9bce40abf52c78704544b4953675b418ebff0"} Dec 09 12:25:48 crc kubenswrapper[4970]: I1209 12:25:48.927622 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-rcr7g" podStartSLOduration=6.830942372 podStartE2EDuration="59.927606545s" podCreationTimestamp="2025-12-09 12:24:49 +0000 UTC" firstStartedPulling="2025-12-09 12:24:53.250859341 
+0000 UTC m=+1105.811340392" lastFinishedPulling="2025-12-09 12:25:46.347523514 +0000 UTC m=+1158.908004565" observedRunningTime="2025-12-09 12:25:48.927218104 +0000 UTC m=+1161.487699155" watchObservedRunningTime="2025-12-09 12:25:48.927606545 +0000 UTC m=+1161.488087596" Dec 09 12:25:49 crc kubenswrapper[4970]: I1209 12:25:49.822377 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-rcr7g" Dec 09 12:25:49 crc kubenswrapper[4970]: I1209 12:25:49.919917 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-kg572" Dec 09 12:25:49 crc kubenswrapper[4970]: I1209 12:25:49.920718 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-vf6rm" Dec 09 12:25:49 crc kubenswrapper[4970]: I1209 12:25:49.942639 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-kg572" podStartSLOduration=6.809125445 podStartE2EDuration="1m0.942625226s" podCreationTimestamp="2025-12-09 12:24:49 +0000 UTC" firstStartedPulling="2025-12-09 12:24:53.783501594 +0000 UTC m=+1106.343982645" lastFinishedPulling="2025-12-09 12:25:47.917001375 +0000 UTC m=+1160.477482426" observedRunningTime="2025-12-09 12:25:49.938675558 +0000 UTC m=+1162.499156609" watchObservedRunningTime="2025-12-09 12:25:49.942625226 +0000 UTC m=+1162.503106267" Dec 09 12:25:49 crc kubenswrapper[4970]: I1209 12:25:49.964950 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-vf6rm" podStartSLOduration=5.72585674 podStartE2EDuration="1m0.964927116s" podCreationTimestamp="2025-12-09 12:24:49 +0000 UTC" firstStartedPulling="2025-12-09 12:24:52.67653086 +0000 UTC m=+1105.237011911" lastFinishedPulling="2025-12-09 12:25:47.915601236 +0000 UTC m=+1160.476082287" observedRunningTime="2025-12-09 12:25:49.962771307 +0000 UTC m=+1162.523252358" watchObservedRunningTime="2025-12-09 12:25:49.964927116 +0000 UTC m=+1162.525408167" Dec 09 12:25:50 crc kubenswrapper[4970]: I1209 12:25:50.541028 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vb4rz" Dec 09 12:25:50 crc kubenswrapper[4970]: I1209 12:25:50.856811 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-zhvtf" Dec 09 12:25:50 crc kubenswrapper[4970]: I1209 12:25:50.931477 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-pvc2v" event={"ID":"d0573bd9-628b-42e9-a46c-5c8b47bd977f","Type":"ContainerStarted","Data":"98cad8831b21e7df2a03b820de3fd6b39a9af1ab7eb88fe0243f25d9d02a53aa"} Dec 09 12:25:50 crc kubenswrapper[4970]: I1209 12:25:50.932452 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-pvc2v" Dec 09 12:25:50 crc kubenswrapper[4970]: I1209 12:25:50.934213 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2pdw2" 
event={"ID":"40c6e40b-d51a-482f-b1b7-585a064c9d00","Type":"ContainerStarted","Data":"93dd5a1b2668dab7226f0a2868ce0e7ea6687e229c9f6817c5f7980d51c6d01f"} Dec 09 12:25:50 crc kubenswrapper[4970]: I1209 12:25:50.936297 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-rcmhf" event={"ID":"e0b99715-61ea-4c11-b1df-814886d310a2","Type":"ContainerStarted","Data":"19161aa05f4aa6e28071a530aa3cb1fd7148f8766da3dbf6d5ec5dceaf1ccb3c"} Dec 09 12:25:50 crc kubenswrapper[4970]: I1209 12:25:50.936432 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-rcmhf" Dec 09 12:25:50 crc kubenswrapper[4970]: I1209 12:25:50.937995 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5sz7k" event={"ID":"3c4587c1-58af-46c3-b886-59f5e44220fb","Type":"ContainerStarted","Data":"3e131add90dcb4915380c97c4d4ce6f55cf7719373b783bf2e91060663dad1a0"} Dec 09 12:25:50 crc kubenswrapper[4970]: I1209 12:25:50.938122 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5sz7k" Dec 09 12:25:50 crc kubenswrapper[4970]: I1209 12:25:50.942053 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-rt8fj" event={"ID":"ddb7e093-4817-4ec2-9f81-9779ea2dddc9","Type":"ContainerStarted","Data":"eadd42968c8c83b3a2e5a854d85e0aa8469b187b8d353c94696b27e9ae24d396"} Dec 09 12:25:50 crc kubenswrapper[4970]: I1209 12:25:50.942186 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-rt8fj" Dec 09 12:25:50 crc kubenswrapper[4970]: I1209 12:25:50.944059 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9q7rj" event={"ID":"0b0a360a-e011-471c-abfa-6b72d7bf3074","Type":"ContainerStarted","Data":"5c2af7a203a71d60d0b2cb4c1021848aa9b5575b644791df65ec0a7cdb71aa4d"} Dec 09 12:25:50 crc kubenswrapper[4970]: I1209 12:25:50.944134 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9q7rj" Dec 09 12:25:50 crc kubenswrapper[4970]: I1209 12:25:50.945636 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-4w8vj" event={"ID":"9b238f80-d845-4791-a567-08f03974f612","Type":"ContainerStarted","Data":"40916ac270d4d0a6c10e323daa745a26dffa587492fad5964fd01415900114aa"} Dec 09 12:25:50 crc kubenswrapper[4970]: I1209 12:25:50.967156 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-pvc2v" podStartSLOduration=5.863042909 podStartE2EDuration="1m1.967136298s" podCreationTimestamp="2025-12-09 12:24:49 +0000 UTC" firstStartedPulling="2025-12-09 12:24:53.946459876 +0000 UTC m=+1106.506940927" lastFinishedPulling="2025-12-09 12:25:50.050553265 +0000 UTC m=+1162.611034316" observedRunningTime="2025-12-09 12:25:50.960699452 +0000 UTC m=+1163.521180503" watchObservedRunningTime="2025-12-09 12:25:50.967136298 +0000 UTC m=+1163.527617359" Dec 09 12:25:51 crc kubenswrapper[4970]: I1209 12:25:51.004699 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/nova-operator-controller-manager-697bc559fc-4w8vj" podStartSLOduration=5.186598476 podStartE2EDuration="1m2.004681573s" podCreationTimestamp="2025-12-09 12:24:49 +0000 UTC" firstStartedPulling="2025-12-09 12:24:53.231392769 +0000 UTC m=+1105.791873810" lastFinishedPulling="2025-12-09 12:25:50.049475856 +0000 UTC m=+1162.609956907" observedRunningTime="2025-12-09 12:25:50.981297334 +0000 UTC m=+1163.541778385" watchObservedRunningTime="2025-12-09 12:25:51.004681573 +0000 UTC m=+1163.565162624" Dec 09 12:25:51 crc kubenswrapper[4970]: I1209 12:25:51.010277 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9q7rj" podStartSLOduration=4.392886231 podStartE2EDuration="1m2.010259596s" podCreationTimestamp="2025-12-09 12:24:49 +0000 UTC" firstStartedPulling="2025-12-09 12:24:52.480778741 +0000 UTC m=+1105.041259792" lastFinishedPulling="2025-12-09 12:25:50.098152106 +0000 UTC m=+1162.658633157" observedRunningTime="2025-12-09 12:25:51.002440042 +0000 UTC m=+1163.562921093" watchObservedRunningTime="2025-12-09 12:25:51.010259596 +0000 UTC m=+1163.570740647" Dec 09 12:25:51 crc kubenswrapper[4970]: I1209 12:25:51.023190 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-rcmhf" podStartSLOduration=4.144115794 podStartE2EDuration="1m2.023166748s" podCreationTimestamp="2025-12-09 12:24:49 +0000 UTC" firstStartedPulling="2025-12-09 12:24:52.544085371 +0000 UTC m=+1105.104566422" lastFinishedPulling="2025-12-09 12:25:50.423136325 +0000 UTC m=+1162.983617376" observedRunningTime="2025-12-09 12:25:51.022576662 +0000 UTC m=+1163.583057713" watchObservedRunningTime="2025-12-09 12:25:51.023166748 +0000 UTC m=+1163.583647799" Dec 09 12:25:51 crc kubenswrapper[4970]: I1209 12:25:51.043309 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5sz7k" podStartSLOduration=5.271572849 podStartE2EDuration="1m2.043283828s" podCreationTimestamp="2025-12-09 12:24:49 +0000 UTC" firstStartedPulling="2025-12-09 12:24:53.146644904 +0000 UTC m=+1105.707125965" lastFinishedPulling="2025-12-09 12:25:49.918355893 +0000 UTC m=+1162.478836944" observedRunningTime="2025-12-09 12:25:51.039983778 +0000 UTC m=+1163.600464849" watchObservedRunningTime="2025-12-09 12:25:51.043283828 +0000 UTC m=+1163.603764889" Dec 09 12:25:51 crc kubenswrapper[4970]: I1209 12:25:51.058824 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2pdw2" podStartSLOduration=4.432895506 podStartE2EDuration="1m1.058803232s" podCreationTimestamp="2025-12-09 12:24:50 +0000 UTC" firstStartedPulling="2025-12-09 12:24:53.796770816 +0000 UTC m=+1106.357251867" lastFinishedPulling="2025-12-09 12:25:50.422678542 +0000 UTC m=+1162.983159593" observedRunningTime="2025-12-09 12:25:51.056740546 +0000 UTC m=+1163.617221607" watchObservedRunningTime="2025-12-09 12:25:51.058803232 +0000 UTC m=+1163.619284283" Dec 09 12:25:51 crc kubenswrapper[4970]: I1209 12:25:51.126009 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-797ff5dd46-9dm7h" Dec 09 12:25:51 crc kubenswrapper[4970]: I1209 12:25:51.149069 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-rt8fj" podStartSLOduration=5.410101193 podStartE2EDuration="1m2.149051378s" podCreationTimestamp="2025-12-09 12:24:49 +0000 UTC" firstStartedPulling="2025-12-09 12:24:53.125882476 +0000 UTC m=+1105.686363527" lastFinishedPulling="2025-12-09 12:25:49.864832661 +0000 UTC m=+1162.425313712" observedRunningTime="2025-12-09 12:25:51.096291886 +0000 UTC m=+1163.656772947" watchObservedRunningTime="2025-12-09 12:25:51.149051378 +0000 UTC m=+1163.709532429" Dec 09 12:25:51 crc kubenswrapper[4970]: I1209 12:25:51.156198 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-mcq2h" Dec 09 12:25:51 crc kubenswrapper[4970]: I1209 12:25:51.370119 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-smd4b" Dec 09 12:25:51 crc kubenswrapper[4970]: I1209 12:25:51.953336 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-4w8vj" Dec 09 12:25:52 crc kubenswrapper[4970]: I1209 12:25:52.272683 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8r88z" Dec 09 12:25:52 crc kubenswrapper[4970]: I1209 12:25:52.702503 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-586c894b5-qkxjg" Dec 09 12:25:59 crc kubenswrapper[4970]: I1209 12:25:59.800162 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-9q7rj" Dec 09 12:25:59 crc kubenswrapper[4970]: I1209 12:25:59.828519 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-rcr7g" Dec 09 12:25:59 crc kubenswrapper[4970]: I1209 12:25:59.845050 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-vf6rm" Dec 09 12:25:59 crc kubenswrapper[4970]: I1209 12:25:59.936671 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-rcmhf" Dec 09 12:26:00 crc kubenswrapper[4970]: I1209 12:26:00.152220 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-5sz7k" Dec 09 12:26:00 crc kubenswrapper[4970]: I1209 12:26:00.407941 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-4w8vj" Dec 09 12:26:00 crc kubenswrapper[4970]: I1209 12:26:00.416544 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-rt8fj" Dec 09 12:26:00 crc kubenswrapper[4970]: I1209 12:26:00.445161 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-pvc2v" Dec 09 12:26:00 crc kubenswrapper[4970]: I1209 12:26:00.675972 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-j6znq" Dec 09 12:26:01 
Dec 09 12:26:01 crc kubenswrapper[4970]: I1209 12:26:01.140362 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-kg572"
Dec 09 12:26:17 crc kubenswrapper[4970]: I1209 12:26:17.960613 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bmn7r"]
Dec 09 12:26:17 crc kubenswrapper[4970]: I1209 12:26:17.967436 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bmn7r"
Dec 09 12:26:17 crc kubenswrapper[4970]: I1209 12:26:17.976928 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Dec 09 12:26:17 crc kubenswrapper[4970]: I1209 12:26:17.976928 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Dec 09 12:26:17 crc kubenswrapper[4970]: I1209 12:26:17.977030 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Dec 09 12:26:17 crc kubenswrapper[4970]: I1209 12:26:17.977151 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-slwbf"
Dec 09 12:26:17 crc kubenswrapper[4970]: I1209 12:26:17.981570 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bmn7r"]
Dec 09 12:26:18 crc kubenswrapper[4970]: I1209 12:26:18.049628 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-gq2j7"]
Dec 09 12:26:18 crc kubenswrapper[4970]: I1209 12:26:18.051393 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-gq2j7"
Dec 09 12:26:18 crc kubenswrapper[4970]: I1209 12:26:18.060791 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Dec 09 12:26:18 crc kubenswrapper[4970]: I1209 12:26:18.071974 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-gq2j7"]
Dec 09 12:26:18 crc kubenswrapper[4970]: I1209 12:26:18.129063 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd4h7\" (UniqueName: \"kubernetes.io/projected/6e58f7b2-2023-45f5-af9e-dc4f225a30c8-kube-api-access-hd4h7\") pod \"dnsmasq-dns-675f4bcbfc-bmn7r\" (UID: \"6e58f7b2-2023-45f5-af9e-dc4f225a30c8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bmn7r"
Dec 09 12:26:18 crc kubenswrapper[4970]: I1209 12:26:18.129227 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e58f7b2-2023-45f5-af9e-dc4f225a30c8-config\") pod \"dnsmasq-dns-675f4bcbfc-bmn7r\" (UID: \"6e58f7b2-2023-45f5-af9e-dc4f225a30c8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bmn7r"
Dec 09 12:26:18 crc kubenswrapper[4970]: I1209 12:26:18.230785 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd4h7\" (UniqueName: \"kubernetes.io/projected/6e58f7b2-2023-45f5-af9e-dc4f225a30c8-kube-api-access-hd4h7\") pod \"dnsmasq-dns-675f4bcbfc-bmn7r\" (UID: \"6e58f7b2-2023-45f5-af9e-dc4f225a30c8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bmn7r"
Dec 09 12:26:18 crc kubenswrapper[4970]: I1209 12:26:18.230898 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e58f7b2-2023-45f5-af9e-dc4f225a30c8-config\") pod \"dnsmasq-dns-675f4bcbfc-bmn7r\" (UID:
\"6e58f7b2-2023-45f5-af9e-dc4f225a30c8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bmn7r" Dec 09 12:26:18 crc kubenswrapper[4970]: I1209 12:26:18.230940 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ea82bf6-58e0-4894-bd7b-34a965c37c23-config\") pod \"dnsmasq-dns-78dd6ddcc-gq2j7\" (UID: \"3ea82bf6-58e0-4894-bd7b-34a965c37c23\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gq2j7" Dec 09 12:26:18 crc kubenswrapper[4970]: I1209 12:26:18.230970 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ea82bf6-58e0-4894-bd7b-34a965c37c23-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-gq2j7\" (UID: \"3ea82bf6-58e0-4894-bd7b-34a965c37c23\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gq2j7" Dec 09 12:26:18 crc kubenswrapper[4970]: I1209 12:26:18.231059 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2wtx\" (UniqueName: \"kubernetes.io/projected/3ea82bf6-58e0-4894-bd7b-34a965c37c23-kube-api-access-q2wtx\") pod \"dnsmasq-dns-78dd6ddcc-gq2j7\" (UID: \"3ea82bf6-58e0-4894-bd7b-34a965c37c23\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gq2j7" Dec 09 12:26:18 crc kubenswrapper[4970]: I1209 12:26:18.232059 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e58f7b2-2023-45f5-af9e-dc4f225a30c8-config\") pod \"dnsmasq-dns-675f4bcbfc-bmn7r\" (UID: \"6e58f7b2-2023-45f5-af9e-dc4f225a30c8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bmn7r" Dec 09 12:26:18 crc kubenswrapper[4970]: I1209 12:26:18.266401 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd4h7\" (UniqueName: \"kubernetes.io/projected/6e58f7b2-2023-45f5-af9e-dc4f225a30c8-kube-api-access-hd4h7\") pod \"dnsmasq-dns-675f4bcbfc-bmn7r\" (UID: \"6e58f7b2-2023-45f5-af9e-dc4f225a30c8\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bmn7r" Dec 09 12:26:18 crc kubenswrapper[4970]: I1209 12:26:18.310795 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bmn7r"
Dec 09 12:26:18 crc kubenswrapper[4970]: I1209 12:26:18.332981 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ea82bf6-58e0-4894-bd7b-34a965c37c23-config\") pod \"dnsmasq-dns-78dd6ddcc-gq2j7\" (UID: \"3ea82bf6-58e0-4894-bd7b-34a965c37c23\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gq2j7"
Dec 09 12:26:18 crc kubenswrapper[4970]: I1209 12:26:18.333046 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ea82bf6-58e0-4894-bd7b-34a965c37c23-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-gq2j7\" (UID: \"3ea82bf6-58e0-4894-bd7b-34a965c37c23\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gq2j7"
Dec 09 12:26:18 crc kubenswrapper[4970]: I1209 12:26:18.333122 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2wtx\" (UniqueName: \"kubernetes.io/projected/3ea82bf6-58e0-4894-bd7b-34a965c37c23-kube-api-access-q2wtx\") pod \"dnsmasq-dns-78dd6ddcc-gq2j7\" (UID: \"3ea82bf6-58e0-4894-bd7b-34a965c37c23\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gq2j7"
Dec 09 12:26:18 crc kubenswrapper[4970]: I1209 12:26:18.338160 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ea82bf6-58e0-4894-bd7b-34a965c37c23-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-gq2j7\" (UID: \"3ea82bf6-58e0-4894-bd7b-34a965c37c23\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gq2j7"
Dec 09 12:26:18 crc kubenswrapper[4970]: I1209 12:26:18.339077 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ea82bf6-58e0-4894-bd7b-34a965c37c23-config\") pod \"dnsmasq-dns-78dd6ddcc-gq2j7\" (UID: \"3ea82bf6-58e0-4894-bd7b-34a965c37c23\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gq2j7"
Dec 09 12:26:18 crc kubenswrapper[4970]: I1209 12:26:18.365129 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2wtx\" (UniqueName: \"kubernetes.io/projected/3ea82bf6-58e0-4894-bd7b-34a965c37c23-kube-api-access-q2wtx\") pod \"dnsmasq-dns-78dd6ddcc-gq2j7\" (UID: \"3ea82bf6-58e0-4894-bd7b-34a965c37c23\") " pod="openstack/dnsmasq-dns-78dd6ddcc-gq2j7"
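Each volume above walks the same reconciler ladder: VerifyControllerAttachedVolume confirms the volume in the desired state of world, MountVolume starts, and MountVolume.SetUp succeeds. For dnsmasq-dns-78dd6ddcc-gq2j7 that covers two ConfigMap volumes, "config" and "dns-svc", plus the auto-injected projected service-account token volume kube-api-access-q2wtx. A hedged sketch of the pod-spec declarations behind those entries follows; the volume names come from the log, while the backing ConfigMap names are inferred from the reflector lines above ("dns-svc" is cached there, and "dns" is our guess for the "config" volume) and may differ in the real manifest:

```go
// Sketch of the volume declarations that drive the reconciler entries
// above. The kube-api-access-* projected token volume is injected by the
// API server automatically and is intentionally not declared here.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	volumes := []corev1.Volume{
		{
			Name: "config", // volume name from the log
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					// Backing ConfigMap name is an assumption.
					LocalObjectReference: corev1.LocalObjectReference{Name: "dns"},
				},
			},
		},
		{
			Name: "dns-svc", // volume name from the log
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "dns-svc"},
				},
			},
		},
	}
	fmt.Printf("%d volumes declared\n", len(volumes))
}
```

Two further observations about the entries that follow. The succession of dnsmasq-dns ReplicaSet hashes (675f4bcbfc, 78dd6ddcc, 666b6646f7, 57d769cc4f) with interleaved SyncLoop DELETE/ADD pairs is consistent with the Deployment being re-rendered several times in quick succession as its DNS configuration changes, rather than with crashes. And the W-level "Failed to process watch event ... 404" messages are cadvisor racing CRI-O on freshly created container cgroups; they are generally transient noise, as the very next PLEG events show the same container IDs starting normally.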
Dec 09 12:26:18 crc kubenswrapper[4970]: I1209 12:26:18.385452 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-gq2j7"
Dec 09 12:26:18 crc kubenswrapper[4970]: W1209 12:26:18.832956 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e58f7b2_2023_45f5_af9e_dc4f225a30c8.slice/crio-3577f15685d272aa84f3a70ffeba5c210f60e7002a2a36b2d2cb871a523446fd WatchSource:0}: Error finding container 3577f15685d272aa84f3a70ffeba5c210f60e7002a2a36b2d2cb871a523446fd: Status 404 returned error can't find the container with id 3577f15685d272aa84f3a70ffeba5c210f60e7002a2a36b2d2cb871a523446fd
Dec 09 12:26:18 crc kubenswrapper[4970]: I1209 12:26:18.833520 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bmn7r"]
Dec 09 12:26:18 crc kubenswrapper[4970]: I1209 12:26:18.989689 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-gq2j7"]
Dec 09 12:26:18 crc kubenswrapper[4970]: W1209 12:26:18.999509 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ea82bf6_58e0_4894_bd7b_34a965c37c23.slice/crio-7df5ea6fb2c2a6871a60e7a99d329704d0921d8ca984afa43f3934e9b47c0239 WatchSource:0}: Error finding container 7df5ea6fb2c2a6871a60e7a99d329704d0921d8ca984afa43f3934e9b47c0239: Status 404 returned error can't find the container with id 7df5ea6fb2c2a6871a60e7a99d329704d0921d8ca984afa43f3934e9b47c0239
Dec 09 12:26:19 crc kubenswrapper[4970]: I1209 12:26:19.178766 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-gq2j7" event={"ID":"3ea82bf6-58e0-4894-bd7b-34a965c37c23","Type":"ContainerStarted","Data":"7df5ea6fb2c2a6871a60e7a99d329704d0921d8ca984afa43f3934e9b47c0239"}
Dec 09 12:26:19 crc kubenswrapper[4970]: I1209 12:26:19.179667 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-bmn7r" event={"ID":"6e58f7b2-2023-45f5-af9e-dc4f225a30c8","Type":"ContainerStarted","Data":"3577f15685d272aa84f3a70ffeba5c210f60e7002a2a36b2d2cb871a523446fd"}
Dec 09 12:26:20 crc kubenswrapper[4970]: I1209 12:26:20.996058 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bmn7r"]
Dec 09 12:26:21 crc kubenswrapper[4970]: I1209 12:26:21.024034 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-mgxrw"]
Dec 09 12:26:21 crc kubenswrapper[4970]: I1209 12:26:21.030813 4970 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-mgxrw" Dec 09 12:26:21 crc kubenswrapper[4970]: I1209 12:26:21.053612 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-mgxrw"] Dec 09 12:26:21 crc kubenswrapper[4970]: I1209 12:26:21.089758 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af131622-dd81-4dc0-9ea8-101b02b2aad8-config\") pod \"dnsmasq-dns-666b6646f7-mgxrw\" (UID: \"af131622-dd81-4dc0-9ea8-101b02b2aad8\") " pod="openstack/dnsmasq-dns-666b6646f7-mgxrw" Dec 09 12:26:21 crc kubenswrapper[4970]: I1209 12:26:21.089877 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsgd5\" (UniqueName: \"kubernetes.io/projected/af131622-dd81-4dc0-9ea8-101b02b2aad8-kube-api-access-jsgd5\") pod \"dnsmasq-dns-666b6646f7-mgxrw\" (UID: \"af131622-dd81-4dc0-9ea8-101b02b2aad8\") " pod="openstack/dnsmasq-dns-666b6646f7-mgxrw" Dec 09 12:26:21 crc kubenswrapper[4970]: I1209 12:26:21.089911 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af131622-dd81-4dc0-9ea8-101b02b2aad8-dns-svc\") pod \"dnsmasq-dns-666b6646f7-mgxrw\" (UID: \"af131622-dd81-4dc0-9ea8-101b02b2aad8\") " pod="openstack/dnsmasq-dns-666b6646f7-mgxrw" Dec 09 12:26:21 crc kubenswrapper[4970]: I1209 12:26:21.190988 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsgd5\" (UniqueName: \"kubernetes.io/projected/af131622-dd81-4dc0-9ea8-101b02b2aad8-kube-api-access-jsgd5\") pod \"dnsmasq-dns-666b6646f7-mgxrw\" (UID: \"af131622-dd81-4dc0-9ea8-101b02b2aad8\") " pod="openstack/dnsmasq-dns-666b6646f7-mgxrw" Dec 09 12:26:21 crc kubenswrapper[4970]: I1209 12:26:21.191053 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af131622-dd81-4dc0-9ea8-101b02b2aad8-dns-svc\") pod \"dnsmasq-dns-666b6646f7-mgxrw\" (UID: \"af131622-dd81-4dc0-9ea8-101b02b2aad8\") " pod="openstack/dnsmasq-dns-666b6646f7-mgxrw" Dec 09 12:26:21 crc kubenswrapper[4970]: I1209 12:26:21.191141 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af131622-dd81-4dc0-9ea8-101b02b2aad8-config\") pod \"dnsmasq-dns-666b6646f7-mgxrw\" (UID: \"af131622-dd81-4dc0-9ea8-101b02b2aad8\") " pod="openstack/dnsmasq-dns-666b6646f7-mgxrw" Dec 09 12:26:21 crc kubenswrapper[4970]: I1209 12:26:21.192048 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af131622-dd81-4dc0-9ea8-101b02b2aad8-config\") pod \"dnsmasq-dns-666b6646f7-mgxrw\" (UID: \"af131622-dd81-4dc0-9ea8-101b02b2aad8\") " pod="openstack/dnsmasq-dns-666b6646f7-mgxrw" Dec 09 12:26:21 crc kubenswrapper[4970]: I1209 12:26:21.192717 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af131622-dd81-4dc0-9ea8-101b02b2aad8-dns-svc\") pod \"dnsmasq-dns-666b6646f7-mgxrw\" (UID: \"af131622-dd81-4dc0-9ea8-101b02b2aad8\") " pod="openstack/dnsmasq-dns-666b6646f7-mgxrw" Dec 09 12:26:21 crc kubenswrapper[4970]: I1209 12:26:21.226381 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsgd5\" (UniqueName: 
\"kubernetes.io/projected/af131622-dd81-4dc0-9ea8-101b02b2aad8-kube-api-access-jsgd5\") pod \"dnsmasq-dns-666b6646f7-mgxrw\" (UID: \"af131622-dd81-4dc0-9ea8-101b02b2aad8\") " pod="openstack/dnsmasq-dns-666b6646f7-mgxrw" Dec 09 12:26:21 crc kubenswrapper[4970]: I1209 12:26:21.340532 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-gq2j7"] Dec 09 12:26:21 crc kubenswrapper[4970]: I1209 12:26:21.378090 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-mgxrw" Dec 09 12:26:21 crc kubenswrapper[4970]: I1209 12:26:21.403866 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8hwcd"] Dec 09 12:26:21 crc kubenswrapper[4970]: I1209 12:26:21.406817 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8hwcd" Dec 09 12:26:21 crc kubenswrapper[4970]: I1209 12:26:21.423353 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8hwcd"] Dec 09 12:26:21 crc kubenswrapper[4970]: I1209 12:26:21.525134 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95bba560-2a79-402a-ade3-e284f9c2a6e6-config\") pod \"dnsmasq-dns-57d769cc4f-8hwcd\" (UID: \"95bba560-2a79-402a-ade3-e284f9c2a6e6\") " pod="openstack/dnsmasq-dns-57d769cc4f-8hwcd" Dec 09 12:26:21 crc kubenswrapper[4970]: I1209 12:26:21.525234 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n96g\" (UniqueName: \"kubernetes.io/projected/95bba560-2a79-402a-ade3-e284f9c2a6e6-kube-api-access-8n96g\") pod \"dnsmasq-dns-57d769cc4f-8hwcd\" (UID: \"95bba560-2a79-402a-ade3-e284f9c2a6e6\") " pod="openstack/dnsmasq-dns-57d769cc4f-8hwcd" Dec 09 12:26:21 crc kubenswrapper[4970]: I1209 12:26:21.525354 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95bba560-2a79-402a-ade3-e284f9c2a6e6-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-8hwcd\" (UID: \"95bba560-2a79-402a-ade3-e284f9c2a6e6\") " pod="openstack/dnsmasq-dns-57d769cc4f-8hwcd" Dec 09 12:26:21 crc kubenswrapper[4970]: I1209 12:26:21.629371 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95bba560-2a79-402a-ade3-e284f9c2a6e6-config\") pod \"dnsmasq-dns-57d769cc4f-8hwcd\" (UID: \"95bba560-2a79-402a-ade3-e284f9c2a6e6\") " pod="openstack/dnsmasq-dns-57d769cc4f-8hwcd" Dec 09 12:26:21 crc kubenswrapper[4970]: I1209 12:26:21.629751 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n96g\" (UniqueName: \"kubernetes.io/projected/95bba560-2a79-402a-ade3-e284f9c2a6e6-kube-api-access-8n96g\") pod \"dnsmasq-dns-57d769cc4f-8hwcd\" (UID: \"95bba560-2a79-402a-ade3-e284f9c2a6e6\") " pod="openstack/dnsmasq-dns-57d769cc4f-8hwcd" Dec 09 12:26:21 crc kubenswrapper[4970]: I1209 12:26:21.629789 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95bba560-2a79-402a-ade3-e284f9c2a6e6-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-8hwcd\" (UID: \"95bba560-2a79-402a-ade3-e284f9c2a6e6\") " pod="openstack/dnsmasq-dns-57d769cc4f-8hwcd" Dec 09 12:26:21 crc kubenswrapper[4970]: I1209 12:26:21.630215 4970 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95bba560-2a79-402a-ade3-e284f9c2a6e6-config\") pod \"dnsmasq-dns-57d769cc4f-8hwcd\" (UID: \"95bba560-2a79-402a-ade3-e284f9c2a6e6\") " pod="openstack/dnsmasq-dns-57d769cc4f-8hwcd" Dec 09 12:26:21 crc kubenswrapper[4970]: I1209 12:26:21.632438 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95bba560-2a79-402a-ade3-e284f9c2a6e6-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-8hwcd\" (UID: \"95bba560-2a79-402a-ade3-e284f9c2a6e6\") " pod="openstack/dnsmasq-dns-57d769cc4f-8hwcd" Dec 09 12:26:21 crc kubenswrapper[4970]: I1209 12:26:21.662322 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n96g\" (UniqueName: \"kubernetes.io/projected/95bba560-2a79-402a-ade3-e284f9c2a6e6-kube-api-access-8n96g\") pod \"dnsmasq-dns-57d769cc4f-8hwcd\" (UID: \"95bba560-2a79-402a-ade3-e284f9c2a6e6\") " pod="openstack/dnsmasq-dns-57d769cc4f-8hwcd" Dec 09 12:26:21 crc kubenswrapper[4970]: I1209 12:26:21.760032 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8hwcd" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.071781 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-mgxrw"] Dec 09 12:26:22 crc kubenswrapper[4970]: W1209 12:26:22.086715 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf131622_dd81_4dc0_9ea8_101b02b2aad8.slice/crio-bef670e2a3cd08de92d40478157bea999e7bc534471db6ba5deb1a0d90930ceb WatchSource:0}: Error finding container bef670e2a3cd08de92d40478157bea999e7bc534471db6ba5deb1a0d90930ceb: Status 404 returned error can't find the container with id bef670e2a3cd08de92d40478157bea999e7bc534471db6ba5deb1a0d90930ceb Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.152640 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.155138 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.158846 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-9msks" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.159111 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.159288 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.159545 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.159963 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.159963 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.162706 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.182689 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.217743 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-mgxrw" event={"ID":"af131622-dd81-4dc0-9ea8-101b02b2aad8","Type":"ContainerStarted","Data":"bef670e2a3cd08de92d40478157bea999e7bc534471db6ba5deb1a0d90930ceb"} Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.279386 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8hwcd"] Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.344001 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7b822b3c-bdfc-4766-b56f-14696c6b34a0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " pod="openstack/rabbitmq-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.344090 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7b822b3c-bdfc-4766-b56f-14696c6b34a0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " pod="openstack/rabbitmq-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.344127 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b822b3c-bdfc-4766-b56f-14696c6b34a0-config-data\") pod \"rabbitmq-server-0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " pod="openstack/rabbitmq-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.344160 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7b822b3c-bdfc-4766-b56f-14696c6b34a0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " pod="openstack/rabbitmq-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.344191 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " pod="openstack/rabbitmq-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.344218 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7b822b3c-bdfc-4766-b56f-14696c6b34a0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " pod="openstack/rabbitmq-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.344262 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7b822b3c-bdfc-4766-b56f-14696c6b34a0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " pod="openstack/rabbitmq-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.344346 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7b822b3c-bdfc-4766-b56f-14696c6b34a0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " pod="openstack/rabbitmq-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.344388 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7b822b3c-bdfc-4766-b56f-14696c6b34a0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " pod="openstack/rabbitmq-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.344429 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7b822b3c-bdfc-4766-b56f-14696c6b34a0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " pod="openstack/rabbitmq-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.344450 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ltf6\" (UniqueName: \"kubernetes.io/projected/7b822b3c-bdfc-4766-b56f-14696c6b34a0-kube-api-access-5ltf6\") pod \"rabbitmq-server-0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " pod="openstack/rabbitmq-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.446500 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7b822b3c-bdfc-4766-b56f-14696c6b34a0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " pod="openstack/rabbitmq-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.446587 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7b822b3c-bdfc-4766-b56f-14696c6b34a0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " pod="openstack/rabbitmq-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.446628 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7b822b3c-bdfc-4766-b56f-14696c6b34a0-server-conf\") pod 
\"rabbitmq-server-0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " pod="openstack/rabbitmq-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.446653 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ltf6\" (UniqueName: \"kubernetes.io/projected/7b822b3c-bdfc-4766-b56f-14696c6b34a0-kube-api-access-5ltf6\") pod \"rabbitmq-server-0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " pod="openstack/rabbitmq-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.446705 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7b822b3c-bdfc-4766-b56f-14696c6b34a0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " pod="openstack/rabbitmq-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.446753 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7b822b3c-bdfc-4766-b56f-14696c6b34a0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " pod="openstack/rabbitmq-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.446787 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b822b3c-bdfc-4766-b56f-14696c6b34a0-config-data\") pod \"rabbitmq-server-0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " pod="openstack/rabbitmq-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.446821 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7b822b3c-bdfc-4766-b56f-14696c6b34a0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " pod="openstack/rabbitmq-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.446865 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " pod="openstack/rabbitmq-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.446893 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7b822b3c-bdfc-4766-b56f-14696c6b34a0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " pod="openstack/rabbitmq-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.446920 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7b822b3c-bdfc-4766-b56f-14696c6b34a0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " pod="openstack/rabbitmq-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.447604 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7b822b3c-bdfc-4766-b56f-14696c6b34a0-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " pod="openstack/rabbitmq-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.447720 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7b822b3c-bdfc-4766-b56f-14696c6b34a0-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " pod="openstack/rabbitmq-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.447784 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.448190 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b822b3c-bdfc-4766-b56f-14696c6b34a0-config-data\") pod \"rabbitmq-server-0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " pod="openstack/rabbitmq-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.448210 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7b822b3c-bdfc-4766-b56f-14696c6b34a0-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " pod="openstack/rabbitmq-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.448664 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7b822b3c-bdfc-4766-b56f-14696c6b34a0-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " pod="openstack/rabbitmq-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.452942 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7b822b3c-bdfc-4766-b56f-14696c6b34a0-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " pod="openstack/rabbitmq-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.452958 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7b822b3c-bdfc-4766-b56f-14696c6b34a0-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " pod="openstack/rabbitmq-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.453003 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7b822b3c-bdfc-4766-b56f-14696c6b34a0-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " pod="openstack/rabbitmq-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.460238 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7b822b3c-bdfc-4766-b56f-14696c6b34a0-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " pod="openstack/rabbitmq-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.466699 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ltf6\" (UniqueName: \"kubernetes.io/projected/7b822b3c-bdfc-4766-b56f-14696c6b34a0-kube-api-access-5ltf6\") pod \"rabbitmq-server-0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " pod="openstack/rabbitmq-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.489241 4970 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.491356 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.494587 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.494869 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.495213 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.495384 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.500125 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.500160 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " pod="openstack/rabbitmq-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.500375 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-74gz6" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.502591 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.520912 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.650195 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.653790 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.653850 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg8fk\" (UniqueName: \"kubernetes.io/projected/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-kube-api-access-gg8fk\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.653898 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " 
pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.654155 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.654294 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.654460 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.654753 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.654944 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.655474 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.655890 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.757592 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.757970 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " 
pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.758004 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.758077 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.758109 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg8fk\" (UniqueName: \"kubernetes.io/projected/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-kube-api-access-gg8fk\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.758138 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.758186 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.758210 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.758264 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.758291 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.758339 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.758735 4970 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.758942 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.759141 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.759965 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.760878 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.761071 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.764613 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.764837 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.765866 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.765929 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.777594 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg8fk\" (UniqueName: \"kubernetes.io/projected/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-kube-api-access-gg8fk\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.781796 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.790535 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:26:22 crc kubenswrapper[4970]: I1209 12:26:22.879223 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:26:23 crc kubenswrapper[4970]: I1209 12:26:23.236147 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-8hwcd" event={"ID":"95bba560-2a79-402a-ade3-e284f9c2a6e6","Type":"ContainerStarted","Data":"8b958f237f3fc9e450065dd43dda198fb68a7719beae3cc5024898da6adeeda5"} Dec 09 12:26:23 crc kubenswrapper[4970]: I1209 12:26:23.287715 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 09 12:26:23 crc kubenswrapper[4970]: W1209 12:26:23.315868 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b822b3c_bdfc_4766_b56f_14696c6b34a0.slice/crio-bca94329e2451ccc8eb385167f51163925c3ca393fa1de9d60a5c5099bd3ba7b WatchSource:0}: Error finding container bca94329e2451ccc8eb385167f51163925c3ca393fa1de9d60a5c5099bd3ba7b: Status 404 returned error can't find the container with id bca94329e2451ccc8eb385167f51163925c3ca393fa1de9d60a5c5099bd3ba7b Dec 09 12:26:23 crc kubenswrapper[4970]: I1209 12:26:23.463321 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 12:26:24 crc kubenswrapper[4970]: I1209 12:26:24.134046 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 09 12:26:24 crc kubenswrapper[4970]: I1209 12:26:24.136856 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 09 12:26:24 crc kubenswrapper[4970]: I1209 12:26:24.140806 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 09 12:26:24 crc kubenswrapper[4970]: I1209 12:26:24.141701 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 09 12:26:24 crc kubenswrapper[4970]: I1209 12:26:24.143375 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-jnp9k" Dec 09 12:26:24 crc kubenswrapper[4970]: I1209 12:26:24.143444 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 09 12:26:24 crc kubenswrapper[4970]: I1209 12:26:24.148063 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 09 12:26:24 crc kubenswrapper[4970]: I1209 12:26:24.156089 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 09 12:26:24 crc kubenswrapper[4970]: I1209 12:26:24.292295 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/efb35edd-0684-4604-87bb-66e26970a864-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"efb35edd-0684-4604-87bb-66e26970a864\") " pod="openstack/openstack-galera-0" Dec 09 12:26:24 crc kubenswrapper[4970]: I1209 12:26:24.292375 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"efb35edd-0684-4604-87bb-66e26970a864\") " pod="openstack/openstack-galera-0" Dec 09 12:26:24 crc kubenswrapper[4970]: I1209 12:26:24.292413 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/efb35edd-0684-4604-87bb-66e26970a864-kolla-config\") pod \"openstack-galera-0\" (UID: \"efb35edd-0684-4604-87bb-66e26970a864\") " pod="openstack/openstack-galera-0" Dec 09 12:26:24 crc kubenswrapper[4970]: I1209 12:26:24.292477 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjgm4\" (UniqueName: \"kubernetes.io/projected/efb35edd-0684-4604-87bb-66e26970a864-kube-api-access-bjgm4\") pod \"openstack-galera-0\" (UID: \"efb35edd-0684-4604-87bb-66e26970a864\") " pod="openstack/openstack-galera-0" Dec 09 12:26:24 crc kubenswrapper[4970]: I1209 12:26:24.292511 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/efb35edd-0684-4604-87bb-66e26970a864-config-data-default\") pod \"openstack-galera-0\" (UID: \"efb35edd-0684-4604-87bb-66e26970a864\") " pod="openstack/openstack-galera-0" Dec 09 12:26:24 crc kubenswrapper[4970]: I1209 12:26:24.292537 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/efb35edd-0684-4604-87bb-66e26970a864-config-data-generated\") pod \"openstack-galera-0\" (UID: \"efb35edd-0684-4604-87bb-66e26970a864\") " pod="openstack/openstack-galera-0" Dec 09 12:26:24 crc kubenswrapper[4970]: I1209 12:26:24.292582 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efb35edd-0684-4604-87bb-66e26970a864-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"efb35edd-0684-4604-87bb-66e26970a864\") " pod="openstack/openstack-galera-0" Dec 09 12:26:24 crc kubenswrapper[4970]: I1209 12:26:24.292620 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efb35edd-0684-4604-87bb-66e26970a864-operator-scripts\") pod \"openstack-galera-0\" (UID: \"efb35edd-0684-4604-87bb-66e26970a864\") " pod="openstack/openstack-galera-0" Dec 09 12:26:24 crc kubenswrapper[4970]: I1209 12:26:24.328452 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7b822b3c-bdfc-4766-b56f-14696c6b34a0","Type":"ContainerStarted","Data":"bca94329e2451ccc8eb385167f51163925c3ca393fa1de9d60a5c5099bd3ba7b"} Dec 09 12:26:24 crc kubenswrapper[4970]: I1209 12:26:24.378188 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cd722f79-8e7d-46eb-b8e2-6da28c0dead2","Type":"ContainerStarted","Data":"7020f076109b3e8b3e6525521f3d06d751f93db47ac32497216670f665fce689"} Dec 09 12:26:24 crc kubenswrapper[4970]: I1209 12:26:24.394479 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjgm4\" (UniqueName: \"kubernetes.io/projected/efb35edd-0684-4604-87bb-66e26970a864-kube-api-access-bjgm4\") pod \"openstack-galera-0\" (UID: \"efb35edd-0684-4604-87bb-66e26970a864\") " pod="openstack/openstack-galera-0" Dec 09 12:26:24 crc kubenswrapper[4970]: I1209 12:26:24.394563 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/efb35edd-0684-4604-87bb-66e26970a864-config-data-default\") pod \"openstack-galera-0\" (UID: \"efb35edd-0684-4604-87bb-66e26970a864\") " pod="openstack/openstack-galera-0" Dec 09 12:26:24 crc kubenswrapper[4970]: I1209 12:26:24.394594 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/efb35edd-0684-4604-87bb-66e26970a864-config-data-generated\") pod \"openstack-galera-0\" (UID: \"efb35edd-0684-4604-87bb-66e26970a864\") " pod="openstack/openstack-galera-0" Dec 09 12:26:24 crc kubenswrapper[4970]: I1209 12:26:24.394659 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efb35edd-0684-4604-87bb-66e26970a864-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"efb35edd-0684-4604-87bb-66e26970a864\") " pod="openstack/openstack-galera-0" Dec 09 12:26:24 crc kubenswrapper[4970]: I1209 12:26:24.394714 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efb35edd-0684-4604-87bb-66e26970a864-operator-scripts\") pod \"openstack-galera-0\" (UID: \"efb35edd-0684-4604-87bb-66e26970a864\") " pod="openstack/openstack-galera-0" Dec 09 12:26:24 crc kubenswrapper[4970]: I1209 12:26:24.394759 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/efb35edd-0684-4604-87bb-66e26970a864-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"efb35edd-0684-4604-87bb-66e26970a864\") " pod="openstack/openstack-galera-0" Dec 09 12:26:24 crc kubenswrapper[4970]: I1209 
12:26:24.394789 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"efb35edd-0684-4604-87bb-66e26970a864\") " pod="openstack/openstack-galera-0" Dec 09 12:26:24 crc kubenswrapper[4970]: I1209 12:26:24.394823 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/efb35edd-0684-4604-87bb-66e26970a864-kolla-config\") pod \"openstack-galera-0\" (UID: \"efb35edd-0684-4604-87bb-66e26970a864\") " pod="openstack/openstack-galera-0" Dec 09 12:26:24 crc kubenswrapper[4970]: I1209 12:26:24.395942 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/efb35edd-0684-4604-87bb-66e26970a864-kolla-config\") pod \"openstack-galera-0\" (UID: \"efb35edd-0684-4604-87bb-66e26970a864\") " pod="openstack/openstack-galera-0" Dec 09 12:26:24 crc kubenswrapper[4970]: I1209 12:26:24.397025 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/efb35edd-0684-4604-87bb-66e26970a864-config-data-default\") pod \"openstack-galera-0\" (UID: \"efb35edd-0684-4604-87bb-66e26970a864\") " pod="openstack/openstack-galera-0" Dec 09 12:26:24 crc kubenswrapper[4970]: I1209 12:26:24.399020 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/efb35edd-0684-4604-87bb-66e26970a864-config-data-generated\") pod \"openstack-galera-0\" (UID: \"efb35edd-0684-4604-87bb-66e26970a864\") " pod="openstack/openstack-galera-0" Dec 09 12:26:24 crc kubenswrapper[4970]: I1209 12:26:24.399508 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"efb35edd-0684-4604-87bb-66e26970a864\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-galera-0" Dec 09 12:26:24 crc kubenswrapper[4970]: I1209 12:26:24.400647 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/efb35edd-0684-4604-87bb-66e26970a864-operator-scripts\") pod \"openstack-galera-0\" (UID: \"efb35edd-0684-4604-87bb-66e26970a864\") " pod="openstack/openstack-galera-0" Dec 09 12:26:24 crc kubenswrapper[4970]: I1209 12:26:24.427090 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/efb35edd-0684-4604-87bb-66e26970a864-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"efb35edd-0684-4604-87bb-66e26970a864\") " pod="openstack/openstack-galera-0" Dec 09 12:26:24 crc kubenswrapper[4970]: I1209 12:26:24.481319 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjgm4\" (UniqueName: \"kubernetes.io/projected/efb35edd-0684-4604-87bb-66e26970a864-kube-api-access-bjgm4\") pod \"openstack-galera-0\" (UID: \"efb35edd-0684-4604-87bb-66e26970a864\") " pod="openstack/openstack-galera-0" Dec 09 12:26:24 crc kubenswrapper[4970]: I1209 12:26:24.487556 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efb35edd-0684-4604-87bb-66e26970a864-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: 
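
[editor's note] MountVolume.MountDevice resolving local-storage10-crc to /mnt/openstack/pv10 is the local-volume plugin mapping a pre-provisioned PersistentVolume to its host path on the crc node. A sketch of the kind of PV that produces this behavior; the shape is reconstructed from the log, not read from the cluster, and the size is invented:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	fsMode := corev1.PersistentVolumeFilesystem
	pv := corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "local-storage10-crc"},
		Spec: corev1.PersistentVolumeSpec{
			Capacity: corev1.ResourceList{
				corev1.ResourceStorage: resource.MustParse("10Gi"), // assumed size
			},
			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			VolumeMode:  &fsMode,
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				// The path the kubelet logged as "device mount path".
				Local: &corev1.LocalVolumeSource{Path: "/mnt/openstack/pv10"},
			},
			// Local PVs must be pinned to the node that owns the path;
			// "crc" matches the host field in the journal.
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{"crc"},
						}},
					}},
				},
			},
		},
	}
	fmt.Println(pv.Name, pv.Spec.Local.Path)
}
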
\"efb35edd-0684-4604-87bb-66e26970a864\") " pod="openstack/openstack-galera-0" Dec 09 12:26:24 crc kubenswrapper[4970]: I1209 12:26:24.503443 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"efb35edd-0684-4604-87bb-66e26970a864\") " pod="openstack/openstack-galera-0" Dec 09 12:26:24 crc kubenswrapper[4970]: I1209 12:26:24.799703 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.504667 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.508643 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.512698 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.512743 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.512896 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-9r7lt" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.513756 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.526012 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.638365 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d67f963f-36c2-4056-8b35-5a08e547ba33-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d67f963f-36c2-4056-8b35-5a08e547ba33\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.638459 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d67f963f-36c2-4056-8b35-5a08e547ba33-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d67f963f-36c2-4056-8b35-5a08e547ba33\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.638571 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d67f963f-36c2-4056-8b35-5a08e547ba33-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d67f963f-36c2-4056-8b35-5a08e547ba33\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.638601 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d67f963f-36c2-4056-8b35-5a08e547ba33\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.638638 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d67f963f-36c2-4056-8b35-5a08e547ba33-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d67f963f-36c2-4056-8b35-5a08e547ba33\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.638666 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldnq7\" (UniqueName: \"kubernetes.io/projected/d67f963f-36c2-4056-8b35-5a08e547ba33-kube-api-access-ldnq7\") pod \"openstack-cell1-galera-0\" (UID: \"d67f963f-36c2-4056-8b35-5a08e547ba33\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.638696 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d67f963f-36c2-4056-8b35-5a08e547ba33-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d67f963f-36c2-4056-8b35-5a08e547ba33\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.638734 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d67f963f-36c2-4056-8b35-5a08e547ba33-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d67f963f-36c2-4056-8b35-5a08e547ba33\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.659570 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.660979 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.663398 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.663560 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-bndp2" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.678758 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.678923 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.741377 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d67f963f-36c2-4056-8b35-5a08e547ba33-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d67f963f-36c2-4056-8b35-5a08e547ba33\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.741484 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bbfa0031-fd11-45da-a991-36ef550cf64c-config-data\") pod \"memcached-0\" (UID: \"bbfa0031-fd11-45da-a991-36ef550cf64c\") " pod="openstack/memcached-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.741523 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbfa0031-fd11-45da-a991-36ef550cf64c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"bbfa0031-fd11-45da-a991-36ef550cf64c\") " 
pod="openstack/memcached-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.741539 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bbfa0031-fd11-45da-a991-36ef550cf64c-kolla-config\") pod \"memcached-0\" (UID: \"bbfa0031-fd11-45da-a991-36ef550cf64c\") " pod="openstack/memcached-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.741565 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbfa0031-fd11-45da-a991-36ef550cf64c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"bbfa0031-fd11-45da-a991-36ef550cf64c\") " pod="openstack/memcached-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.741587 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d67f963f-36c2-4056-8b35-5a08e547ba33-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d67f963f-36c2-4056-8b35-5a08e547ba33\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.741607 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d67f963f-36c2-4056-8b35-5a08e547ba33-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d67f963f-36c2-4056-8b35-5a08e547ba33\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.741672 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d67f963f-36c2-4056-8b35-5a08e547ba33-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d67f963f-36c2-4056-8b35-5a08e547ba33\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.741694 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d67f963f-36c2-4056-8b35-5a08e547ba33\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.741718 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d67f963f-36c2-4056-8b35-5a08e547ba33-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d67f963f-36c2-4056-8b35-5a08e547ba33\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.741737 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wql8s\" (UniqueName: \"kubernetes.io/projected/bbfa0031-fd11-45da-a991-36ef550cf64c-kube-api-access-wql8s\") pod \"memcached-0\" (UID: \"bbfa0031-fd11-45da-a991-36ef550cf64c\") " pod="openstack/memcached-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.741753 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldnq7\" (UniqueName: \"kubernetes.io/projected/d67f963f-36c2-4056-8b35-5a08e547ba33-kube-api-access-ldnq7\") pod \"openstack-cell1-galera-0\" (UID: \"d67f963f-36c2-4056-8b35-5a08e547ba33\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.741774 4970 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d67f963f-36c2-4056-8b35-5a08e547ba33-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d67f963f-36c2-4056-8b35-5a08e547ba33\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.743391 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d67f963f-36c2-4056-8b35-5a08e547ba33-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d67f963f-36c2-4056-8b35-5a08e547ba33\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.745313 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d67f963f-36c2-4056-8b35-5a08e547ba33-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d67f963f-36c2-4056-8b35-5a08e547ba33\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.745525 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d67f963f-36c2-4056-8b35-5a08e547ba33\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/openstack-cell1-galera-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.746866 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d67f963f-36c2-4056-8b35-5a08e547ba33-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d67f963f-36c2-4056-8b35-5a08e547ba33\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.747008 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d67f963f-36c2-4056-8b35-5a08e547ba33-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d67f963f-36c2-4056-8b35-5a08e547ba33\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.749359 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d67f963f-36c2-4056-8b35-5a08e547ba33-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d67f963f-36c2-4056-8b35-5a08e547ba33\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.753284 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d67f963f-36c2-4056-8b35-5a08e547ba33-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d67f963f-36c2-4056-8b35-5a08e547ba33\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.768401 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldnq7\" (UniqueName: \"kubernetes.io/projected/d67f963f-36c2-4056-8b35-5a08e547ba33-kube-api-access-ldnq7\") pod \"openstack-cell1-galera-0\" (UID: \"d67f963f-36c2-4056-8b35-5a08e547ba33\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.782121 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"d67f963f-36c2-4056-8b35-5a08e547ba33\") " pod="openstack/openstack-cell1-galera-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.843338 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bbfa0031-fd11-45da-a991-36ef550cf64c-config-data\") pod \"memcached-0\" (UID: \"bbfa0031-fd11-45da-a991-36ef550cf64c\") " pod="openstack/memcached-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.849361 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbfa0031-fd11-45da-a991-36ef550cf64c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"bbfa0031-fd11-45da-a991-36ef550cf64c\") " pod="openstack/memcached-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.849487 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bbfa0031-fd11-45da-a991-36ef550cf64c-kolla-config\") pod \"memcached-0\" (UID: \"bbfa0031-fd11-45da-a991-36ef550cf64c\") " pod="openstack/memcached-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.849556 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbfa0031-fd11-45da-a991-36ef550cf64c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"bbfa0031-fd11-45da-a991-36ef550cf64c\") " pod="openstack/memcached-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.849850 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wql8s\" (UniqueName: \"kubernetes.io/projected/bbfa0031-fd11-45da-a991-36ef550cf64c-kube-api-access-wql8s\") pod \"memcached-0\" (UID: \"bbfa0031-fd11-45da-a991-36ef550cf64c\") " pod="openstack/memcached-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.851586 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/bbfa0031-fd11-45da-a991-36ef550cf64c-kolla-config\") pod \"memcached-0\" (UID: \"bbfa0031-fd11-45da-a991-36ef550cf64c\") " pod="openstack/memcached-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.854297 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.854934 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbfa0031-fd11-45da-a991-36ef550cf64c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"bbfa0031-fd11-45da-a991-36ef550cf64c\") " pod="openstack/memcached-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.855173 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/bbfa0031-fd11-45da-a991-36ef550cf64c-config-data\") pod \"memcached-0\" (UID: \"bbfa0031-fd11-45da-a991-36ef550cf64c\") " pod="openstack/memcached-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.858471 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbfa0031-fd11-45da-a991-36ef550cf64c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"bbfa0031-fd11-45da-a991-36ef550cf64c\") " pod="openstack/memcached-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.874854 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wql8s\" (UniqueName: \"kubernetes.io/projected/bbfa0031-fd11-45da-a991-36ef550cf64c-kube-api-access-wql8s\") pod \"memcached-0\" (UID: \"bbfa0031-fd11-45da-a991-36ef550cf64c\") " pod="openstack/memcached-0" Dec 09 12:26:25 crc kubenswrapper[4970]: I1209 12:26:25.986490 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 09 12:26:27 crc kubenswrapper[4970]: I1209 12:26:27.751777 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 12:26:27 crc kubenswrapper[4970]: I1209 12:26:27.753362 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 09 12:26:27 crc kubenswrapper[4970]: I1209 12:26:27.756828 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-sqdxj" Dec 09 12:26:27 crc kubenswrapper[4970]: I1209 12:26:27.776315 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 12:26:27 crc kubenswrapper[4970]: I1209 12:26:27.905650 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpxhw\" (UniqueName: \"kubernetes.io/projected/80219567-cf9b-45cf-9e69-21c871e190dc-kube-api-access-zpxhw\") pod \"kube-state-metrics-0\" (UID: \"80219567-cf9b-45cf-9e69-21c871e190dc\") " pod="openstack/kube-state-metrics-0" Dec 09 12:26:28 crc kubenswrapper[4970]: I1209 12:26:28.008011 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpxhw\" (UniqueName: \"kubernetes.io/projected/80219567-cf9b-45cf-9e69-21c871e190dc-kube-api-access-zpxhw\") pod \"kube-state-metrics-0\" (UID: \"80219567-cf9b-45cf-9e69-21c871e190dc\") " pod="openstack/kube-state-metrics-0" Dec 09 12:26:28 crc kubenswrapper[4970]: I1209 12:26:28.079612 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpxhw\" (UniqueName: \"kubernetes.io/projected/80219567-cf9b-45cf-9e69-21c871e190dc-kube-api-access-zpxhw\") pod \"kube-state-metrics-0\" (UID: \"80219567-cf9b-45cf-9e69-21c871e190dc\") " pod="openstack/kube-state-metrics-0" Dec 09 12:26:28 crc kubenswrapper[4970]: I1209 12:26:28.091423 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-sqdxj" Dec 09 12:26:28 crc kubenswrapper[4970]: I1209 12:26:28.108348 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 09 12:26:28 crc kubenswrapper[4970]: I1209 12:26:28.531056 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-826l7"] Dec 09 12:26:28 crc kubenswrapper[4970]: I1209 12:26:28.533427 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-826l7" Dec 09 12:26:28 crc kubenswrapper[4970]: I1209 12:26:28.535831 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Dec 09 12:26:28 crc kubenswrapper[4970]: I1209 12:26:28.540996 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-8mdvf" Dec 09 12:26:28 crc kubenswrapper[4970]: I1209 12:26:28.579612 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-826l7"] Dec 09 12:26:28 crc kubenswrapper[4970]: I1209 12:26:28.619863 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/582b6a7e-cfed-498e-af7e-f93ffe3ad4bd-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-826l7\" (UID: \"582b6a7e-cfed-498e-af7e-f93ffe3ad4bd\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-826l7" Dec 09 12:26:28 crc kubenswrapper[4970]: I1209 12:26:28.620348 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzg8g\" (UniqueName: \"kubernetes.io/projected/582b6a7e-cfed-498e-af7e-f93ffe3ad4bd-kube-api-access-wzg8g\") pod \"observability-ui-dashboards-7d5fb4cbfb-826l7\" (UID: \"582b6a7e-cfed-498e-af7e-f93ffe3ad4bd\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-826l7" Dec 09 12:26:28 crc kubenswrapper[4970]: I1209 12:26:28.722601 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzg8g\" (UniqueName: \"kubernetes.io/projected/582b6a7e-cfed-498e-af7e-f93ffe3ad4bd-kube-api-access-wzg8g\") pod \"observability-ui-dashboards-7d5fb4cbfb-826l7\" (UID: \"582b6a7e-cfed-498e-af7e-f93ffe3ad4bd\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-826l7" Dec 09 12:26:28 crc kubenswrapper[4970]: I1209 12:26:28.722672 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/582b6a7e-cfed-498e-af7e-f93ffe3ad4bd-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-826l7\" (UID: \"582b6a7e-cfed-498e-af7e-f93ffe3ad4bd\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-826l7" Dec 09 12:26:28 crc kubenswrapper[4970]: I1209 12:26:28.729004 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/582b6a7e-cfed-498e-af7e-f93ffe3ad4bd-serving-cert\") pod \"observability-ui-dashboards-7d5fb4cbfb-826l7\" (UID: \"582b6a7e-cfed-498e-af7e-f93ffe3ad4bd\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-826l7" Dec 09 12:26:28 crc kubenswrapper[4970]: I1209 12:26:28.744002 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzg8g\" (UniqueName: \"kubernetes.io/projected/582b6a7e-cfed-498e-af7e-f93ffe3ad4bd-kube-api-access-wzg8g\") pod \"observability-ui-dashboards-7d5fb4cbfb-826l7\" (UID: \"582b6a7e-cfed-498e-af7e-f93ffe3ad4bd\") " pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-826l7" Dec 09 12:26:28 crc kubenswrapper[4970]: I1209 12:26:28.862149 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-826l7" Dec 09 12:26:28 crc kubenswrapper[4970]: I1209 12:26:28.872411 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6cfd6d645d-g58lf"] Dec 09 12:26:28 crc kubenswrapper[4970]: I1209 12:26:28.873815 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6cfd6d645d-g58lf" Dec 09 12:26:28 crc kubenswrapper[4970]: I1209 12:26:28.899599 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6cfd6d645d-g58lf"] Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.029107 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6eadead4-f607-480a-a9ee-506be45e6a72-trusted-ca-bundle\") pod \"console-6cfd6d645d-g58lf\" (UID: \"6eadead4-f607-480a-a9ee-506be45e6a72\") " pod="openshift-console/console-6cfd6d645d-g58lf" Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.029171 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6eadead4-f607-480a-a9ee-506be45e6a72-console-serving-cert\") pod \"console-6cfd6d645d-g58lf\" (UID: \"6eadead4-f607-480a-a9ee-506be45e6a72\") " pod="openshift-console/console-6cfd6d645d-g58lf" Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.029200 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6eadead4-f607-480a-a9ee-506be45e6a72-console-config\") pod \"console-6cfd6d645d-g58lf\" (UID: \"6eadead4-f607-480a-a9ee-506be45e6a72\") " pod="openshift-console/console-6cfd6d645d-g58lf" Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.029225 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6eadead4-f607-480a-a9ee-506be45e6a72-console-oauth-config\") pod \"console-6cfd6d645d-g58lf\" (UID: \"6eadead4-f607-480a-a9ee-506be45e6a72\") " pod="openshift-console/console-6cfd6d645d-g58lf" Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.029321 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6eadead4-f607-480a-a9ee-506be45e6a72-service-ca\") pod \"console-6cfd6d645d-g58lf\" (UID: \"6eadead4-f607-480a-a9ee-506be45e6a72\") " pod="openshift-console/console-6cfd6d645d-g58lf" Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.029343 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6eadead4-f607-480a-a9ee-506be45e6a72-oauth-serving-cert\") pod \"console-6cfd6d645d-g58lf\" (UID: \"6eadead4-f607-480a-a9ee-506be45e6a72\") " pod="openshift-console/console-6cfd6d645d-g58lf" Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.029377 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg7mc\" (UniqueName: \"kubernetes.io/projected/6eadead4-f607-480a-a9ee-506be45e6a72-kube-api-access-dg7mc\") pod \"console-6cfd6d645d-g58lf\" (UID: \"6eadead4-f607-480a-a9ee-506be45e6a72\") " pod="openshift-console/console-6cfd6d645d-g58lf" Dec 09 12:26:29 crc kubenswrapper[4970]: 
Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.069296 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.073209 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.076904 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.077118 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-fghm5"
Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.077363 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.077420 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.077516 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.086352 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.092575 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.131176 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/bb3e4622-df1b-4d70-8683-8672cebf6666-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"bb3e4622-df1b-4d70-8683-8672cebf6666\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.131238 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6eadead4-f607-480a-a9ee-506be45e6a72-trusted-ca-bundle\") pod \"console-6cfd6d645d-g58lf\" (UID: \"6eadead4-f607-480a-a9ee-506be45e6a72\") " pod="openshift-console/console-6cfd6d645d-g58lf"
Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.131281 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bb3e4622-df1b-4d70-8683-8672cebf6666-config\") pod \"prometheus-metric-storage-0\" (UID: \"bb3e4622-df1b-4d70-8683-8672cebf6666\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.131304 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj6zc\" (UniqueName: \"kubernetes.io/projected/bb3e4622-df1b-4d70-8683-8672cebf6666-kube-api-access-jj6zc\") pod \"prometheus-metric-storage-0\" (UID: \"bb3e4622-df1b-4d70-8683-8672cebf6666\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.131326 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6eadead4-f607-480a-a9ee-506be45e6a72-console-serving-cert\") pod \"console-6cfd6d645d-g58lf\" (UID: \"6eadead4-f607-480a-a9ee-506be45e6a72\") " pod="openshift-console/console-6cfd6d645d-g58lf"
Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.131442 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6eadead4-f607-480a-a9ee-506be45e6a72-console-config\") pod \"console-6cfd6d645d-g58lf\" (UID: \"6eadead4-f607-480a-a9ee-506be45e6a72\") " pod="openshift-console/console-6cfd6d645d-g58lf"
Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.131519 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6eadead4-f607-480a-a9ee-506be45e6a72-console-oauth-config\") pod \"console-6cfd6d645d-g58lf\" (UID: \"6eadead4-f607-480a-a9ee-506be45e6a72\") " pod="openshift-console/console-6cfd6d645d-g58lf"
Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.131549 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/bb3e4622-df1b-4d70-8683-8672cebf6666-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"bb3e4622-df1b-4d70-8683-8672cebf6666\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.131581 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/bb3e4622-df1b-4d70-8683-8672cebf6666-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"bb3e4622-df1b-4d70-8683-8672cebf6666\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.131662 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6eadead4-f607-480a-a9ee-506be45e6a72-service-ca\") pod \"console-6cfd6d645d-g58lf\" (UID: \"6eadead4-f607-480a-a9ee-506be45e6a72\") " pod="openshift-console/console-6cfd6d645d-g58lf"
Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.131713 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6eadead4-f607-480a-a9ee-506be45e6a72-oauth-serving-cert\") pod \"console-6cfd6d645d-g58lf\" (UID: \"6eadead4-f607-480a-a9ee-506be45e6a72\") " pod="openshift-console/console-6cfd6d645d-g58lf"
Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.131783 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/bb3e4622-df1b-4d70-8683-8672cebf6666-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"bb3e4622-df1b-4d70-8683-8672cebf6666\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.131847 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"prometheus-metric-storage-0\" (UID: \"bb3e4622-df1b-4d70-8683-8672cebf6666\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.131890 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg7mc\" (UniqueName: \"kubernetes.io/projected/6eadead4-f607-480a-a9ee-506be45e6a72-kube-api-access-dg7mc\") pod \"console-6cfd6d645d-g58lf\" (UID: \"6eadead4-f607-480a-a9ee-506be45e6a72\") " pod="openshift-console/console-6cfd6d645d-g58lf"
Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.131933 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/bb3e4622-df1b-4d70-8683-8672cebf6666-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"bb3e4622-df1b-4d70-8683-8672cebf6666\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.132389 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6eadead4-f607-480a-a9ee-506be45e6a72-console-config\") pod \"console-6cfd6d645d-g58lf\" (UID: \"6eadead4-f607-480a-a9ee-506be45e6a72\") " pod="openshift-console/console-6cfd6d645d-g58lf"
Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.132778 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6eadead4-f607-480a-a9ee-506be45e6a72-oauth-serving-cert\") pod \"console-6cfd6d645d-g58lf\" (UID: \"6eadead4-f607-480a-a9ee-506be45e6a72\") " pod="openshift-console/console-6cfd6d645d-g58lf"
Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.132844 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6eadead4-f607-480a-a9ee-506be45e6a72-service-ca\") pod \"console-6cfd6d645d-g58lf\" (UID: \"6eadead4-f607-480a-a9ee-506be45e6a72\") " pod="openshift-console/console-6cfd6d645d-g58lf"
Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.135109 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6eadead4-f607-480a-a9ee-506be45e6a72-trusted-ca-bundle\") pod \"console-6cfd6d645d-g58lf\" (UID: \"6eadead4-f607-480a-a9ee-506be45e6a72\") " pod="openshift-console/console-6cfd6d645d-g58lf"
Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.137890 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6eadead4-f607-480a-a9ee-506be45e6a72-console-oauth-config\") pod \"console-6cfd6d645d-g58lf\" (UID: \"6eadead4-f607-480a-a9ee-506be45e6a72\") " pod="openshift-console/console-6cfd6d645d-g58lf"
Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.143341 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6eadead4-f607-480a-a9ee-506be45e6a72-console-serving-cert\") pod \"console-6cfd6d645d-g58lf\" (UID: \"6eadead4-f607-480a-a9ee-506be45e6a72\") " pod="openshift-console/console-6cfd6d645d-g58lf"
Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.157538 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg7mc\" (UniqueName: \"kubernetes.io/projected/6eadead4-f607-480a-a9ee-506be45e6a72-kube-api-access-dg7mc\") pod \"console-6cfd6d645d-g58lf\" (UID: \"6eadead4-f607-480a-a9ee-506be45e6a72\") " pod="openshift-console/console-6cfd6d645d-g58lf"
Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.211752 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6cfd6d645d-g58lf"
Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.234616 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/bb3e4622-df1b-4d70-8683-8672cebf6666-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"bb3e4622-df1b-4d70-8683-8672cebf6666\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.234724 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"prometheus-metric-storage-0\" (UID: \"bb3e4622-df1b-4d70-8683-8672cebf6666\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.234787 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/bb3e4622-df1b-4d70-8683-8672cebf6666-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"bb3e4622-df1b-4d70-8683-8672cebf6666\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.235305 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"prometheus-metric-storage-0\" (UID: \"bb3e4622-df1b-4d70-8683-8672cebf6666\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.235839 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/bb3e4622-df1b-4d70-8683-8672cebf6666-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"bb3e4622-df1b-4d70-8683-8672cebf6666\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.236032 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bb3e4622-df1b-4d70-8683-8672cebf6666-config\") pod \"prometheus-metric-storage-0\" (UID: \"bb3e4622-df1b-4d70-8683-8672cebf6666\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.236088 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj6zc\" (UniqueName: \"kubernetes.io/projected/bb3e4622-df1b-4d70-8683-8672cebf6666-kube-api-access-jj6zc\") pod \"prometheus-metric-storage-0\" (UID: \"bb3e4622-df1b-4d70-8683-8672cebf6666\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.236195 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/bb3e4622-df1b-4d70-8683-8672cebf6666-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"bb3e4622-df1b-4d70-8683-8672cebf6666\") " pod="openstack/prometheus-metric-storage-0"
Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.236266 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/bb3e4622-df1b-4d70-8683-8672cebf6666-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"bb3e4622-df1b-4d70-8683-8672cebf6666\") "
pod="openstack/prometheus-metric-storage-0" Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.237352 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/bb3e4622-df1b-4d70-8683-8672cebf6666-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"bb3e4622-df1b-4d70-8683-8672cebf6666\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.238519 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/bb3e4622-df1b-4d70-8683-8672cebf6666-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"bb3e4622-df1b-4d70-8683-8672cebf6666\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.252910 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/bb3e4622-df1b-4d70-8683-8672cebf6666-config\") pod \"prometheus-metric-storage-0\" (UID: \"bb3e4622-df1b-4d70-8683-8672cebf6666\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.254032 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/bb3e4622-df1b-4d70-8683-8672cebf6666-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"bb3e4622-df1b-4d70-8683-8672cebf6666\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.255812 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj6zc\" (UniqueName: \"kubernetes.io/projected/bb3e4622-df1b-4d70-8683-8672cebf6666-kube-api-access-jj6zc\") pod \"prometheus-metric-storage-0\" (UID: \"bb3e4622-df1b-4d70-8683-8672cebf6666\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.258234 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/bb3e4622-df1b-4d70-8683-8672cebf6666-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"bb3e4622-df1b-4d70-8683-8672cebf6666\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.261465 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/bb3e4622-df1b-4d70-8683-8672cebf6666-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"bb3e4622-df1b-4d70-8683-8672cebf6666\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.300370 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"prometheus-metric-storage-0\" (UID: \"bb3e4622-df1b-4d70-8683-8672cebf6666\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:26:29 crc kubenswrapper[4970]: I1209 12:26:29.519218 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.006361 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-hqfv8"] Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.009405 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-hqfv8" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.013168 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-v77pc" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.016557 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.016615 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.017813 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hqfv8"] Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.043070 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-d9vtq"] Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.046563 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-d9vtq" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.065515 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-d9vtq"] Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.090923 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqsxx\" (UniqueName: \"kubernetes.io/projected/c06ee73b-4168-4ef3-b268-db5e976febbf-kube-api-access-cqsxx\") pod \"ovn-controller-hqfv8\" (UID: \"c06ee73b-4168-4ef3-b268-db5e976febbf\") " pod="openstack/ovn-controller-hqfv8" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.091102 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c06ee73b-4168-4ef3-b268-db5e976febbf-ovn-controller-tls-certs\") pod \"ovn-controller-hqfv8\" (UID: \"c06ee73b-4168-4ef3-b268-db5e976febbf\") " pod="openstack/ovn-controller-hqfv8" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.091173 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsdtr\" (UniqueName: \"kubernetes.io/projected/30b05330-faf4-44e1-afee-1c750e234a37-kube-api-access-jsdtr\") pod \"ovn-controller-ovs-d9vtq\" (UID: \"30b05330-faf4-44e1-afee-1c750e234a37\") " pod="openstack/ovn-controller-ovs-d9vtq" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.091218 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c06ee73b-4168-4ef3-b268-db5e976febbf-var-run-ovn\") pod \"ovn-controller-hqfv8\" (UID: \"c06ee73b-4168-4ef3-b268-db5e976febbf\") " pod="openstack/ovn-controller-hqfv8" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.091358 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30b05330-faf4-44e1-afee-1c750e234a37-scripts\") pod \"ovn-controller-ovs-d9vtq\" (UID: \"30b05330-faf4-44e1-afee-1c750e234a37\") " pod="openstack/ovn-controller-ovs-d9vtq" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.091453 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/30b05330-faf4-44e1-afee-1c750e234a37-var-log\") pod \"ovn-controller-ovs-d9vtq\" (UID: 
\"30b05330-faf4-44e1-afee-1c750e234a37\") " pod="openstack/ovn-controller-ovs-d9vtq" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.091509 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c06ee73b-4168-4ef3-b268-db5e976febbf-scripts\") pod \"ovn-controller-hqfv8\" (UID: \"c06ee73b-4168-4ef3-b268-db5e976febbf\") " pod="openstack/ovn-controller-hqfv8" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.091583 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/30b05330-faf4-44e1-afee-1c750e234a37-var-run\") pod \"ovn-controller-ovs-d9vtq\" (UID: \"30b05330-faf4-44e1-afee-1c750e234a37\") " pod="openstack/ovn-controller-ovs-d9vtq" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.091624 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/30b05330-faf4-44e1-afee-1c750e234a37-etc-ovs\") pod \"ovn-controller-ovs-d9vtq\" (UID: \"30b05330-faf4-44e1-afee-1c750e234a37\") " pod="openstack/ovn-controller-ovs-d9vtq" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.091873 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c06ee73b-4168-4ef3-b268-db5e976febbf-combined-ca-bundle\") pod \"ovn-controller-hqfv8\" (UID: \"c06ee73b-4168-4ef3-b268-db5e976febbf\") " pod="openstack/ovn-controller-hqfv8" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.092001 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/30b05330-faf4-44e1-afee-1c750e234a37-var-lib\") pod \"ovn-controller-ovs-d9vtq\" (UID: \"30b05330-faf4-44e1-afee-1c750e234a37\") " pod="openstack/ovn-controller-ovs-d9vtq" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.092116 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c06ee73b-4168-4ef3-b268-db5e976febbf-var-run\") pod \"ovn-controller-hqfv8\" (UID: \"c06ee73b-4168-4ef3-b268-db5e976febbf\") " pod="openstack/ovn-controller-hqfv8" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.092313 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c06ee73b-4168-4ef3-b268-db5e976febbf-var-log-ovn\") pod \"ovn-controller-hqfv8\" (UID: \"c06ee73b-4168-4ef3-b268-db5e976febbf\") " pod="openstack/ovn-controller-hqfv8" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.194480 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c06ee73b-4168-4ef3-b268-db5e976febbf-scripts\") pod \"ovn-controller-hqfv8\" (UID: \"c06ee73b-4168-4ef3-b268-db5e976febbf\") " pod="openstack/ovn-controller-hqfv8" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.194558 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/30b05330-faf4-44e1-afee-1c750e234a37-var-run\") pod \"ovn-controller-ovs-d9vtq\" (UID: \"30b05330-faf4-44e1-afee-1c750e234a37\") " pod="openstack/ovn-controller-ovs-d9vtq" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 
12:26:32.194583 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/30b05330-faf4-44e1-afee-1c750e234a37-etc-ovs\") pod \"ovn-controller-ovs-d9vtq\" (UID: \"30b05330-faf4-44e1-afee-1c750e234a37\") " pod="openstack/ovn-controller-ovs-d9vtq" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.194644 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c06ee73b-4168-4ef3-b268-db5e976febbf-combined-ca-bundle\") pod \"ovn-controller-hqfv8\" (UID: \"c06ee73b-4168-4ef3-b268-db5e976febbf\") " pod="openstack/ovn-controller-hqfv8" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.194670 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/30b05330-faf4-44e1-afee-1c750e234a37-var-lib\") pod \"ovn-controller-ovs-d9vtq\" (UID: \"30b05330-faf4-44e1-afee-1c750e234a37\") " pod="openstack/ovn-controller-ovs-d9vtq" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.194685 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c06ee73b-4168-4ef3-b268-db5e976febbf-var-run\") pod \"ovn-controller-hqfv8\" (UID: \"c06ee73b-4168-4ef3-b268-db5e976febbf\") " pod="openstack/ovn-controller-hqfv8" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.194708 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c06ee73b-4168-4ef3-b268-db5e976febbf-var-log-ovn\") pod \"ovn-controller-hqfv8\" (UID: \"c06ee73b-4168-4ef3-b268-db5e976febbf\") " pod="openstack/ovn-controller-hqfv8" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.194743 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqsxx\" (UniqueName: \"kubernetes.io/projected/c06ee73b-4168-4ef3-b268-db5e976febbf-kube-api-access-cqsxx\") pod \"ovn-controller-hqfv8\" (UID: \"c06ee73b-4168-4ef3-b268-db5e976febbf\") " pod="openstack/ovn-controller-hqfv8" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.194777 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c06ee73b-4168-4ef3-b268-db5e976febbf-ovn-controller-tls-certs\") pod \"ovn-controller-hqfv8\" (UID: \"c06ee73b-4168-4ef3-b268-db5e976febbf\") " pod="openstack/ovn-controller-hqfv8" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.194795 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsdtr\" (UniqueName: \"kubernetes.io/projected/30b05330-faf4-44e1-afee-1c750e234a37-kube-api-access-jsdtr\") pod \"ovn-controller-ovs-d9vtq\" (UID: \"30b05330-faf4-44e1-afee-1c750e234a37\") " pod="openstack/ovn-controller-ovs-d9vtq" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.194817 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c06ee73b-4168-4ef3-b268-db5e976febbf-var-run-ovn\") pod \"ovn-controller-hqfv8\" (UID: \"c06ee73b-4168-4ef3-b268-db5e976febbf\") " pod="openstack/ovn-controller-hqfv8" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.194841 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/30b05330-faf4-44e1-afee-1c750e234a37-scripts\") pod \"ovn-controller-ovs-d9vtq\" (UID: \"30b05330-faf4-44e1-afee-1c750e234a37\") " pod="openstack/ovn-controller-ovs-d9vtq" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.194860 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/30b05330-faf4-44e1-afee-1c750e234a37-var-log\") pod \"ovn-controller-ovs-d9vtq\" (UID: \"30b05330-faf4-44e1-afee-1c750e234a37\") " pod="openstack/ovn-controller-ovs-d9vtq" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.195459 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/30b05330-faf4-44e1-afee-1c750e234a37-var-log\") pod \"ovn-controller-ovs-d9vtq\" (UID: \"30b05330-faf4-44e1-afee-1c750e234a37\") " pod="openstack/ovn-controller-ovs-d9vtq" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.197572 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c06ee73b-4168-4ef3-b268-db5e976febbf-var-log-ovn\") pod \"ovn-controller-hqfv8\" (UID: \"c06ee73b-4168-4ef3-b268-db5e976febbf\") " pod="openstack/ovn-controller-hqfv8" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.198005 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/30b05330-faf4-44e1-afee-1c750e234a37-etc-ovs\") pod \"ovn-controller-ovs-d9vtq\" (UID: \"30b05330-faf4-44e1-afee-1c750e234a37\") " pod="openstack/ovn-controller-ovs-d9vtq" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.198491 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/30b05330-faf4-44e1-afee-1c750e234a37-var-run\") pod \"ovn-controller-ovs-d9vtq\" (UID: \"30b05330-faf4-44e1-afee-1c750e234a37\") " pod="openstack/ovn-controller-ovs-d9vtq" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.199121 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/30b05330-faf4-44e1-afee-1c750e234a37-var-lib\") pod \"ovn-controller-ovs-d9vtq\" (UID: \"30b05330-faf4-44e1-afee-1c750e234a37\") " pod="openstack/ovn-controller-ovs-d9vtq" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.199369 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c06ee73b-4168-4ef3-b268-db5e976febbf-var-run\") pod \"ovn-controller-hqfv8\" (UID: \"c06ee73b-4168-4ef3-b268-db5e976febbf\") " pod="openstack/ovn-controller-hqfv8" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.200456 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c06ee73b-4168-4ef3-b268-db5e976febbf-scripts\") pod \"ovn-controller-hqfv8\" (UID: \"c06ee73b-4168-4ef3-b268-db5e976febbf\") " pod="openstack/ovn-controller-hqfv8" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.200866 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30b05330-faf4-44e1-afee-1c750e234a37-scripts\") pod \"ovn-controller-ovs-d9vtq\" (UID: \"30b05330-faf4-44e1-afee-1c750e234a37\") " pod="openstack/ovn-controller-ovs-d9vtq" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.201959 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c06ee73b-4168-4ef3-b268-db5e976febbf-var-run-ovn\") pod \"ovn-controller-hqfv8\" (UID: \"c06ee73b-4168-4ef3-b268-db5e976febbf\") " pod="openstack/ovn-controller-hqfv8" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.202631 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c06ee73b-4168-4ef3-b268-db5e976febbf-ovn-controller-tls-certs\") pod \"ovn-controller-hqfv8\" (UID: \"c06ee73b-4168-4ef3-b268-db5e976febbf\") " pod="openstack/ovn-controller-hqfv8" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.203072 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c06ee73b-4168-4ef3-b268-db5e976febbf-combined-ca-bundle\") pod \"ovn-controller-hqfv8\" (UID: \"c06ee73b-4168-4ef3-b268-db5e976febbf\") " pod="openstack/ovn-controller-hqfv8" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.216958 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsdtr\" (UniqueName: \"kubernetes.io/projected/30b05330-faf4-44e1-afee-1c750e234a37-kube-api-access-jsdtr\") pod \"ovn-controller-ovs-d9vtq\" (UID: \"30b05330-faf4-44e1-afee-1c750e234a37\") " pod="openstack/ovn-controller-ovs-d9vtq" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.219027 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqsxx\" (UniqueName: \"kubernetes.io/projected/c06ee73b-4168-4ef3-b268-db5e976febbf-kube-api-access-cqsxx\") pod \"ovn-controller-hqfv8\" (UID: \"c06ee73b-4168-4ef3-b268-db5e976febbf\") " pod="openstack/ovn-controller-hqfv8" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.334159 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hqfv8" Dec 09 12:26:32 crc kubenswrapper[4970]: I1209 12:26:32.372899 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-d9vtq" Dec 09 12:26:34 crc kubenswrapper[4970]: I1209 12:26:34.407643 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 09 12:26:34 crc kubenswrapper[4970]: I1209 12:26:34.410234 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 09 12:26:34 crc kubenswrapper[4970]: I1209 12:26:34.414448 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 09 12:26:34 crc kubenswrapper[4970]: I1209 12:26:34.414710 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 09 12:26:34 crc kubenswrapper[4970]: I1209 12:26:34.414848 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-42hp7" Dec 09 12:26:34 crc kubenswrapper[4970]: I1209 12:26:34.414976 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Dec 09 12:26:34 crc kubenswrapper[4970]: I1209 12:26:34.415801 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 09 12:26:34 crc kubenswrapper[4970]: I1209 12:26:34.417042 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 09 12:26:34 crc kubenswrapper[4970]: I1209 12:26:34.447633 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"84ec21d9-6227-439f-984f-1d48a7fdd5b9\") " pod="openstack/ovsdbserver-sb-0" Dec 09 12:26:34 crc kubenswrapper[4970]: I1209 12:26:34.447773 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84ec21d9-6227-439f-984f-1d48a7fdd5b9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"84ec21d9-6227-439f-984f-1d48a7fdd5b9\") " pod="openstack/ovsdbserver-sb-0" Dec 09 12:26:34 crc kubenswrapper[4970]: I1209 12:26:34.447801 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9rzj\" (UniqueName: \"kubernetes.io/projected/84ec21d9-6227-439f-984f-1d48a7fdd5b9-kube-api-access-g9rzj\") pod \"ovsdbserver-sb-0\" (UID: \"84ec21d9-6227-439f-984f-1d48a7fdd5b9\") " pod="openstack/ovsdbserver-sb-0" Dec 09 12:26:34 crc kubenswrapper[4970]: I1209 12:26:34.447835 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/84ec21d9-6227-439f-984f-1d48a7fdd5b9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"84ec21d9-6227-439f-984f-1d48a7fdd5b9\") " pod="openstack/ovsdbserver-sb-0" Dec 09 12:26:34 crc kubenswrapper[4970]: I1209 12:26:34.447851 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/84ec21d9-6227-439f-984f-1d48a7fdd5b9-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"84ec21d9-6227-439f-984f-1d48a7fdd5b9\") " pod="openstack/ovsdbserver-sb-0" Dec 09 12:26:34 crc kubenswrapper[4970]: I1209 12:26:34.448996 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84ec21d9-6227-439f-984f-1d48a7fdd5b9-config\") pod \"ovsdbserver-sb-0\" (UID: \"84ec21d9-6227-439f-984f-1d48a7fdd5b9\") " pod="openstack/ovsdbserver-sb-0" Dec 09 12:26:34 crc kubenswrapper[4970]: I1209 12:26:34.449029 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/84ec21d9-6227-439f-984f-1d48a7fdd5b9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"84ec21d9-6227-439f-984f-1d48a7fdd5b9\") " pod="openstack/ovsdbserver-sb-0" Dec 09 12:26:34 crc kubenswrapper[4970]: I1209 12:26:34.449087 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/84ec21d9-6227-439f-984f-1d48a7fdd5b9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"84ec21d9-6227-439f-984f-1d48a7fdd5b9\") " pod="openstack/ovsdbserver-sb-0" Dec 09 12:26:34 crc kubenswrapper[4970]: I1209 12:26:34.550742 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/84ec21d9-6227-439f-984f-1d48a7fdd5b9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"84ec21d9-6227-439f-984f-1d48a7fdd5b9\") " pod="openstack/ovsdbserver-sb-0" Dec 09 12:26:34 crc kubenswrapper[4970]: I1209 12:26:34.551080 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"84ec21d9-6227-439f-984f-1d48a7fdd5b9\") " pod="openstack/ovsdbserver-sb-0" Dec 09 12:26:34 crc kubenswrapper[4970]: I1209 12:26:34.551227 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84ec21d9-6227-439f-984f-1d48a7fdd5b9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"84ec21d9-6227-439f-984f-1d48a7fdd5b9\") " pod="openstack/ovsdbserver-sb-0" Dec 09 12:26:34 crc kubenswrapper[4970]: I1209 12:26:34.551353 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9rzj\" (UniqueName: \"kubernetes.io/projected/84ec21d9-6227-439f-984f-1d48a7fdd5b9-kube-api-access-g9rzj\") pod \"ovsdbserver-sb-0\" (UID: \"84ec21d9-6227-439f-984f-1d48a7fdd5b9\") " pod="openstack/ovsdbserver-sb-0" Dec 09 12:26:34 crc kubenswrapper[4970]: I1209 12:26:34.551454 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/84ec21d9-6227-439f-984f-1d48a7fdd5b9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"84ec21d9-6227-439f-984f-1d48a7fdd5b9\") " pod="openstack/ovsdbserver-sb-0" Dec 09 12:26:34 crc kubenswrapper[4970]: I1209 12:26:34.551543 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/84ec21d9-6227-439f-984f-1d48a7fdd5b9-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"84ec21d9-6227-439f-984f-1d48a7fdd5b9\") " pod="openstack/ovsdbserver-sb-0" Dec 09 12:26:34 crc kubenswrapper[4970]: I1209 12:26:34.551644 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84ec21d9-6227-439f-984f-1d48a7fdd5b9-config\") pod \"ovsdbserver-sb-0\" (UID: \"84ec21d9-6227-439f-984f-1d48a7fdd5b9\") " pod="openstack/ovsdbserver-sb-0" Dec 09 12:26:34 crc kubenswrapper[4970]: I1209 12:26:34.551744 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/84ec21d9-6227-439f-984f-1d48a7fdd5b9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"84ec21d9-6227-439f-984f-1d48a7fdd5b9\") " pod="openstack/ovsdbserver-sb-0" Dec 09 12:26:34 crc kubenswrapper[4970]: I1209 
12:26:34.551368 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/84ec21d9-6227-439f-984f-1d48a7fdd5b9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"84ec21d9-6227-439f-984f-1d48a7fdd5b9\") " pod="openstack/ovsdbserver-sb-0" Dec 09 12:26:34 crc kubenswrapper[4970]: I1209 12:26:34.551454 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"84ec21d9-6227-439f-984f-1d48a7fdd5b9\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-sb-0" Dec 09 12:26:34 crc kubenswrapper[4970]: I1209 12:26:34.552368 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84ec21d9-6227-439f-984f-1d48a7fdd5b9-config\") pod \"ovsdbserver-sb-0\" (UID: \"84ec21d9-6227-439f-984f-1d48a7fdd5b9\") " pod="openstack/ovsdbserver-sb-0" Dec 09 12:26:34 crc kubenswrapper[4970]: I1209 12:26:34.552799 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/84ec21d9-6227-439f-984f-1d48a7fdd5b9-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"84ec21d9-6227-439f-984f-1d48a7fdd5b9\") " pod="openstack/ovsdbserver-sb-0" Dec 09 12:26:34 crc kubenswrapper[4970]: I1209 12:26:34.557030 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/84ec21d9-6227-439f-984f-1d48a7fdd5b9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"84ec21d9-6227-439f-984f-1d48a7fdd5b9\") " pod="openstack/ovsdbserver-sb-0" Dec 09 12:26:34 crc kubenswrapper[4970]: I1209 12:26:34.557151 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/84ec21d9-6227-439f-984f-1d48a7fdd5b9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"84ec21d9-6227-439f-984f-1d48a7fdd5b9\") " pod="openstack/ovsdbserver-sb-0" Dec 09 12:26:34 crc kubenswrapper[4970]: I1209 12:26:34.561205 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84ec21d9-6227-439f-984f-1d48a7fdd5b9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"84ec21d9-6227-439f-984f-1d48a7fdd5b9\") " pod="openstack/ovsdbserver-sb-0" Dec 09 12:26:34 crc kubenswrapper[4970]: I1209 12:26:34.574908 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9rzj\" (UniqueName: \"kubernetes.io/projected/84ec21d9-6227-439f-984f-1d48a7fdd5b9-kube-api-access-g9rzj\") pod \"ovsdbserver-sb-0\" (UID: \"84ec21d9-6227-439f-984f-1d48a7fdd5b9\") " pod="openstack/ovsdbserver-sb-0" Dec 09 12:26:34 crc kubenswrapper[4970]: I1209 12:26:34.583694 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"84ec21d9-6227-439f-984f-1d48a7fdd5b9\") " pod="openstack/ovsdbserver-sb-0" Dec 09 12:26:34 crc kubenswrapper[4970]: I1209 12:26:34.748708 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 09 12:26:35 crc kubenswrapper[4970]: I1209 12:26:35.266592 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 09 12:26:35 crc kubenswrapper[4970]: I1209 12:26:35.269056 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 09 12:26:35 crc kubenswrapper[4970]: I1209 12:26:35.271868 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 09 12:26:35 crc kubenswrapper[4970]: I1209 12:26:35.272438 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 09 12:26:35 crc kubenswrapper[4970]: I1209 12:26:35.272471 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 09 12:26:35 crc kubenswrapper[4970]: I1209 12:26:35.272578 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-6r4kt" Dec 09 12:26:35 crc kubenswrapper[4970]: I1209 12:26:35.286325 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 09 12:26:35 crc kubenswrapper[4970]: I1209 12:26:35.468123 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/310f94d5-9c85-470d-a381-a34ea67ba43b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"310f94d5-9c85-470d-a381-a34ea67ba43b\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:26:35 crc kubenswrapper[4970]: I1209 12:26:35.468222 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/310f94d5-9c85-470d-a381-a34ea67ba43b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"310f94d5-9c85-470d-a381-a34ea67ba43b\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:26:35 crc kubenswrapper[4970]: I1209 12:26:35.468298 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/310f94d5-9c85-470d-a381-a34ea67ba43b-config\") pod \"ovsdbserver-nb-0\" (UID: \"310f94d5-9c85-470d-a381-a34ea67ba43b\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:26:35 crc kubenswrapper[4970]: I1209 12:26:35.468367 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49krs\" (UniqueName: \"kubernetes.io/projected/310f94d5-9c85-470d-a381-a34ea67ba43b-kube-api-access-49krs\") pod \"ovsdbserver-nb-0\" (UID: \"310f94d5-9c85-470d-a381-a34ea67ba43b\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:26:35 crc kubenswrapper[4970]: I1209 12:26:35.468551 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/310f94d5-9c85-470d-a381-a34ea67ba43b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"310f94d5-9c85-470d-a381-a34ea67ba43b\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:26:35 crc kubenswrapper[4970]: I1209 12:26:35.468616 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/310f94d5-9c85-470d-a381-a34ea67ba43b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"310f94d5-9c85-470d-a381-a34ea67ba43b\") " 
pod="openstack/ovsdbserver-nb-0" Dec 09 12:26:35 crc kubenswrapper[4970]: I1209 12:26:35.468707 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/310f94d5-9c85-470d-a381-a34ea67ba43b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"310f94d5-9c85-470d-a381-a34ea67ba43b\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:26:35 crc kubenswrapper[4970]: I1209 12:26:35.468798 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"310f94d5-9c85-470d-a381-a34ea67ba43b\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:26:35 crc kubenswrapper[4970]: I1209 12:26:35.570445 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/310f94d5-9c85-470d-a381-a34ea67ba43b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"310f94d5-9c85-470d-a381-a34ea67ba43b\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:26:35 crc kubenswrapper[4970]: I1209 12:26:35.570545 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/310f94d5-9c85-470d-a381-a34ea67ba43b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"310f94d5-9c85-470d-a381-a34ea67ba43b\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:26:35 crc kubenswrapper[4970]: I1209 12:26:35.570605 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/310f94d5-9c85-470d-a381-a34ea67ba43b-config\") pod \"ovsdbserver-nb-0\" (UID: \"310f94d5-9c85-470d-a381-a34ea67ba43b\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:26:35 crc kubenswrapper[4970]: I1209 12:26:35.570635 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49krs\" (UniqueName: \"kubernetes.io/projected/310f94d5-9c85-470d-a381-a34ea67ba43b-kube-api-access-49krs\") pod \"ovsdbserver-nb-0\" (UID: \"310f94d5-9c85-470d-a381-a34ea67ba43b\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:26:35 crc kubenswrapper[4970]: I1209 12:26:35.570693 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/310f94d5-9c85-470d-a381-a34ea67ba43b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"310f94d5-9c85-470d-a381-a34ea67ba43b\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:26:35 crc kubenswrapper[4970]: I1209 12:26:35.570728 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/310f94d5-9c85-470d-a381-a34ea67ba43b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"310f94d5-9c85-470d-a381-a34ea67ba43b\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:26:35 crc kubenswrapper[4970]: I1209 12:26:35.570768 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/310f94d5-9c85-470d-a381-a34ea67ba43b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"310f94d5-9c85-470d-a381-a34ea67ba43b\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:26:35 crc kubenswrapper[4970]: I1209 12:26:35.570840 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"310f94d5-9c85-470d-a381-a34ea67ba43b\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:26:35 crc kubenswrapper[4970]: I1209 12:26:35.571068 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"310f94d5-9c85-470d-a381-a34ea67ba43b\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-nb-0" Dec 09 12:26:35 crc kubenswrapper[4970]: I1209 12:26:35.572574 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/310f94d5-9c85-470d-a381-a34ea67ba43b-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"310f94d5-9c85-470d-a381-a34ea67ba43b\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:26:35 crc kubenswrapper[4970]: I1209 12:26:35.573632 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/310f94d5-9c85-470d-a381-a34ea67ba43b-config\") pod \"ovsdbserver-nb-0\" (UID: \"310f94d5-9c85-470d-a381-a34ea67ba43b\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:26:35 crc kubenswrapper[4970]: I1209 12:26:35.575034 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/310f94d5-9c85-470d-a381-a34ea67ba43b-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"310f94d5-9c85-470d-a381-a34ea67ba43b\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:26:35 crc kubenswrapper[4970]: I1209 12:26:35.581457 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/310f94d5-9c85-470d-a381-a34ea67ba43b-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"310f94d5-9c85-470d-a381-a34ea67ba43b\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:26:35 crc kubenswrapper[4970]: I1209 12:26:35.581569 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/310f94d5-9c85-470d-a381-a34ea67ba43b-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"310f94d5-9c85-470d-a381-a34ea67ba43b\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:26:35 crc kubenswrapper[4970]: I1209 12:26:35.582018 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/310f94d5-9c85-470d-a381-a34ea67ba43b-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"310f94d5-9c85-470d-a381-a34ea67ba43b\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:26:35 crc kubenswrapper[4970]: I1209 12:26:35.591138 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49krs\" (UniqueName: \"kubernetes.io/projected/310f94d5-9c85-470d-a381-a34ea67ba43b-kube-api-access-49krs\") pod \"ovsdbserver-nb-0\" (UID: \"310f94d5-9c85-470d-a381-a34ea67ba43b\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:26:35 crc kubenswrapper[4970]: I1209 12:26:35.597767 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"310f94d5-9c85-470d-a381-a34ea67ba43b\") " pod="openstack/ovsdbserver-nb-0" Dec 09 12:26:35 crc kubenswrapper[4970]: I1209 12:26:35.613589 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 09 12:26:40 crc kubenswrapper[4970]: E1209 12:26:40.927429 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 09 12:26:40 crc kubenswrapper[4970]: E1209 12:26:40.928118 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q2wtx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-gq2j7_openstack(3ea82bf6-58e0-4894-bd7b-34a965c37c23): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:26:40 crc kubenswrapper[4970]: E1209 12:26:40.929314 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-gq2j7" podUID="3ea82bf6-58e0-4894-bd7b-34a965c37c23" Dec 09 12:26:40 crc kubenswrapper[4970]: E1209 12:26:40.955994 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 09 12:26:40 crc kubenswrapper[4970]: E1209 12:26:40.956185 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jsgd5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-mgxrw_openstack(af131622-dd81-4dc0-9ea8-101b02b2aad8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:26:40 crc kubenswrapper[4970]: E1209 12:26:40.957408 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-mgxrw" podUID="af131622-dd81-4dc0-9ea8-101b02b2aad8" Dec 09 12:26:41 crc kubenswrapper[4970]: E1209 12:26:41.021740 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 09 12:26:41 crc kubenswrapper[4970]: E1209 12:26:41.021912 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hd4h7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-bmn7r_openstack(6e58f7b2-2023-45f5-af9e-dc4f225a30c8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:26:41 crc kubenswrapper[4970]: E1209 12:26:41.023115 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-bmn7r" podUID="6e58f7b2-2023-45f5-af9e-dc4f225a30c8" Dec 09 12:26:42 crc kubenswrapper[4970]: I1209 12:26:42.654959 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-gq2j7" Dec 09 12:26:42 crc kubenswrapper[4970]: I1209 12:26:42.669903 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bmn7r" Dec 09 12:26:42 crc kubenswrapper[4970]: I1209 12:26:42.708831 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 09 12:26:42 crc kubenswrapper[4970]: I1209 12:26:42.753021 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ea82bf6-58e0-4894-bd7b-34a965c37c23-config\") pod \"3ea82bf6-58e0-4894-bd7b-34a965c37c23\" (UID: \"3ea82bf6-58e0-4894-bd7b-34a965c37c23\") " Dec 09 12:26:42 crc kubenswrapper[4970]: I1209 12:26:42.753188 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e58f7b2-2023-45f5-af9e-dc4f225a30c8-config\") pod \"6e58f7b2-2023-45f5-af9e-dc4f225a30c8\" (UID: \"6e58f7b2-2023-45f5-af9e-dc4f225a30c8\") " Dec 09 12:26:42 crc kubenswrapper[4970]: I1209 12:26:42.753318 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2wtx\" (UniqueName: \"kubernetes.io/projected/3ea82bf6-58e0-4894-bd7b-34a965c37c23-kube-api-access-q2wtx\") pod \"3ea82bf6-58e0-4894-bd7b-34a965c37c23\" (UID: \"3ea82bf6-58e0-4894-bd7b-34a965c37c23\") " Dec 09 12:26:42 crc kubenswrapper[4970]: I1209 12:26:42.753445 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ea82bf6-58e0-4894-bd7b-34a965c37c23-dns-svc\") pod \"3ea82bf6-58e0-4894-bd7b-34a965c37c23\" (UID: \"3ea82bf6-58e0-4894-bd7b-34a965c37c23\") " Dec 09 12:26:42 crc kubenswrapper[4970]: I1209 12:26:42.753501 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hd4h7\" (UniqueName: \"kubernetes.io/projected/6e58f7b2-2023-45f5-af9e-dc4f225a30c8-kube-api-access-hd4h7\") pod \"6e58f7b2-2023-45f5-af9e-dc4f225a30c8\" (UID: \"6e58f7b2-2023-45f5-af9e-dc4f225a30c8\") " Dec 09 12:26:42 crc kubenswrapper[4970]: I1209 12:26:42.755669 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e58f7b2-2023-45f5-af9e-dc4f225a30c8-config" (OuterVolumeSpecName: "config") pod "6e58f7b2-2023-45f5-af9e-dc4f225a30c8" (UID: "6e58f7b2-2023-45f5-af9e-dc4f225a30c8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:26:42 crc kubenswrapper[4970]: I1209 12:26:42.756212 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ea82bf6-58e0-4894-bd7b-34a965c37c23-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3ea82bf6-58e0-4894-bd7b-34a965c37c23" (UID: "3ea82bf6-58e0-4894-bd7b-34a965c37c23"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:26:42 crc kubenswrapper[4970]: I1209 12:26:42.757723 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ea82bf6-58e0-4894-bd7b-34a965c37c23-config" (OuterVolumeSpecName: "config") pod "3ea82bf6-58e0-4894-bd7b-34a965c37c23" (UID: "3ea82bf6-58e0-4894-bd7b-34a965c37c23"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:26:42 crc kubenswrapper[4970]: I1209 12:26:42.761680 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ea82bf6-58e0-4894-bd7b-34a965c37c23-kube-api-access-q2wtx" (OuterVolumeSpecName: "kube-api-access-q2wtx") pod "3ea82bf6-58e0-4894-bd7b-34a965c37c23" (UID: "3ea82bf6-58e0-4894-bd7b-34a965c37c23"). InnerVolumeSpecName "kube-api-access-q2wtx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:26:42 crc kubenswrapper[4970]: I1209 12:26:42.762624 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e58f7b2-2023-45f5-af9e-dc4f225a30c8-kube-api-access-hd4h7" (OuterVolumeSpecName: "kube-api-access-hd4h7") pod "6e58f7b2-2023-45f5-af9e-dc4f225a30c8" (UID: "6e58f7b2-2023-45f5-af9e-dc4f225a30c8"). InnerVolumeSpecName "kube-api-access-hd4h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:26:42 crc kubenswrapper[4970]: I1209 12:26:42.856636 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e58f7b2-2023-45f5-af9e-dc4f225a30c8-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:42 crc kubenswrapper[4970]: I1209 12:26:42.856997 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2wtx\" (UniqueName: \"kubernetes.io/projected/3ea82bf6-58e0-4894-bd7b-34a965c37c23-kube-api-access-q2wtx\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:42 crc kubenswrapper[4970]: I1209 12:26:42.857131 4970 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ea82bf6-58e0-4894-bd7b-34a965c37c23-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:42 crc kubenswrapper[4970]: I1209 12:26:42.857272 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hd4h7\" (UniqueName: \"kubernetes.io/projected/6e58f7b2-2023-45f5-af9e-dc4f225a30c8-kube-api-access-hd4h7\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:42 crc kubenswrapper[4970]: I1209 12:26:42.857444 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ea82bf6-58e0-4894-bd7b-34a965c37c23-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:26:42 crc kubenswrapper[4970]: I1209 12:26:42.966585 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-bmn7r" event={"ID":"6e58f7b2-2023-45f5-af9e-dc4f225a30c8","Type":"ContainerDied","Data":"3577f15685d272aa84f3a70ffeba5c210f60e7002a2a36b2d2cb871a523446fd"} Dec 09 12:26:42 crc kubenswrapper[4970]: I1209 12:26:42.966883 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bmn7r" Dec 09 12:26:42 crc kubenswrapper[4970]: I1209 12:26:42.983385 4970 generic.go:334] "Generic (PLEG): container finished" podID="af131622-dd81-4dc0-9ea8-101b02b2aad8" containerID="a963568281078a57109c1a5a8045c871c6c1edd9f2824831ea1149975e6d6b05" exitCode=0 Dec 09 12:26:42 crc kubenswrapper[4970]: I1209 12:26:42.983486 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-mgxrw" event={"ID":"af131622-dd81-4dc0-9ea8-101b02b2aad8","Type":"ContainerDied","Data":"a963568281078a57109c1a5a8045c871c6c1edd9f2824831ea1149975e6d6b05"} Dec 09 12:26:42 crc kubenswrapper[4970]: I1209 12:26:42.989454 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"bbfa0031-fd11-45da-a991-36ef550cf64c","Type":"ContainerStarted","Data":"c29001cf2c7200b617675d4491352aadbf0fadd79f8e6af536db0f820541db7b"} Dec 09 12:26:42 crc kubenswrapper[4970]: I1209 12:26:42.991601 4970 generic.go:334] "Generic (PLEG): container finished" podID="95bba560-2a79-402a-ade3-e284f9c2a6e6" containerID="3a5a7acd576f01d2bb8419b9741af4b049c8622813081b4eccd4ac763f80981d" exitCode=0 Dec 09 12:26:42 crc kubenswrapper[4970]: I1209 12:26:42.991671 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-8hwcd" event={"ID":"95bba560-2a79-402a-ade3-e284f9c2a6e6","Type":"ContainerDied","Data":"3a5a7acd576f01d2bb8419b9741af4b049c8622813081b4eccd4ac763f80981d"} Dec 09 12:26:42 crc kubenswrapper[4970]: I1209 12:26:42.995878 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-gq2j7" event={"ID":"3ea82bf6-58e0-4894-bd7b-34a965c37c23","Type":"ContainerDied","Data":"7df5ea6fb2c2a6871a60e7a99d329704d0921d8ca984afa43f3934e9b47c0239"} Dec 09 12:26:42 crc kubenswrapper[4970]: I1209 12:26:42.995915 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-gq2j7" Dec 09 12:26:43 crc kubenswrapper[4970]: I1209 12:26:43.078956 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bmn7r"] Dec 09 12:26:43 crc kubenswrapper[4970]: I1209 12:26:43.079017 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bmn7r"] Dec 09 12:26:43 crc kubenswrapper[4970]: I1209 12:26:43.320316 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-gq2j7"] Dec 09 12:26:43 crc kubenswrapper[4970]: I1209 12:26:43.327504 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-gq2j7"] Dec 09 12:26:43 crc kubenswrapper[4970]: I1209 12:26:43.393280 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6cfd6d645d-g58lf"] Dec 09 12:26:43 crc kubenswrapper[4970]: I1209 12:26:43.405172 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 09 12:26:43 crc kubenswrapper[4970]: I1209 12:26:43.413834 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 09 12:26:43 crc kubenswrapper[4970]: I1209 12:26:43.422728 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 12:26:43 crc kubenswrapper[4970]: I1209 12:26:43.432412 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 12:26:43 crc kubenswrapper[4970]: I1209 12:26:43.438942 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-7d5fb4cbfb-826l7"] Dec 09 12:26:43 crc kubenswrapper[4970]: I1209 12:26:43.740905 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hqfv8"] Dec 09 12:26:43 crc kubenswrapper[4970]: I1209 12:26:43.831678 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ea82bf6-58e0-4894-bd7b-34a965c37c23" path="/var/lib/kubelet/pods/3ea82bf6-58e0-4894-bd7b-34a965c37c23/volumes" Dec 09 12:26:43 crc kubenswrapper[4970]: I1209 12:26:43.832567 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e58f7b2-2023-45f5-af9e-dc4f225a30c8" path="/var/lib/kubelet/pods/6e58f7b2-2023-45f5-af9e-dc4f225a30c8/volumes" Dec 09 12:26:44 crc kubenswrapper[4970]: I1209 12:26:44.009333 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"efb35edd-0684-4604-87bb-66e26970a864","Type":"ContainerStarted","Data":"4ddc877741fcb01df49e2410a913cead230c493184c024d0b7017440a25184e8"} Dec 09 12:26:44 crc kubenswrapper[4970]: I1209 12:26:44.011141 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d67f963f-36c2-4056-8b35-5a08e547ba33","Type":"ContainerStarted","Data":"d3cdacee1c4be24e22112ecebf58987469d27582f2b0e212babc7d946ea72606"} Dec 09 12:26:44 crc kubenswrapper[4970]: I1209 12:26:44.013131 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6cfd6d645d-g58lf" event={"ID":"6eadead4-f607-480a-a9ee-506be45e6a72","Type":"ContainerStarted","Data":"d0b90f157824c9d996e895ad5ea6b08122257120b146f485c7b80028849a0847"} Dec 09 12:26:44 crc kubenswrapper[4970]: I1209 12:26:44.014171 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-826l7" 
event={"ID":"582b6a7e-cfed-498e-af7e-f93ffe3ad4bd","Type":"ContainerStarted","Data":"0db8f4fc6ee988a467b78195c5efe95eea8d987e25215c0f1b558a2d3dc5fca7"} Dec 09 12:26:44 crc kubenswrapper[4970]: I1209 12:26:44.015289 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hqfv8" event={"ID":"c06ee73b-4168-4ef3-b268-db5e976febbf","Type":"ContainerStarted","Data":"5934657bc2531ef5cb396330db3e385b1cb9d88400c06f6f33d13d65e9c427eb"} Dec 09 12:26:44 crc kubenswrapper[4970]: I1209 12:26:44.016474 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"bb3e4622-df1b-4d70-8683-8672cebf6666","Type":"ContainerStarted","Data":"39dfccdecc60ea150fb4f5a123d301a663b23ced22358c66148a8ce2dad9d073"} Dec 09 12:26:44 crc kubenswrapper[4970]: I1209 12:26:44.018002 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"80219567-cf9b-45cf-9e69-21c871e190dc","Type":"ContainerStarted","Data":"be02f8b772ae7039cbe961e7457021681a0408aec3dd0a3c35d35181263b950b"} Dec 09 12:26:44 crc kubenswrapper[4970]: I1209 12:26:44.872427 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-d9vtq"] Dec 09 12:26:45 crc kubenswrapper[4970]: I1209 12:26:45.028333 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-d9vtq" event={"ID":"30b05330-faf4-44e1-afee-1c750e234a37","Type":"ContainerStarted","Data":"65c598854a2867ac2f86697e9cd9512c1be9e0c3c69fd811f720c05c5587e501"} Dec 09 12:26:45 crc kubenswrapper[4970]: I1209 12:26:45.388720 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 09 12:26:45 crc kubenswrapper[4970]: W1209 12:26:45.391137 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod310f94d5_9c85_470d_a381_a34ea67ba43b.slice/crio-917a7501d10adaacc6ebe31a508d479a6ffb11248eaf290409b1aece396e07db WatchSource:0}: Error finding container 917a7501d10adaacc6ebe31a508d479a6ffb11248eaf290409b1aece396e07db: Status 404 returned error can't find the container with id 917a7501d10adaacc6ebe31a508d479a6ffb11248eaf290409b1aece396e07db Dec 09 12:26:45 crc kubenswrapper[4970]: I1209 12:26:45.791666 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 09 12:26:45 crc kubenswrapper[4970]: W1209 12:26:45.799750 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84ec21d9_6227_439f_984f_1d48a7fdd5b9.slice/crio-b2678fe9fb9a485583bbcf6e496a16b91afa7d9ccf0cf6a04a96cdaec4fefcb5 WatchSource:0}: Error finding container b2678fe9fb9a485583bbcf6e496a16b91afa7d9ccf0cf6a04a96cdaec4fefcb5: Status 404 returned error can't find the container with id b2678fe9fb9a485583bbcf6e496a16b91afa7d9ccf0cf6a04a96cdaec4fefcb5 Dec 09 12:26:46 crc kubenswrapper[4970]: I1209 12:26:46.041085 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-8hwcd" event={"ID":"95bba560-2a79-402a-ade3-e284f9c2a6e6","Type":"ContainerStarted","Data":"76b92d99145deda0258815c9079ee715e1fe4e59052e3f76a363608ad200b9bd"} Dec 09 12:26:46 crc kubenswrapper[4970]: I1209 12:26:46.041604 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-8hwcd" Dec 09 12:26:46 crc kubenswrapper[4970]: I1209 12:26:46.043413 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cd722f79-8e7d-46eb-b8e2-6da28c0dead2","Type":"ContainerStarted","Data":"eea1871b9671e0c01c99cc6b47415fd1fda6332a9d71396fb468911958eeef77"} Dec 09 12:26:46 crc kubenswrapper[4970]: I1209 12:26:46.046261 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6cfd6d645d-g58lf" event={"ID":"6eadead4-f607-480a-a9ee-506be45e6a72","Type":"ContainerStarted","Data":"bf5796ce75215c52c571afc02611872a459a8e83f46a4378bb7f9be8c0788265"} Dec 09 12:26:46 crc kubenswrapper[4970]: I1209 12:26:46.048109 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"84ec21d9-6227-439f-984f-1d48a7fdd5b9","Type":"ContainerStarted","Data":"b2678fe9fb9a485583bbcf6e496a16b91afa7d9ccf0cf6a04a96cdaec4fefcb5"} Dec 09 12:26:46 crc kubenswrapper[4970]: I1209 12:26:46.050675 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7b822b3c-bdfc-4766-b56f-14696c6b34a0","Type":"ContainerStarted","Data":"e4d6ffc5b2f61330d4c9d4965fd2bb6d05068065683065a0f54d016f0c22adc5"} Dec 09 12:26:46 crc kubenswrapper[4970]: I1209 12:26:46.054039 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-mgxrw" event={"ID":"af131622-dd81-4dc0-9ea8-101b02b2aad8","Type":"ContainerStarted","Data":"f114ccefe8242e275d6ed6a85620e0c2a0f988f3cffdbc72a6c6a181144e4f22"} Dec 09 12:26:46 crc kubenswrapper[4970]: I1209 12:26:46.054630 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-mgxrw" Dec 09 12:26:46 crc kubenswrapper[4970]: I1209 12:26:46.055826 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"310f94d5-9c85-470d-a381-a34ea67ba43b","Type":"ContainerStarted","Data":"917a7501d10adaacc6ebe31a508d479a6ffb11248eaf290409b1aece396e07db"} Dec 09 12:26:46 crc kubenswrapper[4970]: I1209 12:26:46.076991 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-8hwcd" podStartSLOduration=5.215933794 podStartE2EDuration="25.076967562s" podCreationTimestamp="2025-12-09 12:26:21 +0000 UTC" firstStartedPulling="2025-12-09 12:26:22.300537057 +0000 UTC m=+1194.861018108" lastFinishedPulling="2025-12-09 12:26:42.161570825 +0000 UTC m=+1214.722051876" observedRunningTime="2025-12-09 12:26:46.060358982 +0000 UTC m=+1218.620840053" watchObservedRunningTime="2025-12-09 12:26:46.076967562 +0000 UTC m=+1218.637448633" Dec 09 12:26:46 crc kubenswrapper[4970]: I1209 12:26:46.135652 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-mgxrw" podStartSLOduration=-9223372010.71914 podStartE2EDuration="26.13563592s" podCreationTimestamp="2025-12-09 12:26:20 +0000 UTC" firstStartedPulling="2025-12-09 12:26:22.088778884 +0000 UTC m=+1194.649259935" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:26:46.131917929 +0000 UTC m=+1218.692398980" watchObservedRunningTime="2025-12-09 12:26:46.13563592 +0000 UTC m=+1218.696116971" Dec 09 12:26:46 crc kubenswrapper[4970]: I1209 12:26:46.150855 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6cfd6d645d-g58lf" podStartSLOduration=18.150837501 podStartE2EDuration="18.150837501s" podCreationTimestamp="2025-12-09 12:26:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-09 12:26:46.150522673 +0000 UTC m=+1218.711003744" watchObservedRunningTime="2025-12-09 12:26:46.150837501 +0000 UTC m=+1218.711318552" Dec 09 12:26:49 crc kubenswrapper[4970]: I1209 12:26:49.213129 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6cfd6d645d-g58lf" Dec 09 12:26:49 crc kubenswrapper[4970]: I1209 12:26:49.214076 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6cfd6d645d-g58lf" Dec 09 12:26:49 crc kubenswrapper[4970]: I1209 12:26:49.218003 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6cfd6d645d-g58lf" Dec 09 12:26:50 crc kubenswrapper[4970]: I1209 12:26:50.097659 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6cfd6d645d-g58lf" Dec 09 12:26:50 crc kubenswrapper[4970]: I1209 12:26:50.192281 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-598d8f6f5c-75bbn"] Dec 09 12:26:51 crc kubenswrapper[4970]: I1209 12:26:51.380046 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-mgxrw" Dec 09 12:26:51 crc kubenswrapper[4970]: I1209 12:26:51.762491 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-8hwcd" Dec 09 12:26:51 crc kubenswrapper[4970]: I1209 12:26:51.832374 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-mgxrw"] Dec 09 12:26:52 crc kubenswrapper[4970]: I1209 12:26:52.113118 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-mgxrw" podUID="af131622-dd81-4dc0-9ea8-101b02b2aad8" containerName="dnsmasq-dns" containerID="cri-o://f114ccefe8242e275d6ed6a85620e0c2a0f988f3cffdbc72a6c6a181144e4f22" gracePeriod=10 Dec 09 12:26:56 crc kubenswrapper[4970]: I1209 12:26:56.157479 4970 generic.go:334] "Generic (PLEG): container finished" podID="af131622-dd81-4dc0-9ea8-101b02b2aad8" containerID="f114ccefe8242e275d6ed6a85620e0c2a0f988f3cffdbc72a6c6a181144e4f22" exitCode=0 Dec 09 12:26:56 crc kubenswrapper[4970]: I1209 12:26:56.157602 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-mgxrw" event={"ID":"af131622-dd81-4dc0-9ea8-101b02b2aad8","Type":"ContainerDied","Data":"f114ccefe8242e275d6ed6a85620e0c2a0f988f3cffdbc72a6c6a181144e4f22"} Dec 09 12:26:56 crc kubenswrapper[4970]: I1209 12:26:56.378831 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-666b6646f7-mgxrw" podUID="af131622-dd81-4dc0-9ea8-101b02b2aad8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.126:5353: connect: connection refused" Dec 09 12:26:58 crc kubenswrapper[4970]: E1209 12:26:58.172428 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified" Dec 09 12:26:58 crc kubenswrapper[4970]: E1209 12:26:58.173951 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:ovsdb-server-init,Image:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,Command:[/usr/local/bin/container-scripts/init-ovsdb-server.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5bbh595h599h565h56ch58bh56fh55h684h5dbh58ch549h55chfbh589h65bhbdh65bh89h557h5b9h56bh66h8bh577h565h655hc9h594h695h57ch88q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-ovs,ReadOnly:false,MountPath:/etc/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log,ReadOnly:false,MountPath:/var/log/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib,ReadOnly:false,MountPath:/var/lib/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jsdtr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-ovs-d9vtq_openstack(30b05330-faf4-44e1-afee-1c750e234a37): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:26:58 crc kubenswrapper[4970]: E1209 12:26:58.175676 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-ovs-d9vtq" podUID="30b05330-faf4-44e1-afee-1c750e234a37" Dec 09 12:26:59 crc kubenswrapper[4970]: E1209 12:26:59.113611 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" Dec 09 12:26:59 crc kubenswrapper[4970]: E1209 12:26:59.114205 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key 
--ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5bbh595h599h565h56ch58bh56fh55h684h5dbh58ch549h55chfbh589h65bhbdh65bh89h557h5b9h56bh66h8bh577h565h655hc9h594h695h57ch88q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cqsxx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-hqfv8_openstack(c06ee73b-4168-4ef3-b268-db5e976febbf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:26:59 crc kubenswrapper[4970]: E1209 12:26:59.115520 4970 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-hqfv8" podUID="c06ee73b-4168-4ef3-b268-db5e976febbf" Dec 09 12:26:59 crc kubenswrapper[4970]: E1209 12:26:59.195294 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified\\\"\"" pod="openstack/ovn-controller-ovs-d9vtq" podUID="30b05330-faf4-44e1-afee-1c750e234a37" Dec 09 12:26:59 crc kubenswrapper[4970]: E1209 12:26:59.195329 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-hqfv8" podUID="c06ee73b-4168-4ef3-b268-db5e976febbf" Dec 09 12:26:59 crc kubenswrapper[4970]: E1209 12:26:59.518735 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:a69da8bbca8a28dd2925f864d51cc31cf761b10532c553095ba40b242ef701cb" Dec 09 12:26:59 crc kubenswrapper[4970]: E1209 12:26:59.518926 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:observability-ui-dashboards,Image:registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:a69da8bbca8a28dd2925f864d51cc31cf761b10532c553095ba40b242ef701cb,Command:[],Args:[-port=9443 -cert=/var/serving-cert/tls.crt -key=/var/serving-cert/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:web,HostPort:0,ContainerPort:9443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serving-cert,ReadOnly:true,MountPath:/var/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wzg8g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod observability-ui-dashboards-7d5fb4cbfb-826l7_openshift-operators(582b6a7e-cfed-498e-af7e-f93ffe3ad4bd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 12:26:59 crc kubenswrapper[4970]: E1209 12:26:59.520733 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"observability-ui-dashboards\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-826l7" podUID="582b6a7e-cfed-498e-af7e-f93ffe3ad4bd" Dec 09 12:26:59 crc kubenswrapper[4970]: I1209 12:26:59.865954 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-mgxrw" Dec 09 12:26:59 crc kubenswrapper[4970]: I1209 12:26:59.974814 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af131622-dd81-4dc0-9ea8-101b02b2aad8-config\") pod \"af131622-dd81-4dc0-9ea8-101b02b2aad8\" (UID: \"af131622-dd81-4dc0-9ea8-101b02b2aad8\") " Dec 09 12:26:59 crc kubenswrapper[4970]: I1209 12:26:59.975084 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af131622-dd81-4dc0-9ea8-101b02b2aad8-dns-svc\") pod \"af131622-dd81-4dc0-9ea8-101b02b2aad8\" (UID: \"af131622-dd81-4dc0-9ea8-101b02b2aad8\") " Dec 09 12:26:59 crc kubenswrapper[4970]: I1209 12:26:59.975135 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsgd5\" (UniqueName: \"kubernetes.io/projected/af131622-dd81-4dc0-9ea8-101b02b2aad8-kube-api-access-jsgd5\") pod \"af131622-dd81-4dc0-9ea8-101b02b2aad8\" (UID: \"af131622-dd81-4dc0-9ea8-101b02b2aad8\") " Dec 09 12:26:59 crc kubenswrapper[4970]: I1209 12:26:59.990468 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af131622-dd81-4dc0-9ea8-101b02b2aad8-kube-api-access-jsgd5" (OuterVolumeSpecName: "kube-api-access-jsgd5") pod "af131622-dd81-4dc0-9ea8-101b02b2aad8" (UID: "af131622-dd81-4dc0-9ea8-101b02b2aad8"). InnerVolumeSpecName "kube-api-access-jsgd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:27:00 crc kubenswrapper[4970]: I1209 12:27:00.024432 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af131622-dd81-4dc0-9ea8-101b02b2aad8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "af131622-dd81-4dc0-9ea8-101b02b2aad8" (UID: "af131622-dd81-4dc0-9ea8-101b02b2aad8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:00 crc kubenswrapper[4970]: I1209 12:27:00.025938 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af131622-dd81-4dc0-9ea8-101b02b2aad8-config" (OuterVolumeSpecName: "config") pod "af131622-dd81-4dc0-9ea8-101b02b2aad8" (UID: "af131622-dd81-4dc0-9ea8-101b02b2aad8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:00 crc kubenswrapper[4970]: I1209 12:27:00.077966 4970 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af131622-dd81-4dc0-9ea8-101b02b2aad8-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:00 crc kubenswrapper[4970]: I1209 12:27:00.077994 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsgd5\" (UniqueName: \"kubernetes.io/projected/af131622-dd81-4dc0-9ea8-101b02b2aad8-kube-api-access-jsgd5\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:00 crc kubenswrapper[4970]: I1209 12:27:00.078006 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af131622-dd81-4dc0-9ea8-101b02b2aad8-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:00 crc kubenswrapper[4970]: I1209 12:27:00.203182 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-mgxrw" Dec 09 12:27:00 crc kubenswrapper[4970]: I1209 12:27:00.203202 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-mgxrw" event={"ID":"af131622-dd81-4dc0-9ea8-101b02b2aad8","Type":"ContainerDied","Data":"bef670e2a3cd08de92d40478157bea999e7bc534471db6ba5deb1a0d90930ceb"} Dec 09 12:27:00 crc kubenswrapper[4970]: I1209 12:27:00.203267 4970 scope.go:117] "RemoveContainer" containerID="f114ccefe8242e275d6ed6a85620e0c2a0f988f3cffdbc72a6c6a181144e4f22" Dec 09 12:27:00 crc kubenswrapper[4970]: E1209 12:27:00.204990 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"observability-ui-dashboards\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:a69da8bbca8a28dd2925f864d51cc31cf761b10532c553095ba40b242ef701cb\\\"\"" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-826l7" podUID="582b6a7e-cfed-498e-af7e-f93ffe3ad4bd" Dec 09 12:27:00 crc kubenswrapper[4970]: I1209 12:27:00.254149 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-mgxrw"] Dec 09 12:27:00 crc kubenswrapper[4970]: I1209 12:27:00.262292 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-mgxrw"] Dec 09 12:27:00 crc kubenswrapper[4970]: I1209 12:27:00.607344 4970 scope.go:117] "RemoveContainer" containerID="a963568281078a57109c1a5a8045c871c6c1edd9f2824831ea1149975e6d6b05" Dec 09 12:27:00 crc kubenswrapper[4970]: E1209 12:27:00.960903 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Dec 09 12:27:00 crc kubenswrapper[4970]: E1209 12:27:00.960972 4970 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Dec 09 12:27:00 crc kubenswrapper[4970]: E1209 12:27:00.961129 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods 
--namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zpxhw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(80219567-cf9b-45cf-9e69-21c871e190dc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" logger="UnhandledError" Dec 09 12:27:00 crc kubenswrapper[4970]: E1209 12:27:00.962516 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="80219567-cf9b-45cf-9e69-21c871e190dc" Dec 09 12:27:01 crc kubenswrapper[4970]: E1209 12:27:01.215333 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="80219567-cf9b-45cf-9e69-21c871e190dc" Dec 09 12:27:01 crc kubenswrapper[4970]: I1209 12:27:01.284990 4970 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 12:27:01 crc kubenswrapper[4970]: I1209 12:27:01.838354 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af131622-dd81-4dc0-9ea8-101b02b2aad8" path="/var/lib/kubelet/pods/af131622-dd81-4dc0-9ea8-101b02b2aad8/volumes" Dec 09 12:27:02 crc kubenswrapper[4970]: I1209 12:27:02.233140 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"bbfa0031-fd11-45da-a991-36ef550cf64c","Type":"ContainerStarted","Data":"97e40c1170c6b87215e913aea63a29c97138d80bae0c28abc73d34ff9e83f35e"} Dec 09 12:27:02 
crc kubenswrapper[4970]: I1209 12:27:02.233304 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 09 12:27:02 crc kubenswrapper[4970]: I1209 12:27:02.235562 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d67f963f-36c2-4056-8b35-5a08e547ba33","Type":"ContainerStarted","Data":"a37531fcd3e3a7dd6bb6fa0087f9630e1a245ab95e9b6d821ef0ea66c926fc6a"} Dec 09 12:27:02 crc kubenswrapper[4970]: I1209 12:27:02.237522 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"84ec21d9-6227-439f-984f-1d48a7fdd5b9","Type":"ContainerStarted","Data":"aa79f96756c96f5bdfa4f31f1af06cdeb7e8a045a06f5157783d8794f600271e"} Dec 09 12:27:02 crc kubenswrapper[4970]: I1209 12:27:02.246954 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"efb35edd-0684-4604-87bb-66e26970a864","Type":"ContainerStarted","Data":"c743992463e8cdbc678a09dc7724ef0756da718a5cb8603ec9f312ad6a7ad9b7"} Dec 09 12:27:02 crc kubenswrapper[4970]: I1209 12:27:02.248644 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"310f94d5-9c85-470d-a381-a34ea67ba43b","Type":"ContainerStarted","Data":"bb48cf229d5d693d66950e2e2e16169117044fd9eaf5a4b37ece99da6c92f6fe"} Dec 09 12:27:02 crc kubenswrapper[4970]: I1209 12:27:02.263338 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=20.463179876 podStartE2EDuration="37.263307712s" podCreationTimestamp="2025-12-09 12:26:25 +0000 UTC" firstStartedPulling="2025-12-09 12:26:42.709453538 +0000 UTC m=+1215.269934589" lastFinishedPulling="2025-12-09 12:26:59.509581374 +0000 UTC m=+1232.070062425" observedRunningTime="2025-12-09 12:27:02.251967945 +0000 UTC m=+1234.812449006" watchObservedRunningTime="2025-12-09 12:27:02.263307712 +0000 UTC m=+1234.823788763" Dec 09 12:27:05 crc kubenswrapper[4970]: I1209 12:27:05.278705 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"bb3e4622-df1b-4d70-8683-8672cebf6666","Type":"ContainerStarted","Data":"c06514286bb111c9e0be623a217a45c85650f86040dfd0c6d5135e2298a5d7a4"} Dec 09 12:27:05 crc kubenswrapper[4970]: I1209 12:27:05.282454 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"84ec21d9-6227-439f-984f-1d48a7fdd5b9","Type":"ContainerStarted","Data":"af877467a00fb5e2855c82bf2d25f85a3db2da62640b1abc88a5e24a06bef804"} Dec 09 12:27:05 crc kubenswrapper[4970]: I1209 12:27:05.285027 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"310f94d5-9c85-470d-a381-a34ea67ba43b","Type":"ContainerStarted","Data":"ddd4f17f8b745cfa7d5a173848cc4a01feb49fe733984f2e3ecda49616dc70df"} Dec 09 12:27:05 crc kubenswrapper[4970]: I1209 12:27:05.348722 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=11.747752499 podStartE2EDuration="31.348702828s" podCreationTimestamp="2025-12-09 12:26:34 +0000 UTC" firstStartedPulling="2025-12-09 12:26:45.394018063 +0000 UTC m=+1217.954499114" lastFinishedPulling="2025-12-09 12:27:04.994968392 +0000 UTC m=+1237.555449443" observedRunningTime="2025-12-09 12:27:05.323210298 +0000 UTC m=+1237.883691359" watchObservedRunningTime="2025-12-09 12:27:05.348702828 +0000 UTC m=+1237.909183879" Dec 09 12:27:05 crc kubenswrapper[4970]: 
I1209 12:27:05.349473 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=13.168653754 podStartE2EDuration="32.349465619s" podCreationTimestamp="2025-12-09 12:26:33 +0000 UTC" firstStartedPulling="2025-12-09 12:26:45.802277705 +0000 UTC m=+1218.362758756" lastFinishedPulling="2025-12-09 12:27:04.98308957 +0000 UTC m=+1237.543570621" observedRunningTime="2025-12-09 12:27:05.342148131 +0000 UTC m=+1237.902629192" watchObservedRunningTime="2025-12-09 12:27:05.349465619 +0000 UTC m=+1237.909946660" Dec 09 12:27:05 crc kubenswrapper[4970]: I1209 12:27:05.614285 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Dec 09 12:27:05 crc kubenswrapper[4970]: I1209 12:27:05.614365 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Dec 09 12:27:05 crc kubenswrapper[4970]: I1209 12:27:05.653970 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Dec 09 12:27:06 crc kubenswrapper[4970]: I1209 12:27:06.306268 4970 generic.go:334] "Generic (PLEG): container finished" podID="d67f963f-36c2-4056-8b35-5a08e547ba33" containerID="a37531fcd3e3a7dd6bb6fa0087f9630e1a245ab95e9b6d821ef0ea66c926fc6a" exitCode=0 Dec 09 12:27:06 crc kubenswrapper[4970]: I1209 12:27:06.307104 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d67f963f-36c2-4056-8b35-5a08e547ba33","Type":"ContainerDied","Data":"a37531fcd3e3a7dd6bb6fa0087f9630e1a245ab95e9b6d821ef0ea66c926fc6a"} Dec 09 12:27:06 crc kubenswrapper[4970]: I1209 12:27:06.310304 4970 generic.go:334] "Generic (PLEG): container finished" podID="efb35edd-0684-4604-87bb-66e26970a864" containerID="c743992463e8cdbc678a09dc7724ef0756da718a5cb8603ec9f312ad6a7ad9b7" exitCode=0 Dec 09 12:27:06 crc kubenswrapper[4970]: I1209 12:27:06.311270 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"efb35edd-0684-4604-87bb-66e26970a864","Type":"ContainerDied","Data":"c743992463e8cdbc678a09dc7724ef0756da718a5cb8603ec9f312ad6a7ad9b7"} Dec 09 12:27:06 crc kubenswrapper[4970]: I1209 12:27:06.361474 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Dec 09 12:27:06 crc kubenswrapper[4970]: I1209 12:27:06.614411 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-dv56r"] Dec 09 12:27:06 crc kubenswrapper[4970]: E1209 12:27:06.615016 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af131622-dd81-4dc0-9ea8-101b02b2aad8" containerName="dnsmasq-dns" Dec 09 12:27:06 crc kubenswrapper[4970]: I1209 12:27:06.615031 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="af131622-dd81-4dc0-9ea8-101b02b2aad8" containerName="dnsmasq-dns" Dec 09 12:27:06 crc kubenswrapper[4970]: E1209 12:27:06.615044 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af131622-dd81-4dc0-9ea8-101b02b2aad8" containerName="init" Dec 09 12:27:06 crc kubenswrapper[4970]: I1209 12:27:06.615050 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="af131622-dd81-4dc0-9ea8-101b02b2aad8" containerName="init" Dec 09 12:27:06 crc kubenswrapper[4970]: I1209 12:27:06.615235 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="af131622-dd81-4dc0-9ea8-101b02b2aad8" containerName="dnsmasq-dns" Dec 09 12:27:06 crc kubenswrapper[4970]: I1209 
12:27:06.616191 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-dv56r" Dec 09 12:27:06 crc kubenswrapper[4970]: I1209 12:27:06.620844 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 09 12:27:06 crc kubenswrapper[4970]: I1209 12:27:06.635213 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-dv56r"] Dec 09 12:27:06 crc kubenswrapper[4970]: I1209 12:27:06.716668 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fae3e0e-2515-413f-80a0-8b3bc99e4e5d-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-dv56r\" (UID: \"5fae3e0e-2515-413f-80a0-8b3bc99e4e5d\") " pod="openstack/dnsmasq-dns-7fd796d7df-dv56r" Dec 09 12:27:06 crc kubenswrapper[4970]: I1209 12:27:06.716738 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5fae3e0e-2515-413f-80a0-8b3bc99e4e5d-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-dv56r\" (UID: \"5fae3e0e-2515-413f-80a0-8b3bc99e4e5d\") " pod="openstack/dnsmasq-dns-7fd796d7df-dv56r" Dec 09 12:27:06 crc kubenswrapper[4970]: I1209 12:27:06.717111 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fae3e0e-2515-413f-80a0-8b3bc99e4e5d-config\") pod \"dnsmasq-dns-7fd796d7df-dv56r\" (UID: \"5fae3e0e-2515-413f-80a0-8b3bc99e4e5d\") " pod="openstack/dnsmasq-dns-7fd796d7df-dv56r" Dec 09 12:27:06 crc kubenswrapper[4970]: I1209 12:27:06.717341 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngrsx\" (UniqueName: \"kubernetes.io/projected/5fae3e0e-2515-413f-80a0-8b3bc99e4e5d-kube-api-access-ngrsx\") pod \"dnsmasq-dns-7fd796d7df-dv56r\" (UID: \"5fae3e0e-2515-413f-80a0-8b3bc99e4e5d\") " pod="openstack/dnsmasq-dns-7fd796d7df-dv56r" Dec 09 12:27:06 crc kubenswrapper[4970]: I1209 12:27:06.820387 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fae3e0e-2515-413f-80a0-8b3bc99e4e5d-config\") pod \"dnsmasq-dns-7fd796d7df-dv56r\" (UID: \"5fae3e0e-2515-413f-80a0-8b3bc99e4e5d\") " pod="openstack/dnsmasq-dns-7fd796d7df-dv56r" Dec 09 12:27:06 crc kubenswrapper[4970]: I1209 12:27:06.820549 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngrsx\" (UniqueName: \"kubernetes.io/projected/5fae3e0e-2515-413f-80a0-8b3bc99e4e5d-kube-api-access-ngrsx\") pod \"dnsmasq-dns-7fd796d7df-dv56r\" (UID: \"5fae3e0e-2515-413f-80a0-8b3bc99e4e5d\") " pod="openstack/dnsmasq-dns-7fd796d7df-dv56r" Dec 09 12:27:06 crc kubenswrapper[4970]: I1209 12:27:06.821287 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fae3e0e-2515-413f-80a0-8b3bc99e4e5d-config\") pod \"dnsmasq-dns-7fd796d7df-dv56r\" (UID: \"5fae3e0e-2515-413f-80a0-8b3bc99e4e5d\") " pod="openstack/dnsmasq-dns-7fd796d7df-dv56r" Dec 09 12:27:06 crc kubenswrapper[4970]: I1209 12:27:06.822131 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fae3e0e-2515-413f-80a0-8b3bc99e4e5d-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-dv56r\" (UID: \"5fae3e0e-2515-413f-80a0-8b3bc99e4e5d\") " 
pod="openstack/dnsmasq-dns-7fd796d7df-dv56r" Dec 09 12:27:06 crc kubenswrapper[4970]: I1209 12:27:06.822212 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fae3e0e-2515-413f-80a0-8b3bc99e4e5d-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-dv56r\" (UID: \"5fae3e0e-2515-413f-80a0-8b3bc99e4e5d\") " pod="openstack/dnsmasq-dns-7fd796d7df-dv56r" Dec 09 12:27:06 crc kubenswrapper[4970]: I1209 12:27:06.822369 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5fae3e0e-2515-413f-80a0-8b3bc99e4e5d-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-dv56r\" (UID: \"5fae3e0e-2515-413f-80a0-8b3bc99e4e5d\") " pod="openstack/dnsmasq-dns-7fd796d7df-dv56r" Dec 09 12:27:06 crc kubenswrapper[4970]: I1209 12:27:06.823098 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5fae3e0e-2515-413f-80a0-8b3bc99e4e5d-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-dv56r\" (UID: \"5fae3e0e-2515-413f-80a0-8b3bc99e4e5d\") " pod="openstack/dnsmasq-dns-7fd796d7df-dv56r" Dec 09 12:27:06 crc kubenswrapper[4970]: I1209 12:27:06.829293 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-bcbw5"] Dec 09 12:27:06 crc kubenswrapper[4970]: I1209 12:27:06.830636 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-bcbw5" Dec 09 12:27:06 crc kubenswrapper[4970]: I1209 12:27:06.832641 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 09 12:27:06 crc kubenswrapper[4970]: I1209 12:27:06.855030 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngrsx\" (UniqueName: \"kubernetes.io/projected/5fae3e0e-2515-413f-80a0-8b3bc99e4e5d-kube-api-access-ngrsx\") pod \"dnsmasq-dns-7fd796d7df-dv56r\" (UID: \"5fae3e0e-2515-413f-80a0-8b3bc99e4e5d\") " pod="openstack/dnsmasq-dns-7fd796d7df-dv56r" Dec 09 12:27:06 crc kubenswrapper[4970]: I1209 12:27:06.861100 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-bcbw5"] Dec 09 12:27:06 crc kubenswrapper[4970]: I1209 12:27:06.923846 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v245\" (UniqueName: \"kubernetes.io/projected/337b2685-e8d8-4124-b1f6-d952d8939fb2-kube-api-access-6v245\") pod \"ovn-controller-metrics-bcbw5\" (UID: \"337b2685-e8d8-4124-b1f6-d952d8939fb2\") " pod="openstack/ovn-controller-metrics-bcbw5" Dec 09 12:27:06 crc kubenswrapper[4970]: I1209 12:27:06.923885 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/337b2685-e8d8-4124-b1f6-d952d8939fb2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-bcbw5\" (UID: \"337b2685-e8d8-4124-b1f6-d952d8939fb2\") " pod="openstack/ovn-controller-metrics-bcbw5" Dec 09 12:27:06 crc kubenswrapper[4970]: I1209 12:27:06.923965 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/337b2685-e8d8-4124-b1f6-d952d8939fb2-ovn-rundir\") pod \"ovn-controller-metrics-bcbw5\" (UID: \"337b2685-e8d8-4124-b1f6-d952d8939fb2\") " pod="openstack/ovn-controller-metrics-bcbw5" Dec 09 12:27:06 crc 
kubenswrapper[4970]: I1209 12:27:06.924143 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/337b2685-e8d8-4124-b1f6-d952d8939fb2-ovs-rundir\") pod \"ovn-controller-metrics-bcbw5\" (UID: \"337b2685-e8d8-4124-b1f6-d952d8939fb2\") " pod="openstack/ovn-controller-metrics-bcbw5" Dec 09 12:27:06 crc kubenswrapper[4970]: I1209 12:27:06.924175 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/337b2685-e8d8-4124-b1f6-d952d8939fb2-config\") pod \"ovn-controller-metrics-bcbw5\" (UID: \"337b2685-e8d8-4124-b1f6-d952d8939fb2\") " pod="openstack/ovn-controller-metrics-bcbw5" Dec 09 12:27:06 crc kubenswrapper[4970]: I1209 12:27:06.924210 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/337b2685-e8d8-4124-b1f6-d952d8939fb2-combined-ca-bundle\") pod \"ovn-controller-metrics-bcbw5\" (UID: \"337b2685-e8d8-4124-b1f6-d952d8939fb2\") " pod="openstack/ovn-controller-metrics-bcbw5" Dec 09 12:27:06 crc kubenswrapper[4970]: I1209 12:27:06.936812 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-dv56r" Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.010417 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-dv56r"] Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.027201 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/337b2685-e8d8-4124-b1f6-d952d8939fb2-ovn-rundir\") pod \"ovn-controller-metrics-bcbw5\" (UID: \"337b2685-e8d8-4124-b1f6-d952d8939fb2\") " pod="openstack/ovn-controller-metrics-bcbw5" Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.027391 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/337b2685-e8d8-4124-b1f6-d952d8939fb2-ovs-rundir\") pod \"ovn-controller-metrics-bcbw5\" (UID: \"337b2685-e8d8-4124-b1f6-d952d8939fb2\") " pod="openstack/ovn-controller-metrics-bcbw5" Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.027428 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/337b2685-e8d8-4124-b1f6-d952d8939fb2-config\") pod \"ovn-controller-metrics-bcbw5\" (UID: \"337b2685-e8d8-4124-b1f6-d952d8939fb2\") " pod="openstack/ovn-controller-metrics-bcbw5" Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.027463 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/337b2685-e8d8-4124-b1f6-d952d8939fb2-combined-ca-bundle\") pod \"ovn-controller-metrics-bcbw5\" (UID: \"337b2685-e8d8-4124-b1f6-d952d8939fb2\") " pod="openstack/ovn-controller-metrics-bcbw5" Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.027513 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v245\" (UniqueName: \"kubernetes.io/projected/337b2685-e8d8-4124-b1f6-d952d8939fb2-kube-api-access-6v245\") pod \"ovn-controller-metrics-bcbw5\" (UID: \"337b2685-e8d8-4124-b1f6-d952d8939fb2\") " pod="openstack/ovn-controller-metrics-bcbw5" Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.027538 4970 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/337b2685-e8d8-4124-b1f6-d952d8939fb2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-bcbw5\" (UID: \"337b2685-e8d8-4124-b1f6-d952d8939fb2\") " pod="openstack/ovn-controller-metrics-bcbw5" Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.028544 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/337b2685-e8d8-4124-b1f6-d952d8939fb2-config\") pod \"ovn-controller-metrics-bcbw5\" (UID: \"337b2685-e8d8-4124-b1f6-d952d8939fb2\") " pod="openstack/ovn-controller-metrics-bcbw5" Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.028850 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/337b2685-e8d8-4124-b1f6-d952d8939fb2-ovn-rundir\") pod \"ovn-controller-metrics-bcbw5\" (UID: \"337b2685-e8d8-4124-b1f6-d952d8939fb2\") " pod="openstack/ovn-controller-metrics-bcbw5" Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.028876 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/337b2685-e8d8-4124-b1f6-d952d8939fb2-ovs-rundir\") pod \"ovn-controller-metrics-bcbw5\" (UID: \"337b2685-e8d8-4124-b1f6-d952d8939fb2\") " pod="openstack/ovn-controller-metrics-bcbw5" Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.033499 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/337b2685-e8d8-4124-b1f6-d952d8939fb2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-bcbw5\" (UID: \"337b2685-e8d8-4124-b1f6-d952d8939fb2\") " pod="openstack/ovn-controller-metrics-bcbw5" Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.034190 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/337b2685-e8d8-4124-b1f6-d952d8939fb2-combined-ca-bundle\") pod \"ovn-controller-metrics-bcbw5\" (UID: \"337b2685-e8d8-4124-b1f6-d952d8939fb2\") " pod="openstack/ovn-controller-metrics-bcbw5" Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.048231 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v245\" (UniqueName: \"kubernetes.io/projected/337b2685-e8d8-4124-b1f6-d952d8939fb2-kube-api-access-6v245\") pod \"ovn-controller-metrics-bcbw5\" (UID: \"337b2685-e8d8-4124-b1f6-d952d8939fb2\") " pod="openstack/ovn-controller-metrics-bcbw5" Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.048581 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-gplst"] Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.051095 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-gplst" Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.057102 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.062039 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-gplst"] Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.158714 4970 util.go:30] "No sandbox for pod can be found. 
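From here the log shows the mirror image of the earlier teardown: `SyncLoop ADD` for the replacement pods (dnsmasq-dns-7fd796d7df-dv56r, ovn-controller-metrics-bcbw5, dnsmasq-dns-86db49b7ff-gplst), a `util.go:30` note that no sandbox exists yet, `VerifyControllerAttachedVolume` for each declared volume, then `MountVolume.SetUp succeeded`; the sandbox starts only after every mount lands. The earlier `cpu_manager`/`memory_manager` RemoveStaleState entries purge per-container state left by the deleted pod af131622-dd81-4dc0-9ea8-101b02b2aad8 before the replacement is admitted, and are expected despite their E-level severity. A sketch that tallies SetUp successes per pod from a journal stream, to confirm that all declared volumes mounted (program name and output format are illustrative):

```go
// mounttally.go (hypothetical): count MountVolume.SetUp successes per pod
// from piped journald output, e.g. journalctl -u kubelet | go run mounttally.go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Volume names appear as \"name\" (escaped quotes) in the raw log text.
	re := regexp.MustCompile(`MountVolume\.SetUp succeeded for volume \\"([^"\\]+)\\".*pod="([^"]+)"`)
	mounted := map[string][]string{}

	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			mounted[m[2]] = append(mounted[m[2]], m[1])
		}
	}
	for pod, vols := range mounted {
		fmt.Printf("%s: %d volumes mounted %v\n", pod, len(vols), vols)
	}
}
```

For dnsmasq-dns-7fd796d7df-dv56r, for example, this should report the four volumes seen above: config, dns-svc, ovsdbserver-nb, and kube-api-access-ngrsx.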
Need to start a new one" pod="openstack/ovn-controller-metrics-bcbw5" Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.235376 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41231f9e-fef3-4e77-8f93-98224df06d8a-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-gplst\" (UID: \"41231f9e-fef3-4e77-8f93-98224df06d8a\") " pod="openstack/dnsmasq-dns-86db49b7ff-gplst" Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.235447 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41231f9e-fef3-4e77-8f93-98224df06d8a-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-gplst\" (UID: \"41231f9e-fef3-4e77-8f93-98224df06d8a\") " pod="openstack/dnsmasq-dns-86db49b7ff-gplst" Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.235476 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7qnw\" (UniqueName: \"kubernetes.io/projected/41231f9e-fef3-4e77-8f93-98224df06d8a-kube-api-access-t7qnw\") pod \"dnsmasq-dns-86db49b7ff-gplst\" (UID: \"41231f9e-fef3-4e77-8f93-98224df06d8a\") " pod="openstack/dnsmasq-dns-86db49b7ff-gplst" Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.235497 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41231f9e-fef3-4e77-8f93-98224df06d8a-config\") pod \"dnsmasq-dns-86db49b7ff-gplst\" (UID: \"41231f9e-fef3-4e77-8f93-98224df06d8a\") " pod="openstack/dnsmasq-dns-86db49b7ff-gplst" Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.235533 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41231f9e-fef3-4e77-8f93-98224df06d8a-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-gplst\" (UID: \"41231f9e-fef3-4e77-8f93-98224df06d8a\") " pod="openstack/dnsmasq-dns-86db49b7ff-gplst" Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.339929 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41231f9e-fef3-4e77-8f93-98224df06d8a-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-gplst\" (UID: \"41231f9e-fef3-4e77-8f93-98224df06d8a\") " pod="openstack/dnsmasq-dns-86db49b7ff-gplst" Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.340060 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41231f9e-fef3-4e77-8f93-98224df06d8a-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-gplst\" (UID: \"41231f9e-fef3-4e77-8f93-98224df06d8a\") " pod="openstack/dnsmasq-dns-86db49b7ff-gplst" Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.340098 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7qnw\" (UniqueName: \"kubernetes.io/projected/41231f9e-fef3-4e77-8f93-98224df06d8a-kube-api-access-t7qnw\") pod \"dnsmasq-dns-86db49b7ff-gplst\" (UID: \"41231f9e-fef3-4e77-8f93-98224df06d8a\") " pod="openstack/dnsmasq-dns-86db49b7ff-gplst" Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.340125 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41231f9e-fef3-4e77-8f93-98224df06d8a-config\") pod \"dnsmasq-dns-86db49b7ff-gplst\" 
(UID: \"41231f9e-fef3-4e77-8f93-98224df06d8a\") " pod="openstack/dnsmasq-dns-86db49b7ff-gplst" Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.340188 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41231f9e-fef3-4e77-8f93-98224df06d8a-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-gplst\" (UID: \"41231f9e-fef3-4e77-8f93-98224df06d8a\") " pod="openstack/dnsmasq-dns-86db49b7ff-gplst" Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.341239 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41231f9e-fef3-4e77-8f93-98224df06d8a-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-gplst\" (UID: \"41231f9e-fef3-4e77-8f93-98224df06d8a\") " pod="openstack/dnsmasq-dns-86db49b7ff-gplst" Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.341896 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41231f9e-fef3-4e77-8f93-98224df06d8a-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-gplst\" (UID: \"41231f9e-fef3-4e77-8f93-98224df06d8a\") " pod="openstack/dnsmasq-dns-86db49b7ff-gplst" Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.342611 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41231f9e-fef3-4e77-8f93-98224df06d8a-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-gplst\" (UID: \"41231f9e-fef3-4e77-8f93-98224df06d8a\") " pod="openstack/dnsmasq-dns-86db49b7ff-gplst" Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.343062 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41231f9e-fef3-4e77-8f93-98224df06d8a-config\") pod \"dnsmasq-dns-86db49b7ff-gplst\" (UID: \"41231f9e-fef3-4e77-8f93-98224df06d8a\") " pod="openstack/dnsmasq-dns-86db49b7ff-gplst" Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.348908 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d67f963f-36c2-4056-8b35-5a08e547ba33","Type":"ContainerStarted","Data":"0b76e539c9467d803720a1152f4028e1448b8fe59fe7ab2031f8726c0226934e"} Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.371429 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"efb35edd-0684-4604-87bb-66e26970a864","Type":"ContainerStarted","Data":"15db490b621293bafd9b1c3f92107ade7fa8c93c1835060cf3a7ac853a1a388a"} Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.412498 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7qnw\" (UniqueName: \"kubernetes.io/projected/41231f9e-fef3-4e77-8f93-98224df06d8a-kube-api-access-t7qnw\") pod \"dnsmasq-dns-86db49b7ff-gplst\" (UID: \"41231f9e-fef3-4e77-8f93-98224df06d8a\") " pod="openstack/dnsmasq-dns-86db49b7ff-gplst" Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.438736 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-gplst" Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.441811 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=27.583872162 podStartE2EDuration="43.441788791s" podCreationTimestamp="2025-12-09 12:26:24 +0000 UTC" firstStartedPulling="2025-12-09 12:26:43.651666685 +0000 UTC m=+1216.212147736" lastFinishedPulling="2025-12-09 12:26:59.509583314 +0000 UTC m=+1232.070064365" observedRunningTime="2025-12-09 12:27:07.406437384 +0000 UTC m=+1239.966918435" watchObservedRunningTime="2025-12-09 12:27:07.441788791 +0000 UTC m=+1240.002269842" Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.483817 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=27.649413955 podStartE2EDuration="44.483797798s" podCreationTimestamp="2025-12-09 12:26:23 +0000 UTC" firstStartedPulling="2025-12-09 12:26:43.643600387 +0000 UTC m=+1216.204081438" lastFinishedPulling="2025-12-09 12:27:00.47798423 +0000 UTC m=+1233.038465281" observedRunningTime="2025-12-09 12:27:07.42809952 +0000 UTC m=+1239.988580571" watchObservedRunningTime="2025-12-09 12:27:07.483797798 +0000 UTC m=+1240.044278849" Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.598669 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-dv56r"] Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.748938 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.763118 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-bcbw5"] Dec 09 12:27:07 crc kubenswrapper[4970]: W1209 12:27:07.778671 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod337b2685_e8d8_4124_b1f6_d952d8939fb2.slice/crio-f680dd0666cca86bc5412628a47deed8223c6d373cb08cacbfab917d47521051 WatchSource:0}: Error finding container f680dd0666cca86bc5412628a47deed8223c6d373cb08cacbfab917d47521051: Status 404 returned error can't find the container with id f680dd0666cca86bc5412628a47deed8223c6d373cb08cacbfab917d47521051 Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.804815 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Dec 09 12:27:07 crc kubenswrapper[4970]: I1209 12:27:07.911461 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-gplst"] Dec 09 12:27:07 crc kubenswrapper[4970]: W1209 12:27:07.920872 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41231f9e_fef3_4e77_8f93_98224df06d8a.slice/crio-c7287b92d042e656dec3f6c0dcbe3dcd848e8ec7e10ee9beabbcb7cdb59da858 WatchSource:0}: Error finding container c7287b92d042e656dec3f6c0dcbe3dcd848e8ec7e10ee9beabbcb7cdb59da858: Status 404 returned error can't find the container with id c7287b92d042e656dec3f6c0dcbe3dcd848e8ec7e10ee9beabbcb7cdb59da858 Dec 09 12:27:08 crc kubenswrapper[4970]: I1209 12:27:08.384164 4970 generic.go:334] "Generic (PLEG): container finished" podID="41231f9e-fef3-4e77-8f93-98224df06d8a" containerID="95e54837379d2e32f2c73359028b159e8ea1e2e71f4062ca7ed901cd2799cea0" exitCode=0 Dec 09 12:27:08 crc kubenswrapper[4970]: I1209 12:27:08.384308 4970 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-gplst" event={"ID":"41231f9e-fef3-4e77-8f93-98224df06d8a","Type":"ContainerDied","Data":"95e54837379d2e32f2c73359028b159e8ea1e2e71f4062ca7ed901cd2799cea0"} Dec 09 12:27:08 crc kubenswrapper[4970]: I1209 12:27:08.385376 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-gplst" event={"ID":"41231f9e-fef3-4e77-8f93-98224df06d8a","Type":"ContainerStarted","Data":"c7287b92d042e656dec3f6c0dcbe3dcd848e8ec7e10ee9beabbcb7cdb59da858"} Dec 09 12:27:08 crc kubenswrapper[4970]: I1209 12:27:08.388332 4970 generic.go:334] "Generic (PLEG): container finished" podID="5fae3e0e-2515-413f-80a0-8b3bc99e4e5d" containerID="dbd878fdcb4a9da346b3ffc2a10a11764fdb00780dd48488a56dafa80468ea7c" exitCode=0 Dec 09 12:27:08 crc kubenswrapper[4970]: I1209 12:27:08.388401 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-dv56r" event={"ID":"5fae3e0e-2515-413f-80a0-8b3bc99e4e5d","Type":"ContainerDied","Data":"dbd878fdcb4a9da346b3ffc2a10a11764fdb00780dd48488a56dafa80468ea7c"} Dec 09 12:27:08 crc kubenswrapper[4970]: I1209 12:27:08.388432 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-dv56r" event={"ID":"5fae3e0e-2515-413f-80a0-8b3bc99e4e5d","Type":"ContainerStarted","Data":"89057a87e3e1a9e4f1354233114c28553dbee4e90f88a7eca2815f5421e0cc66"} Dec 09 12:27:08 crc kubenswrapper[4970]: I1209 12:27:08.391938 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-bcbw5" event={"ID":"337b2685-e8d8-4124-b1f6-d952d8939fb2","Type":"ContainerStarted","Data":"28fc8a7fc57b003a3808e9622d1ac2f2869d07800b12423f24da54a5cdbc89c0"} Dec 09 12:27:08 crc kubenswrapper[4970]: I1209 12:27:08.391975 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-bcbw5" event={"ID":"337b2685-e8d8-4124-b1f6-d952d8939fb2","Type":"ContainerStarted","Data":"f680dd0666cca86bc5412628a47deed8223c6d373cb08cacbfab917d47521051"} Dec 09 12:27:08 crc kubenswrapper[4970]: I1209 12:27:08.391989 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 09 12:27:08 crc kubenswrapper[4970]: I1209 12:27:08.427380 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-bcbw5" podStartSLOduration=2.427363212 podStartE2EDuration="2.427363212s" podCreationTimestamp="2025-12-09 12:27:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:27:08.423565119 +0000 UTC m=+1240.984046170" watchObservedRunningTime="2025-12-09 12:27:08.427363212 +0000 UTC m=+1240.987844263" Dec 09 12:27:08 crc kubenswrapper[4970]: I1209 12:27:08.486559 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 09 12:27:08 crc kubenswrapper[4970]: I1209 12:27:08.756671 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Dec 09 12:27:08 crc kubenswrapper[4970]: I1209 12:27:08.759029 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Dec 09 12:27:08 crc kubenswrapper[4970]: I1209 12:27:08.761996 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 09 12:27:08 crc kubenswrapper[4970]: I1209 12:27:08.764795 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 09 12:27:08 crc kubenswrapper[4970]: I1209 12:27:08.765067 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 09 12:27:08 crc kubenswrapper[4970]: I1209 12:27:08.765780 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-8knc4" Dec 09 12:27:08 crc kubenswrapper[4970]: I1209 12:27:08.769004 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Dec 09 12:27:08 crc kubenswrapper[4970]: I1209 12:27:08.892959 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2076cff1-9fb3-45f4-99db-d2aa56cafc96-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"2076cff1-9fb3-45f4-99db-d2aa56cafc96\") " pod="openstack/ovn-northd-0" Dec 09 12:27:08 crc kubenswrapper[4970]: I1209 12:27:08.895056 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2076cff1-9fb3-45f4-99db-d2aa56cafc96-config\") pod \"ovn-northd-0\" (UID: \"2076cff1-9fb3-45f4-99db-d2aa56cafc96\") " pod="openstack/ovn-northd-0" Dec 09 12:27:08 crc kubenswrapper[4970]: I1209 12:27:08.895455 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2076cff1-9fb3-45f4-99db-d2aa56cafc96-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"2076cff1-9fb3-45f4-99db-d2aa56cafc96\") " pod="openstack/ovn-northd-0" Dec 09 12:27:08 crc kubenswrapper[4970]: I1209 12:27:08.895494 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkhhf\" (UniqueName: \"kubernetes.io/projected/2076cff1-9fb3-45f4-99db-d2aa56cafc96-kube-api-access-qkhhf\") pod \"ovn-northd-0\" (UID: \"2076cff1-9fb3-45f4-99db-d2aa56cafc96\") " pod="openstack/ovn-northd-0" Dec 09 12:27:08 crc kubenswrapper[4970]: I1209 12:27:08.895522 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2076cff1-9fb3-45f4-99db-d2aa56cafc96-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"2076cff1-9fb3-45f4-99db-d2aa56cafc96\") " pod="openstack/ovn-northd-0" Dec 09 12:27:08 crc kubenswrapper[4970]: I1209 12:27:08.895688 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2076cff1-9fb3-45f4-99db-d2aa56cafc96-scripts\") pod \"ovn-northd-0\" (UID: \"2076cff1-9fb3-45f4-99db-d2aa56cafc96\") " pod="openstack/ovn-northd-0" Dec 09 12:27:08 crc kubenswrapper[4970]: I1209 12:27:08.895723 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2076cff1-9fb3-45f4-99db-d2aa56cafc96-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"2076cff1-9fb3-45f4-99db-d2aa56cafc96\") " pod="openstack/ovn-northd-0" Dec 09 12:27:08 crc kubenswrapper[4970]: 
I1209 12:27:08.915795 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-dv56r" Dec 09 12:27:08 crc kubenswrapper[4970]: I1209 12:27:08.997817 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngrsx\" (UniqueName: \"kubernetes.io/projected/5fae3e0e-2515-413f-80a0-8b3bc99e4e5d-kube-api-access-ngrsx\") pod \"5fae3e0e-2515-413f-80a0-8b3bc99e4e5d\" (UID: \"5fae3e0e-2515-413f-80a0-8b3bc99e4e5d\") " Dec 09 12:27:08 crc kubenswrapper[4970]: I1209 12:27:08.997947 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5fae3e0e-2515-413f-80a0-8b3bc99e4e5d-ovsdbserver-nb\") pod \"5fae3e0e-2515-413f-80a0-8b3bc99e4e5d\" (UID: \"5fae3e0e-2515-413f-80a0-8b3bc99e4e5d\") " Dec 09 12:27:08 crc kubenswrapper[4970]: I1209 12:27:08.998061 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fae3e0e-2515-413f-80a0-8b3bc99e4e5d-dns-svc\") pod \"5fae3e0e-2515-413f-80a0-8b3bc99e4e5d\" (UID: \"5fae3e0e-2515-413f-80a0-8b3bc99e4e5d\") " Dec 09 12:27:08 crc kubenswrapper[4970]: I1209 12:27:08.998131 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fae3e0e-2515-413f-80a0-8b3bc99e4e5d-config\") pod \"5fae3e0e-2515-413f-80a0-8b3bc99e4e5d\" (UID: \"5fae3e0e-2515-413f-80a0-8b3bc99e4e5d\") " Dec 09 12:27:08 crc kubenswrapper[4970]: I1209 12:27:08.998427 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2076cff1-9fb3-45f4-99db-d2aa56cafc96-scripts\") pod \"ovn-northd-0\" (UID: \"2076cff1-9fb3-45f4-99db-d2aa56cafc96\") " pod="openstack/ovn-northd-0" Dec 09 12:27:08 crc kubenswrapper[4970]: I1209 12:27:08.998460 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2076cff1-9fb3-45f4-99db-d2aa56cafc96-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"2076cff1-9fb3-45f4-99db-d2aa56cafc96\") " pod="openstack/ovn-northd-0" Dec 09 12:27:08 crc kubenswrapper[4970]: I1209 12:27:08.998545 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2076cff1-9fb3-45f4-99db-d2aa56cafc96-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"2076cff1-9fb3-45f4-99db-d2aa56cafc96\") " pod="openstack/ovn-northd-0" Dec 09 12:27:08 crc kubenswrapper[4970]: I1209 12:27:08.998610 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2076cff1-9fb3-45f4-99db-d2aa56cafc96-config\") pod \"ovn-northd-0\" (UID: \"2076cff1-9fb3-45f4-99db-d2aa56cafc96\") " pod="openstack/ovn-northd-0" Dec 09 12:27:08 crc kubenswrapper[4970]: I1209 12:27:08.998639 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2076cff1-9fb3-45f4-99db-d2aa56cafc96-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"2076cff1-9fb3-45f4-99db-d2aa56cafc96\") " pod="openstack/ovn-northd-0" Dec 09 12:27:08 crc kubenswrapper[4970]: I1209 12:27:08.998678 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkhhf\" (UniqueName: 
\"kubernetes.io/projected/2076cff1-9fb3-45f4-99db-d2aa56cafc96-kube-api-access-qkhhf\") pod \"ovn-northd-0\" (UID: \"2076cff1-9fb3-45f4-99db-d2aa56cafc96\") " pod="openstack/ovn-northd-0" Dec 09 12:27:08 crc kubenswrapper[4970]: I1209 12:27:08.998703 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2076cff1-9fb3-45f4-99db-d2aa56cafc96-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"2076cff1-9fb3-45f4-99db-d2aa56cafc96\") " pod="openstack/ovn-northd-0" Dec 09 12:27:08 crc kubenswrapper[4970]: I1209 12:27:08.999215 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2076cff1-9fb3-45f4-99db-d2aa56cafc96-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"2076cff1-9fb3-45f4-99db-d2aa56cafc96\") " pod="openstack/ovn-northd-0" Dec 09 12:27:08 crc kubenswrapper[4970]: I1209 12:27:08.999414 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2076cff1-9fb3-45f4-99db-d2aa56cafc96-scripts\") pod \"ovn-northd-0\" (UID: \"2076cff1-9fb3-45f4-99db-d2aa56cafc96\") " pod="openstack/ovn-northd-0" Dec 09 12:27:08 crc kubenswrapper[4970]: I1209 12:27:08.999505 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2076cff1-9fb3-45f4-99db-d2aa56cafc96-config\") pod \"ovn-northd-0\" (UID: \"2076cff1-9fb3-45f4-99db-d2aa56cafc96\") " pod="openstack/ovn-northd-0" Dec 09 12:27:09 crc kubenswrapper[4970]: I1209 12:27:09.004567 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/2076cff1-9fb3-45f4-99db-d2aa56cafc96-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"2076cff1-9fb3-45f4-99db-d2aa56cafc96\") " pod="openstack/ovn-northd-0" Dec 09 12:27:09 crc kubenswrapper[4970]: I1209 12:27:09.005899 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2076cff1-9fb3-45f4-99db-d2aa56cafc96-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"2076cff1-9fb3-45f4-99db-d2aa56cafc96\") " pod="openstack/ovn-northd-0" Dec 09 12:27:09 crc kubenswrapper[4970]: I1209 12:27:09.007125 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fae3e0e-2515-413f-80a0-8b3bc99e4e5d-kube-api-access-ngrsx" (OuterVolumeSpecName: "kube-api-access-ngrsx") pod "5fae3e0e-2515-413f-80a0-8b3bc99e4e5d" (UID: "5fae3e0e-2515-413f-80a0-8b3bc99e4e5d"). InnerVolumeSpecName "kube-api-access-ngrsx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:27:09 crc kubenswrapper[4970]: I1209 12:27:09.009022 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2076cff1-9fb3-45f4-99db-d2aa56cafc96-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"2076cff1-9fb3-45f4-99db-d2aa56cafc96\") " pod="openstack/ovn-northd-0" Dec 09 12:27:09 crc kubenswrapper[4970]: I1209 12:27:09.015822 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkhhf\" (UniqueName: \"kubernetes.io/projected/2076cff1-9fb3-45f4-99db-d2aa56cafc96-kube-api-access-qkhhf\") pod \"ovn-northd-0\" (UID: \"2076cff1-9fb3-45f4-99db-d2aa56cafc96\") " pod="openstack/ovn-northd-0" Dec 09 12:27:09 crc kubenswrapper[4970]: I1209 12:27:09.024689 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fae3e0e-2515-413f-80a0-8b3bc99e4e5d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5fae3e0e-2515-413f-80a0-8b3bc99e4e5d" (UID: "5fae3e0e-2515-413f-80a0-8b3bc99e4e5d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:09 crc kubenswrapper[4970]: I1209 12:27:09.025686 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fae3e0e-2515-413f-80a0-8b3bc99e4e5d-config" (OuterVolumeSpecName: "config") pod "5fae3e0e-2515-413f-80a0-8b3bc99e4e5d" (UID: "5fae3e0e-2515-413f-80a0-8b3bc99e4e5d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:09 crc kubenswrapper[4970]: I1209 12:27:09.029118 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fae3e0e-2515-413f-80a0-8b3bc99e4e5d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5fae3e0e-2515-413f-80a0-8b3bc99e4e5d" (UID: "5fae3e0e-2515-413f-80a0-8b3bc99e4e5d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:09 crc kubenswrapper[4970]: I1209 12:27:09.101075 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fae3e0e-2515-413f-80a0-8b3bc99e4e5d-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:09 crc kubenswrapper[4970]: I1209 12:27:09.101121 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngrsx\" (UniqueName: \"kubernetes.io/projected/5fae3e0e-2515-413f-80a0-8b3bc99e4e5d-kube-api-access-ngrsx\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:09 crc kubenswrapper[4970]: I1209 12:27:09.101139 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5fae3e0e-2515-413f-80a0-8b3bc99e4e5d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:09 crc kubenswrapper[4970]: I1209 12:27:09.101152 4970 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fae3e0e-2515-413f-80a0-8b3bc99e4e5d-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:09 crc kubenswrapper[4970]: I1209 12:27:09.111469 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Dec 09 12:27:09 crc kubenswrapper[4970]: I1209 12:27:09.408876 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-gplst" event={"ID":"41231f9e-fef3-4e77-8f93-98224df06d8a","Type":"ContainerStarted","Data":"d4493c02c3b884d2ce90bf30b914adfd09ef6decca4768fa4a0aa7cabebffa3e"} Dec 09 12:27:09 crc kubenswrapper[4970]: I1209 12:27:09.409599 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-gplst" Dec 09 12:27:09 crc kubenswrapper[4970]: I1209 12:27:09.410514 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-dv56r" Dec 09 12:27:09 crc kubenswrapper[4970]: I1209 12:27:09.411625 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-dv56r" event={"ID":"5fae3e0e-2515-413f-80a0-8b3bc99e4e5d","Type":"ContainerDied","Data":"89057a87e3e1a9e4f1354233114c28553dbee4e90f88a7eca2815f5421e0cc66"} Dec 09 12:27:09 crc kubenswrapper[4970]: I1209 12:27:09.411701 4970 scope.go:117] "RemoveContainer" containerID="dbd878fdcb4a9da346b3ffc2a10a11764fdb00780dd48488a56dafa80468ea7c" Dec 09 12:27:09 crc kubenswrapper[4970]: I1209 12:27:09.437741 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-gplst" podStartSLOduration=2.437721494 podStartE2EDuration="2.437721494s" podCreationTimestamp="2025-12-09 12:27:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:27:09.436471861 +0000 UTC m=+1241.996952942" watchObservedRunningTime="2025-12-09 12:27:09.437721494 +0000 UTC m=+1241.998202545" Dec 09 12:27:09 crc kubenswrapper[4970]: I1209 12:27:09.531835 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-dv56r"] Dec 09 12:27:09 crc kubenswrapper[4970]: I1209 12:27:09.545669 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-dv56r"] Dec 09 12:27:09 crc kubenswrapper[4970]: I1209 12:27:09.658781 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 09 12:27:09 crc kubenswrapper[4970]: I1209 12:27:09.823236 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fae3e0e-2515-413f-80a0-8b3bc99e4e5d" path="/var/lib/kubelet/pods/5fae3e0e-2515-413f-80a0-8b3bc99e4e5d/volumes" Dec 09 12:27:10 crc kubenswrapper[4970]: I1209 12:27:10.420418 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2076cff1-9fb3-45f4-99db-d2aa56cafc96","Type":"ContainerStarted","Data":"cebd124afc8c40d5429b42ca0f73bcef44f18c8aa3d4cb60a3484ea3208e9aea"} Dec 09 12:27:10 crc kubenswrapper[4970]: E1209 12:27:10.928569 4970 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.245:33640->38.102.83.245:35377: write tcp 38.102.83.245:33640->38.102.83.245:35377: write: connection reset by peer Dec 09 12:27:10 crc kubenswrapper[4970]: I1209 12:27:10.987461 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 09 12:27:11 crc kubenswrapper[4970]: I1209 12:27:11.432971 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"2076cff1-9fb3-45f4-99db-d2aa56cafc96","Type":"ContainerStarted","Data":"358d97cc0294c00ffb3fd2e513dd88779a8394665df03544c1f6a889248f8ab3"} Dec 09 12:27:11 crc kubenswrapper[4970]: I1209 12:27:11.433603 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Dec 09 12:27:11 crc kubenswrapper[4970]: I1209 12:27:11.433637 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"2076cff1-9fb3-45f4-99db-d2aa56cafc96","Type":"ContainerStarted","Data":"468c166b10ec04e022ce804f4c2bbd577c2ded385de877305fc75f8916f73cbb"} Dec 09 12:27:11 crc kubenswrapper[4970]: I1209 12:27:11.434557 4970 generic.go:334] "Generic (PLEG): container finished" podID="bb3e4622-df1b-4d70-8683-8672cebf6666" containerID="c06514286bb111c9e0be623a217a45c85650f86040dfd0c6d5135e2298a5d7a4" exitCode=0 Dec 09 12:27:11 crc kubenswrapper[4970]: I1209 12:27:11.434593 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"bb3e4622-df1b-4d70-8683-8672cebf6666","Type":"ContainerDied","Data":"c06514286bb111c9e0be623a217a45c85650f86040dfd0c6d5135e2298a5d7a4"} Dec 09 12:27:11 crc kubenswrapper[4970]: I1209 12:27:11.472839 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.116252833 podStartE2EDuration="3.472817757s" podCreationTimestamp="2025-12-09 12:27:08 +0000 UTC" firstStartedPulling="2025-12-09 12:27:09.664324149 +0000 UTC m=+1242.224805200" lastFinishedPulling="2025-12-09 12:27:11.020889063 +0000 UTC m=+1243.581370124" observedRunningTime="2025-12-09 12:27:11.449586038 +0000 UTC m=+1244.010067089" watchObservedRunningTime="2025-12-09 12:27:11.472817757 +0000 UTC m=+1244.033298808" Dec 09 12:27:14 crc kubenswrapper[4970]: I1209 12:27:14.464293 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hqfv8" event={"ID":"c06ee73b-4168-4ef3-b268-db5e976febbf","Type":"ContainerStarted","Data":"7205b11b9354c366640e0ace3c6d1c8239f1a63d47e60902fb3ade517a525b6f"} Dec 09 12:27:14 crc kubenswrapper[4970]: I1209 12:27:14.464981 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-hqfv8" Dec 09 12:27:14 crc kubenswrapper[4970]: I1209 12:27:14.482970 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-hqfv8" podStartSLOduration=13.949619013 podStartE2EDuration="43.482954867s" podCreationTimestamp="2025-12-09 12:26:31 +0000 UTC" firstStartedPulling="2025-12-09 12:26:43.755549317 +0000 UTC m=+1216.316030368" lastFinishedPulling="2025-12-09 12:27:13.288885171 +0000 UTC m=+1245.849366222" observedRunningTime="2025-12-09 12:27:14.481754094 +0000 UTC m=+1247.042235165" watchObservedRunningTime="2025-12-09 12:27:14.482954867 +0000 UTC m=+1247.043435918" Dec 09 12:27:14 crc kubenswrapper[4970]: I1209 12:27:14.800015 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 09 12:27:14 crc kubenswrapper[4970]: I1209 12:27:14.800368 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 09 12:27:14 crc kubenswrapper[4970]: I1209 12:27:14.920717 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 09 12:27:15 crc kubenswrapper[4970]: I1209 12:27:15.252706 4970 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-console/console-598d8f6f5c-75bbn" podUID="a7013e9d-6527-41d3-a311-2f21b4961dfa" containerName="console" containerID="cri-o://27df706cc720744990802816011bf9ab2c90f3299d7f2bfe7080a5342e6295d7" gracePeriod=15 Dec 09 12:27:15 crc kubenswrapper[4970]: I1209 12:27:15.480616 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-598d8f6f5c-75bbn_a7013e9d-6527-41d3-a311-2f21b4961dfa/console/0.log" Dec 09 12:27:15 crc kubenswrapper[4970]: I1209 12:27:15.480682 4970 generic.go:334] "Generic (PLEG): container finished" podID="a7013e9d-6527-41d3-a311-2f21b4961dfa" containerID="27df706cc720744990802816011bf9ab2c90f3299d7f2bfe7080a5342e6295d7" exitCode=2 Dec 09 12:27:15 crc kubenswrapper[4970]: I1209 12:27:15.480781 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-598d8f6f5c-75bbn" event={"ID":"a7013e9d-6527-41d3-a311-2f21b4961dfa","Type":"ContainerDied","Data":"27df706cc720744990802816011bf9ab2c90f3299d7f2bfe7080a5342e6295d7"} Dec 09 12:27:15 crc kubenswrapper[4970]: I1209 12:27:15.562282 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 09 12:27:15 crc kubenswrapper[4970]: I1209 12:27:15.855181 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 09 12:27:15 crc kubenswrapper[4970]: I1209 12:27:15.855289 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 09 12:27:15 crc kubenswrapper[4970]: I1209 12:27:15.972184 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7217-account-create-update-zcxd7"] Dec 09 12:27:15 crc kubenswrapper[4970]: E1209 12:27:15.974393 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fae3e0e-2515-413f-80a0-8b3bc99e4e5d" containerName="init" Dec 09 12:27:15 crc kubenswrapper[4970]: I1209 12:27:15.974422 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fae3e0e-2515-413f-80a0-8b3bc99e4e5d" containerName="init" Dec 09 12:27:15 crc kubenswrapper[4970]: I1209 12:27:15.974699 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fae3e0e-2515-413f-80a0-8b3bc99e4e5d" containerName="init" Dec 09 12:27:15 crc kubenswrapper[4970]: I1209 12:27:15.975603 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7217-account-create-update-zcxd7" Dec 09 12:27:15 crc kubenswrapper[4970]: I1209 12:27:15.987613 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 09 12:27:15 crc kubenswrapper[4970]: I1209 12:27:15.989594 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7217-account-create-update-zcxd7"] Dec 09 12:27:16 crc kubenswrapper[4970]: I1209 12:27:16.024273 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-hmlhf"] Dec 09 12:27:16 crc kubenswrapper[4970]: I1209 12:27:16.032268 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-hmlhf" Dec 09 12:27:16 crc kubenswrapper[4970]: I1209 12:27:16.066376 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-hmlhf"] Dec 09 12:27:16 crc kubenswrapper[4970]: I1209 12:27:16.086556 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7dl2\" (UniqueName: \"kubernetes.io/projected/617712c7-90b6-478c-a60a-0d830b8582ab-kube-api-access-n7dl2\") pod \"placement-7217-account-create-update-zcxd7\" (UID: \"617712c7-90b6-478c-a60a-0d830b8582ab\") " pod="openstack/placement-7217-account-create-update-zcxd7" Dec 09 12:27:16 crc kubenswrapper[4970]: I1209 12:27:16.086638 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/252ca514-15c5-480d-a81e-d8230171c857-operator-scripts\") pod \"placement-db-create-hmlhf\" (UID: \"252ca514-15c5-480d-a81e-d8230171c857\") " pod="openstack/placement-db-create-hmlhf" Dec 09 12:27:16 crc kubenswrapper[4970]: I1209 12:27:16.086683 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/617712c7-90b6-478c-a60a-0d830b8582ab-operator-scripts\") pod \"placement-7217-account-create-update-zcxd7\" (UID: \"617712c7-90b6-478c-a60a-0d830b8582ab\") " pod="openstack/placement-7217-account-create-update-zcxd7" Dec 09 12:27:16 crc kubenswrapper[4970]: I1209 12:27:16.086744 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftnn8\" (UniqueName: \"kubernetes.io/projected/252ca514-15c5-480d-a81e-d8230171c857-kube-api-access-ftnn8\") pod \"placement-db-create-hmlhf\" (UID: \"252ca514-15c5-480d-a81e-d8230171c857\") " pod="openstack/placement-db-create-hmlhf" Dec 09 12:27:16 crc kubenswrapper[4970]: I1209 12:27:16.187987 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 09 12:27:16 crc kubenswrapper[4970]: I1209 12:27:16.188593 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftnn8\" (UniqueName: \"kubernetes.io/projected/252ca514-15c5-480d-a81e-d8230171c857-kube-api-access-ftnn8\") pod \"placement-db-create-hmlhf\" (UID: \"252ca514-15c5-480d-a81e-d8230171c857\") " pod="openstack/placement-db-create-hmlhf" Dec 09 12:27:16 crc kubenswrapper[4970]: I1209 12:27:16.188729 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7dl2\" (UniqueName: \"kubernetes.io/projected/617712c7-90b6-478c-a60a-0d830b8582ab-kube-api-access-n7dl2\") pod \"placement-7217-account-create-update-zcxd7\" (UID: \"617712c7-90b6-478c-a60a-0d830b8582ab\") " pod="openstack/placement-7217-account-create-update-zcxd7" Dec 09 12:27:16 crc kubenswrapper[4970]: I1209 12:27:16.188808 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/252ca514-15c5-480d-a81e-d8230171c857-operator-scripts\") pod \"placement-db-create-hmlhf\" (UID: \"252ca514-15c5-480d-a81e-d8230171c857\") " pod="openstack/placement-db-create-hmlhf" Dec 09 12:27:16 crc kubenswrapper[4970]: I1209 12:27:16.188872 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/617712c7-90b6-478c-a60a-0d830b8582ab-operator-scripts\") pod \"placement-7217-account-create-update-zcxd7\" (UID: \"617712c7-90b6-478c-a60a-0d830b8582ab\") " pod="openstack/placement-7217-account-create-update-zcxd7" Dec 09 12:27:16 crc kubenswrapper[4970]: I1209 12:27:16.189795 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/252ca514-15c5-480d-a81e-d8230171c857-operator-scripts\") pod \"placement-db-create-hmlhf\" (UID: \"252ca514-15c5-480d-a81e-d8230171c857\") " pod="openstack/placement-db-create-hmlhf" Dec 09 12:27:16 crc kubenswrapper[4970]: I1209 12:27:16.190035 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/617712c7-90b6-478c-a60a-0d830b8582ab-operator-scripts\") pod \"placement-7217-account-create-update-zcxd7\" (UID: \"617712c7-90b6-478c-a60a-0d830b8582ab\") " pod="openstack/placement-7217-account-create-update-zcxd7" Dec 09 12:27:16 crc kubenswrapper[4970]: I1209 12:27:16.212843 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftnn8\" (UniqueName: \"kubernetes.io/projected/252ca514-15c5-480d-a81e-d8230171c857-kube-api-access-ftnn8\") pod \"placement-db-create-hmlhf\" (UID: \"252ca514-15c5-480d-a81e-d8230171c857\") " pod="openstack/placement-db-create-hmlhf" Dec 09 12:27:16 crc kubenswrapper[4970]: I1209 12:27:16.213116 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7dl2\" (UniqueName: \"kubernetes.io/projected/617712c7-90b6-478c-a60a-0d830b8582ab-kube-api-access-n7dl2\") pod \"placement-7217-account-create-update-zcxd7\" (UID: \"617712c7-90b6-478c-a60a-0d830b8582ab\") " pod="openstack/placement-7217-account-create-update-zcxd7" Dec 09 12:27:16 crc kubenswrapper[4970]: I1209 12:27:16.302016 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7217-account-create-update-zcxd7" Dec 09 12:27:16 crc kubenswrapper[4970]: I1209 12:27:16.399562 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-hmlhf" Dec 09 12:27:16 crc kubenswrapper[4970]: I1209 12:27:16.581818 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Dec 09 12:27:16 crc kubenswrapper[4970]: I1209 12:27:16.981916 4970 patch_prober.go:28] interesting pod/console-598d8f6f5c-75bbn container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.89:8443/health\": dial tcp 10.217.0.89:8443: connect: connection refused" start-of-body= Dec 09 12:27:16 crc kubenswrapper[4970]: I1209 12:27:16.982290 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-598d8f6f5c-75bbn" podUID="a7013e9d-6527-41d3-a311-2f21b4961dfa" containerName="console" probeResult="failure" output="Get \"https://10.217.0.89:8443/health\": dial tcp 10.217.0.89:8443: connect: connection refused" Dec 09 12:27:17 crc kubenswrapper[4970]: I1209 12:27:17.442511 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-gplst" Dec 09 12:27:17 crc kubenswrapper[4970]: I1209 12:27:17.498178 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8hwcd"] Dec 09 12:27:17 crc kubenswrapper[4970]: I1209 12:27:17.499017 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-8hwcd" podUID="95bba560-2a79-402a-ade3-e284f9c2a6e6" containerName="dnsmasq-dns" containerID="cri-o://76b92d99145deda0258815c9079ee715e1fe4e59052e3f76a363608ad200b9bd" gracePeriod=10 Dec 09 12:27:17 crc kubenswrapper[4970]: I1209 12:27:17.510581 4970 generic.go:334] "Generic (PLEG): container finished" podID="7b822b3c-bdfc-4766-b56f-14696c6b34a0" containerID="e4d6ffc5b2f61330d4c9d4965fd2bb6d05068065683065a0f54d016f0c22adc5" exitCode=0 Dec 09 12:27:17 crc kubenswrapper[4970]: I1209 12:27:17.510671 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7b822b3c-bdfc-4766-b56f-14696c6b34a0","Type":"ContainerDied","Data":"e4d6ffc5b2f61330d4c9d4965fd2bb6d05068065683065a0f54d016f0c22adc5"} Dec 09 12:27:17 crc kubenswrapper[4970]: I1209 12:27:17.516142 4970 generic.go:334] "Generic (PLEG): container finished" podID="cd722f79-8e7d-46eb-b8e2-6da28c0dead2" containerID="eea1871b9671e0c01c99cc6b47415fd1fda6332a9d71396fb468911958eeef77" exitCode=0 Dec 09 12:27:17 crc kubenswrapper[4970]: I1209 12:27:17.516406 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cd722f79-8e7d-46eb-b8e2-6da28c0dead2","Type":"ContainerDied","Data":"eea1871b9671e0c01c99cc6b47415fd1fda6332a9d71396fb468911958eeef77"} Dec 09 12:27:17 crc kubenswrapper[4970]: I1209 12:27:17.727088 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-hmlhf"] Dec 09 12:27:17 crc kubenswrapper[4970]: I1209 12:27:17.822098 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-598d8f6f5c-75bbn_a7013e9d-6527-41d3-a311-2f21b4961dfa/console/0.log" Dec 09 12:27:17 crc kubenswrapper[4970]: I1209 12:27:17.823543 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-598d8f6f5c-75bbn" Dec 09 12:27:17 crc kubenswrapper[4970]: I1209 12:27:17.847544 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-ntn5d"] Dec 09 12:27:17 crc kubenswrapper[4970]: E1209 12:27:17.847920 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7013e9d-6527-41d3-a311-2f21b4961dfa" containerName="console" Dec 09 12:27:17 crc kubenswrapper[4970]: I1209 12:27:17.855260 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7013e9d-6527-41d3-a311-2f21b4961dfa" containerName="console" Dec 09 12:27:17 crc kubenswrapper[4970]: I1209 12:27:17.855762 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7013e9d-6527-41d3-a311-2f21b4961dfa" containerName="console" Dec 09 12:27:17 crc kubenswrapper[4970]: I1209 12:27:17.856623 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-ntn5d"] Dec 09 12:27:17 crc kubenswrapper[4970]: I1209 12:27:17.856720 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-ntn5d" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:17.911631 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7217-account-create-update-zcxd7"] Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:17.938256 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a7013e9d-6527-41d3-a311-2f21b4961dfa-console-oauth-config\") pod \"a7013e9d-6527-41d3-a311-2f21b4961dfa\" (UID: \"a7013e9d-6527-41d3-a311-2f21b4961dfa\") " Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:17.938431 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a7013e9d-6527-41d3-a311-2f21b4961dfa-service-ca\") pod \"a7013e9d-6527-41d3-a311-2f21b4961dfa\" (UID: \"a7013e9d-6527-41d3-a311-2f21b4961dfa\") " Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:17.938470 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a7013e9d-6527-41d3-a311-2f21b4961dfa-oauth-serving-cert\") pod \"a7013e9d-6527-41d3-a311-2f21b4961dfa\" (UID: \"a7013e9d-6527-41d3-a311-2f21b4961dfa\") " Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:17.938498 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7013e9d-6527-41d3-a311-2f21b4961dfa-trusted-ca-bundle\") pod \"a7013e9d-6527-41d3-a311-2f21b4961dfa\" (UID: \"a7013e9d-6527-41d3-a311-2f21b4961dfa\") " Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:17.938582 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4mjg\" (UniqueName: \"kubernetes.io/projected/a7013e9d-6527-41d3-a311-2f21b4961dfa-kube-api-access-z4mjg\") pod \"a7013e9d-6527-41d3-a311-2f21b4961dfa\" (UID: \"a7013e9d-6527-41d3-a311-2f21b4961dfa\") " Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:17.938635 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7013e9d-6527-41d3-a311-2f21b4961dfa-console-serving-cert\") pod \"a7013e9d-6527-41d3-a311-2f21b4961dfa\" (UID: \"a7013e9d-6527-41d3-a311-2f21b4961dfa\") " Dec 09 12:27:19 
crc kubenswrapper[4970]: I1209 12:27:17.938668 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a7013e9d-6527-41d3-a311-2f21b4961dfa-console-config\") pod \"a7013e9d-6527-41d3-a311-2f21b4961dfa\" (UID: \"a7013e9d-6527-41d3-a311-2f21b4961dfa\") " Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:17.939325 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhqsn\" (UniqueName: \"kubernetes.io/projected/144957ba-384e-40b0-88d5-17afeaaf3795-kube-api-access-vhqsn\") pod \"mysqld-exporter-openstack-db-create-ntn5d\" (UID: \"144957ba-384e-40b0-88d5-17afeaaf3795\") " pod="openstack/mysqld-exporter-openstack-db-create-ntn5d" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:17.939416 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/144957ba-384e-40b0-88d5-17afeaaf3795-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-ntn5d\" (UID: \"144957ba-384e-40b0-88d5-17afeaaf3795\") " pod="openstack/mysqld-exporter-openstack-db-create-ntn5d" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:17.939592 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7013e9d-6527-41d3-a311-2f21b4961dfa-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "a7013e9d-6527-41d3-a311-2f21b4961dfa" (UID: "a7013e9d-6527-41d3-a311-2f21b4961dfa"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:17.940039 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7013e9d-6527-41d3-a311-2f21b4961dfa-service-ca" (OuterVolumeSpecName: "service-ca") pod "a7013e9d-6527-41d3-a311-2f21b4961dfa" (UID: "a7013e9d-6527-41d3-a311-2f21b4961dfa"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:17.940610 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7013e9d-6527-41d3-a311-2f21b4961dfa-console-config" (OuterVolumeSpecName: "console-config") pod "a7013e9d-6527-41d3-a311-2f21b4961dfa" (UID: "a7013e9d-6527-41d3-a311-2f21b4961dfa"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:17.941607 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7013e9d-6527-41d3-a311-2f21b4961dfa-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "a7013e9d-6527-41d3-a311-2f21b4961dfa" (UID: "a7013e9d-6527-41d3-a311-2f21b4961dfa"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:17.945398 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7013e9d-6527-41d3-a311-2f21b4961dfa-kube-api-access-z4mjg" (OuterVolumeSpecName: "kube-api-access-z4mjg") pod "a7013e9d-6527-41d3-a311-2f21b4961dfa" (UID: "a7013e9d-6527-41d3-a311-2f21b4961dfa"). InnerVolumeSpecName "kube-api-access-z4mjg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:17.945680 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7013e9d-6527-41d3-a311-2f21b4961dfa-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "a7013e9d-6527-41d3-a311-2f21b4961dfa" (UID: "a7013e9d-6527-41d3-a311-2f21b4961dfa"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:17.952399 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7013e9d-6527-41d3-a311-2f21b4961dfa-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "a7013e9d-6527-41d3-a311-2f21b4961dfa" (UID: "a7013e9d-6527-41d3-a311-2f21b4961dfa"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.044186 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhqsn\" (UniqueName: \"kubernetes.io/projected/144957ba-384e-40b0-88d5-17afeaaf3795-kube-api-access-vhqsn\") pod \"mysqld-exporter-openstack-db-create-ntn5d\" (UID: \"144957ba-384e-40b0-88d5-17afeaaf3795\") " pod="openstack/mysqld-exporter-openstack-db-create-ntn5d" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.044300 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/144957ba-384e-40b0-88d5-17afeaaf3795-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-ntn5d\" (UID: \"144957ba-384e-40b0-88d5-17afeaaf3795\") " pod="openstack/mysqld-exporter-openstack-db-create-ntn5d" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.044662 4970 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a7013e9d-6527-41d3-a311-2f21b4961dfa-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.044680 4970 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a7013e9d-6527-41d3-a311-2f21b4961dfa-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.044690 4970 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a7013e9d-6527-41d3-a311-2f21b4961dfa-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.044731 4970 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7013e9d-6527-41d3-a311-2f21b4961dfa-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.044740 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4mjg\" (UniqueName: \"kubernetes.io/projected/a7013e9d-6527-41d3-a311-2f21b4961dfa-kube-api-access-z4mjg\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.044750 4970 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7013e9d-6527-41d3-a311-2f21b4961dfa-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.044757 4970 reconciler_common.go:293] 
"Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a7013e9d-6527-41d3-a311-2f21b4961dfa-console-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.051704 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/144957ba-384e-40b0-88d5-17afeaaf3795-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-ntn5d\" (UID: \"144957ba-384e-40b0-88d5-17afeaaf3795\") " pod="openstack/mysqld-exporter-openstack-db-create-ntn5d" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.068462 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhqsn\" (UniqueName: \"kubernetes.io/projected/144957ba-384e-40b0-88d5-17afeaaf3795-kube-api-access-vhqsn\") pod \"mysqld-exporter-openstack-db-create-ntn5d\" (UID: \"144957ba-384e-40b0-88d5-17afeaaf3795\") " pod="openstack/mysqld-exporter-openstack-db-create-ntn5d" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.084281 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-g9rp9"] Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.086282 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-g9rp9" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.103457 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-g9rp9"] Dec 09 12:27:19 crc kubenswrapper[4970]: W1209 12:27:18.132185 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod617712c7_90b6_478c_a60a_0d830b8582ab.slice/crio-ce57569ae56782267ca5d56a8c8672c23e329c51497e74d9aa4edae7ee6b6a4f WatchSource:0}: Error finding container ce57569ae56782267ca5d56a8c8672c23e329c51497e74d9aa4edae7ee6b6a4f: Status 404 returned error can't find the container with id ce57569ae56782267ca5d56a8c8672c23e329c51497e74d9aa4edae7ee6b6a4f Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.147237 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b37b8e53-92e0-47e4-a1c4-88e38ee775ff-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-g9rp9\" (UID: \"b37b8e53-92e0-47e4-a1c4-88e38ee775ff\") " pod="openstack/dnsmasq-dns-698758b865-g9rp9" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.147377 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b37b8e53-92e0-47e4-a1c4-88e38ee775ff-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-g9rp9\" (UID: \"b37b8e53-92e0-47e4-a1c4-88e38ee775ff\") " pod="openstack/dnsmasq-dns-698758b865-g9rp9" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.147401 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b37b8e53-92e0-47e4-a1c4-88e38ee775ff-dns-svc\") pod \"dnsmasq-dns-698758b865-g9rp9\" (UID: \"b37b8e53-92e0-47e4-a1c4-88e38ee775ff\") " pod="openstack/dnsmasq-dns-698758b865-g9rp9" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.147463 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7hxc\" (UniqueName: 
\"kubernetes.io/projected/b37b8e53-92e0-47e4-a1c4-88e38ee775ff-kube-api-access-z7hxc\") pod \"dnsmasq-dns-698758b865-g9rp9\" (UID: \"b37b8e53-92e0-47e4-a1c4-88e38ee775ff\") " pod="openstack/dnsmasq-dns-698758b865-g9rp9" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.147506 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b37b8e53-92e0-47e4-a1c4-88e38ee775ff-config\") pod \"dnsmasq-dns-698758b865-g9rp9\" (UID: \"b37b8e53-92e0-47e4-a1c4-88e38ee775ff\") " pod="openstack/dnsmasq-dns-698758b865-g9rp9" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.184152 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-9c61-account-create-update-q6929"] Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.185668 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-9c61-account-create-update-q6929" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.187994 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.198723 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-9c61-account-create-update-q6929"] Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.219520 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-ntn5d" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.249503 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b37b8e53-92e0-47e4-a1c4-88e38ee775ff-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-g9rp9\" (UID: \"b37b8e53-92e0-47e4-a1c4-88e38ee775ff\") " pod="openstack/dnsmasq-dns-698758b865-g9rp9" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.249558 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sm5n\" (UniqueName: \"kubernetes.io/projected/7006e6ae-7748-4192-9001-3c29b208e763-kube-api-access-2sm5n\") pod \"mysqld-exporter-9c61-account-create-update-q6929\" (UID: \"7006e6ae-7748-4192-9001-3c29b208e763\") " pod="openstack/mysqld-exporter-9c61-account-create-update-q6929" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.249663 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b37b8e53-92e0-47e4-a1c4-88e38ee775ff-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-g9rp9\" (UID: \"b37b8e53-92e0-47e4-a1c4-88e38ee775ff\") " pod="openstack/dnsmasq-dns-698758b865-g9rp9" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.249693 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b37b8e53-92e0-47e4-a1c4-88e38ee775ff-dns-svc\") pod \"dnsmasq-dns-698758b865-g9rp9\" (UID: \"b37b8e53-92e0-47e4-a1c4-88e38ee775ff\") " pod="openstack/dnsmasq-dns-698758b865-g9rp9" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.249747 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7hxc\" (UniqueName: \"kubernetes.io/projected/b37b8e53-92e0-47e4-a1c4-88e38ee775ff-kube-api-access-z7hxc\") pod \"dnsmasq-dns-698758b865-g9rp9\" (UID: \"b37b8e53-92e0-47e4-a1c4-88e38ee775ff\") " 
pod="openstack/dnsmasq-dns-698758b865-g9rp9" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.249781 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b37b8e53-92e0-47e4-a1c4-88e38ee775ff-config\") pod \"dnsmasq-dns-698758b865-g9rp9\" (UID: \"b37b8e53-92e0-47e4-a1c4-88e38ee775ff\") " pod="openstack/dnsmasq-dns-698758b865-g9rp9" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.249836 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7006e6ae-7748-4192-9001-3c29b208e763-operator-scripts\") pod \"mysqld-exporter-9c61-account-create-update-q6929\" (UID: \"7006e6ae-7748-4192-9001-3c29b208e763\") " pod="openstack/mysqld-exporter-9c61-account-create-update-q6929" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.250585 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b37b8e53-92e0-47e4-a1c4-88e38ee775ff-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-g9rp9\" (UID: \"b37b8e53-92e0-47e4-a1c4-88e38ee775ff\") " pod="openstack/dnsmasq-dns-698758b865-g9rp9" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.251078 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b37b8e53-92e0-47e4-a1c4-88e38ee775ff-dns-svc\") pod \"dnsmasq-dns-698758b865-g9rp9\" (UID: \"b37b8e53-92e0-47e4-a1c4-88e38ee775ff\") " pod="openstack/dnsmasq-dns-698758b865-g9rp9" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.251180 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b37b8e53-92e0-47e4-a1c4-88e38ee775ff-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-g9rp9\" (UID: \"b37b8e53-92e0-47e4-a1c4-88e38ee775ff\") " pod="openstack/dnsmasq-dns-698758b865-g9rp9" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.254652 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b37b8e53-92e0-47e4-a1c4-88e38ee775ff-config\") pod \"dnsmasq-dns-698758b865-g9rp9\" (UID: \"b37b8e53-92e0-47e4-a1c4-88e38ee775ff\") " pod="openstack/dnsmasq-dns-698758b865-g9rp9" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.280435 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7hxc\" (UniqueName: \"kubernetes.io/projected/b37b8e53-92e0-47e4-a1c4-88e38ee775ff-kube-api-access-z7hxc\") pod \"dnsmasq-dns-698758b865-g9rp9\" (UID: \"b37b8e53-92e0-47e4-a1c4-88e38ee775ff\") " pod="openstack/dnsmasq-dns-698758b865-g9rp9" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.352224 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7006e6ae-7748-4192-9001-3c29b208e763-operator-scripts\") pod \"mysqld-exporter-9c61-account-create-update-q6929\" (UID: \"7006e6ae-7748-4192-9001-3c29b208e763\") " pod="openstack/mysqld-exporter-9c61-account-create-update-q6929" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.352594 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sm5n\" (UniqueName: \"kubernetes.io/projected/7006e6ae-7748-4192-9001-3c29b208e763-kube-api-access-2sm5n\") pod \"mysqld-exporter-9c61-account-create-update-q6929\" (UID: 
\"7006e6ae-7748-4192-9001-3c29b208e763\") " pod="openstack/mysqld-exporter-9c61-account-create-update-q6929" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.353822 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7006e6ae-7748-4192-9001-3c29b208e763-operator-scripts\") pod \"mysqld-exporter-9c61-account-create-update-q6929\" (UID: \"7006e6ae-7748-4192-9001-3c29b208e763\") " pod="openstack/mysqld-exporter-9c61-account-create-update-q6929" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.369949 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sm5n\" (UniqueName: \"kubernetes.io/projected/7006e6ae-7748-4192-9001-3c29b208e763-kube-api-access-2sm5n\") pod \"mysqld-exporter-9c61-account-create-update-q6929\" (UID: \"7006e6ae-7748-4192-9001-3c29b208e763\") " pod="openstack/mysqld-exporter-9c61-account-create-update-q6929" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.433348 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-g9rp9" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.528772 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-598d8f6f5c-75bbn_a7013e9d-6527-41d3-a311-2f21b4961dfa/console/0.log" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.528903 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-598d8f6f5c-75bbn" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.528901 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-598d8f6f5c-75bbn" event={"ID":"a7013e9d-6527-41d3-a311-2f21b4961dfa","Type":"ContainerDied","Data":"2febd0765eeed11c767d12f8a91e859eae7fac3940ad113f56d5594163e93f10"} Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.529827 4970 scope.go:117] "RemoveContainer" containerID="27df706cc720744990802816011bf9ab2c90f3299d7f2bfe7080a5342e6295d7" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.531261 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-9c61-account-create-update-q6929" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.533842 4970 generic.go:334] "Generic (PLEG): container finished" podID="95bba560-2a79-402a-ade3-e284f9c2a6e6" containerID="76b92d99145deda0258815c9079ee715e1fe4e59052e3f76a363608ad200b9bd" exitCode=0 Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.533936 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-8hwcd" event={"ID":"95bba560-2a79-402a-ade3-e284f9c2a6e6","Type":"ContainerDied","Data":"76b92d99145deda0258815c9079ee715e1fe4e59052e3f76a363608ad200b9bd"} Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.535541 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7217-account-create-update-zcxd7" event={"ID":"617712c7-90b6-478c-a60a-0d830b8582ab","Type":"ContainerStarted","Data":"ce57569ae56782267ca5d56a8c8672c23e329c51497e74d9aa4edae7ee6b6a4f"} Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.537078 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hmlhf" event={"ID":"252ca514-15c5-480d-a81e-d8230171c857","Type":"ContainerStarted","Data":"c990dc54b06095986f5a208ecd70a8dfecbea33a933ffe1f5c34e1719f1b80fa"} Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.571006 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-598d8f6f5c-75bbn"] Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:18.587009 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-598d8f6f5c-75bbn"] Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.152753 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.166920 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.174021 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.174195 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.174264 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.174292 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-vhvbd" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.194846 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.270832 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/bee29a58-7867-4543-bb4e-c19528625b1a-lock\") pod \"swift-storage-0\" (UID: \"bee29a58-7867-4543-bb4e-c19528625b1a\") " pod="openstack/swift-storage-0" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.270962 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"bee29a58-7867-4543-bb4e-c19528625b1a\") " pod="openstack/swift-storage-0" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.271001 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrs6k\" (UniqueName: \"kubernetes.io/projected/bee29a58-7867-4543-bb4e-c19528625b1a-kube-api-access-nrs6k\") pod \"swift-storage-0\" (UID: \"bee29a58-7867-4543-bb4e-c19528625b1a\") " pod="openstack/swift-storage-0" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.271062 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bee29a58-7867-4543-bb4e-c19528625b1a-etc-swift\") pod \"swift-storage-0\" (UID: \"bee29a58-7867-4543-bb4e-c19528625b1a\") " pod="openstack/swift-storage-0" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.271121 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/bee29a58-7867-4543-bb4e-c19528625b1a-cache\") pod \"swift-storage-0\" (UID: \"bee29a58-7867-4543-bb4e-c19528625b1a\") " pod="openstack/swift-storage-0" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.295572 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8hwcd" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.373628 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bee29a58-7867-4543-bb4e-c19528625b1a-etc-swift\") pod \"swift-storage-0\" (UID: \"bee29a58-7867-4543-bb4e-c19528625b1a\") " pod="openstack/swift-storage-0" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.373712 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/bee29a58-7867-4543-bb4e-c19528625b1a-cache\") pod \"swift-storage-0\" (UID: \"bee29a58-7867-4543-bb4e-c19528625b1a\") " pod="openstack/swift-storage-0" Dec 09 12:27:19 crc kubenswrapper[4970]: E1209 12:27:19.373831 4970 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 09 12:27:19 crc kubenswrapper[4970]: E1209 12:27:19.373858 4970 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 09 12:27:19 crc kubenswrapper[4970]: E1209 12:27:19.373913 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bee29a58-7867-4543-bb4e-c19528625b1a-etc-swift podName:bee29a58-7867-4543-bb4e-c19528625b1a nodeName:}" failed. No retries permitted until 2025-12-09 12:27:19.873890913 +0000 UTC m=+1252.434371964 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/bee29a58-7867-4543-bb4e-c19528625b1a-etc-swift") pod "swift-storage-0" (UID: "bee29a58-7867-4543-bb4e-c19528625b1a") : configmap "swift-ring-files" not found Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.373843 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/bee29a58-7867-4543-bb4e-c19528625b1a-lock\") pod \"swift-storage-0\" (UID: \"bee29a58-7867-4543-bb4e-c19528625b1a\") " pod="openstack/swift-storage-0" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.373989 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"bee29a58-7867-4543-bb4e-c19528625b1a\") " pod="openstack/swift-storage-0" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.374012 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrs6k\" (UniqueName: \"kubernetes.io/projected/bee29a58-7867-4543-bb4e-c19528625b1a-kube-api-access-nrs6k\") pod \"swift-storage-0\" (UID: \"bee29a58-7867-4543-bb4e-c19528625b1a\") " pod="openstack/swift-storage-0" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.374341 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/bee29a58-7867-4543-bb4e-c19528625b1a-cache\") pod \"swift-storage-0\" (UID: \"bee29a58-7867-4543-bb4e-c19528625b1a\") " pod="openstack/swift-storage-0" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.374511 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"bee29a58-7867-4543-bb4e-c19528625b1a\") device mount path \"/mnt/openstack/pv08\"" 
pod="openstack/swift-storage-0" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.378081 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/bee29a58-7867-4543-bb4e-c19528625b1a-lock\") pod \"swift-storage-0\" (UID: \"bee29a58-7867-4543-bb4e-c19528625b1a\") " pod="openstack/swift-storage-0" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.426383 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrs6k\" (UniqueName: \"kubernetes.io/projected/bee29a58-7867-4543-bb4e-c19528625b1a-kube-api-access-nrs6k\") pod \"swift-storage-0\" (UID: \"bee29a58-7867-4543-bb4e-c19528625b1a\") " pod="openstack/swift-storage-0" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.438753 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"swift-storage-0\" (UID: \"bee29a58-7867-4543-bb4e-c19528625b1a\") " pod="openstack/swift-storage-0" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.475010 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95bba560-2a79-402a-ade3-e284f9c2a6e6-config\") pod \"95bba560-2a79-402a-ade3-e284f9c2a6e6\" (UID: \"95bba560-2a79-402a-ade3-e284f9c2a6e6\") " Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.475187 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95bba560-2a79-402a-ade3-e284f9c2a6e6-dns-svc\") pod \"95bba560-2a79-402a-ade3-e284f9c2a6e6\" (UID: \"95bba560-2a79-402a-ade3-e284f9c2a6e6\") " Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.475288 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8n96g\" (UniqueName: \"kubernetes.io/projected/95bba560-2a79-402a-ade3-e284f9c2a6e6-kube-api-access-8n96g\") pod \"95bba560-2a79-402a-ade3-e284f9c2a6e6\" (UID: \"95bba560-2a79-402a-ade3-e284f9c2a6e6\") " Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.478416 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95bba560-2a79-402a-ade3-e284f9c2a6e6-kube-api-access-8n96g" (OuterVolumeSpecName: "kube-api-access-8n96g") pod "95bba560-2a79-402a-ade3-e284f9c2a6e6" (UID: "95bba560-2a79-402a-ade3-e284f9c2a6e6"). InnerVolumeSpecName "kube-api-access-8n96g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.541279 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95bba560-2a79-402a-ade3-e284f9c2a6e6-config" (OuterVolumeSpecName: "config") pod "95bba560-2a79-402a-ade3-e284f9c2a6e6" (UID: "95bba560-2a79-402a-ade3-e284f9c2a6e6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.553906 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-8hwcd" event={"ID":"95bba560-2a79-402a-ade3-e284f9c2a6e6","Type":"ContainerDied","Data":"8b958f237f3fc9e450065dd43dda198fb68a7719beae3cc5024898da6adeeda5"} Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.554009 4970 scope.go:117] "RemoveContainer" containerID="76b92d99145deda0258815c9079ee715e1fe4e59052e3f76a363608ad200b9bd" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.554168 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-8hwcd" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.559651 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95bba560-2a79-402a-ade3-e284f9c2a6e6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "95bba560-2a79-402a-ade3-e284f9c2a6e6" (UID: "95bba560-2a79-402a-ade3-e284f9c2a6e6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.562764 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7b822b3c-bdfc-4766-b56f-14696c6b34a0","Type":"ContainerStarted","Data":"5846a44c901d18202a605398d2407329795d948aa8483531310495fea5121a5c"} Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.563991 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.566270 4970 generic.go:334] "Generic (PLEG): container finished" podID="30b05330-faf4-44e1-afee-1c750e234a37" containerID="0ef2d2fe3413d0ff80e11fbd3b924116f68b66f2ae31843adce773e75c209994" exitCode=0 Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.566327 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-d9vtq" event={"ID":"30b05330-faf4-44e1-afee-1c750e234a37","Type":"ContainerDied","Data":"0ef2d2fe3413d0ff80e11fbd3b924116f68b66f2ae31843adce773e75c209994"} Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.579540 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95bba560-2a79-402a-ade3-e284f9c2a6e6-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.579597 4970 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95bba560-2a79-402a-ade3-e284f9c2a6e6-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.579668 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8n96g\" (UniqueName: \"kubernetes.io/projected/95bba560-2a79-402a-ade3-e284f9c2a6e6-kube-api-access-8n96g\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.596895 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=39.722310785 podStartE2EDuration="58.596870889s" podCreationTimestamp="2025-12-09 12:26:21 +0000 UTC" firstStartedPulling="2025-12-09 12:26:23.324433455 +0000 UTC m=+1195.884914506" lastFinishedPulling="2025-12-09 12:26:42.198993559 +0000 UTC m=+1214.759474610" observedRunningTime="2025-12-09 12:27:19.587314161 +0000 UTC m=+1252.147795212" watchObservedRunningTime="2025-12-09 
12:27:19.596870889 +0000 UTC m=+1252.157351940"
Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.824973 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7013e9d-6527-41d3-a311-2f21b4961dfa" path="/var/lib/kubelet/pods/a7013e9d-6527-41d3-a311-2f21b4961dfa/volumes"
Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.886727 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bee29a58-7867-4543-bb4e-c19528625b1a-etc-swift\") pod \"swift-storage-0\" (UID: \"bee29a58-7867-4543-bb4e-c19528625b1a\") " pod="openstack/swift-storage-0"
Dec 09 12:27:19 crc kubenswrapper[4970]: E1209 12:27:19.886937 4970 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Dec 09 12:27:19 crc kubenswrapper[4970]: E1209 12:27:19.887150 4970 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Dec 09 12:27:19 crc kubenswrapper[4970]: E1209 12:27:19.887206 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bee29a58-7867-4543-bb4e-c19528625b1a-etc-swift podName:bee29a58-7867-4543-bb4e-c19528625b1a nodeName:}" failed. No retries permitted until 2025-12-09 12:27:20.887189809 +0000 UTC m=+1253.447670860 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/bee29a58-7867-4543-bb4e-c19528625b1a-etc-swift") pod "swift-storage-0" (UID: "bee29a58-7867-4543-bb4e-c19528625b1a") : configmap "swift-ring-files" not found
Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.893509 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8hwcd"]
Dec 09 12:27:19 crc kubenswrapper[4970]: I1209 12:27:19.901427 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-8hwcd"]
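Every etc-swift failure in this stretch has the same root cause: etc-swift is a projected volume that sources the swift-ring-files ConfigMap, which does not exist yet, so each MountVolume.SetUp attempt fails and nestedpendingoperations.go schedules the next attempt with a doubling delay (durationBeforeRetry 500ms, then 1s, then 2s and 4s below). A minimal sketch of that retry shape, assuming an illustrative mountWithBackoff helper and constants rather than kubelet's actual configuration:

```go
// Doubling-backoff retry sketch matching the 500ms -> 1s -> 2s -> 4s
// delays in the log. Helper name and constants are illustrative.
package main

import (
	"errors"
	"fmt"
	"time"
)

func mountWithBackoff(setUp func() error, initial, maxDelay time.Duration, attempts int) error {
	delay := initial
	for i := 0; i < attempts; i++ {
		err := setUp()
		if err == nil {
			return nil
		}
		fmt.Printf("SetUp failed (%v); no retries permitted for %v\n", err, delay)
		time.Sleep(delay)
		delay *= 2 // 500ms -> 1s -> 2s -> 4s, as in the log
		if delay > maxDelay {
			delay = maxDelay // cap so the wait never grows unbounded
		}
	}
	return errors.New("still failing after all attempts")
}

func main() {
	err := mountWithBackoff(func() error {
		return errors.New(`configmap "swift-ring-files" not found`)
	}, 500*time.Millisecond, 2*time.Minute, 4)
	fmt.Println(err)
}
```

The missing ConfigMap is presumably published by the swift-ring-rebalance job added at 12:27:23 below, after which a later retry of this pending operation would succeed.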
Dec 09 12:27:20 crc kubenswrapper[4970]: I1209 12:27:20.909031 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bee29a58-7867-4543-bb4e-c19528625b1a-etc-swift\") pod \"swift-storage-0\" (UID: \"bee29a58-7867-4543-bb4e-c19528625b1a\") " pod="openstack/swift-storage-0"
Dec 09 12:27:20 crc kubenswrapper[4970]: E1209 12:27:20.910310 4970 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Dec 09 12:27:20 crc kubenswrapper[4970]: E1209 12:27:20.910415 4970 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Dec 09 12:27:20 crc kubenswrapper[4970]: E1209 12:27:20.910520 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bee29a58-7867-4543-bb4e-c19528625b1a-etc-swift podName:bee29a58-7867-4543-bb4e-c19528625b1a nodeName:}" failed. No retries permitted until 2025-12-09 12:27:22.910502632 +0000 UTC m=+1255.470983683 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/bee29a58-7867-4543-bb4e-c19528625b1a-etc-swift") pod "swift-storage-0" (UID: "bee29a58-7867-4543-bb4e-c19528625b1a") : configmap "swift-ring-files" not found
Dec 09 12:27:21 crc kubenswrapper[4970]: I1209 12:27:21.159030 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-c6rs9"]
Dec 09 12:27:21 crc kubenswrapper[4970]: E1209 12:27:21.159800 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95bba560-2a79-402a-ade3-e284f9c2a6e6" containerName="dnsmasq-dns"
Dec 09 12:27:21 crc kubenswrapper[4970]: I1209 12:27:21.159928 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="95bba560-2a79-402a-ade3-e284f9c2a6e6" containerName="dnsmasq-dns"
Dec 09 12:27:21 crc kubenswrapper[4970]: E1209 12:27:21.160022 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95bba560-2a79-402a-ade3-e284f9c2a6e6" containerName="init"
Dec 09 12:27:21 crc kubenswrapper[4970]: I1209 12:27:21.160093 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="95bba560-2a79-402a-ade3-e284f9c2a6e6" containerName="init"
Dec 09 12:27:21 crc kubenswrapper[4970]: I1209 12:27:21.160416 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="95bba560-2a79-402a-ade3-e284f9c2a6e6" containerName="dnsmasq-dns"
Dec 09 12:27:21 crc kubenswrapper[4970]: I1209 12:27:21.161349 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-c6rs9"
Dec 09 12:27:21 crc kubenswrapper[4970]: I1209 12:27:21.190328 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-c6rs9"]
Dec 09 12:27:21 crc kubenswrapper[4970]: I1209 12:27:21.215083 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a787c3f-0ec1-404c-abc0-c57508c7e5b9-operator-scripts\") pod \"glance-db-create-c6rs9\" (UID: \"8a787c3f-0ec1-404c-abc0-c57508c7e5b9\") " pod="openstack/glance-db-create-c6rs9"
Dec 09 12:27:21 crc kubenswrapper[4970]: I1209 12:27:21.215218 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49bvz\" (UniqueName: \"kubernetes.io/projected/8a787c3f-0ec1-404c-abc0-c57508c7e5b9-kube-api-access-49bvz\") pod \"glance-db-create-c6rs9\" (UID: \"8a787c3f-0ec1-404c-abc0-c57508c7e5b9\") " pod="openstack/glance-db-create-c6rs9"
Dec 09 12:27:21 crc kubenswrapper[4970]: I1209 12:27:21.264377 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-0bbf-account-create-update-n9d8q"]
Dec 09 12:27:21 crc kubenswrapper[4970]: I1209 12:27:21.266211 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-0bbf-account-create-update-n9d8q" Dec 09 12:27:21 crc kubenswrapper[4970]: I1209 12:27:21.268297 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Dec 09 12:27:21 crc kubenswrapper[4970]: I1209 12:27:21.273640 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0bbf-account-create-update-n9d8q"] Dec 09 12:27:21 crc kubenswrapper[4970]: I1209 12:27:21.316781 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f65g2\" (UniqueName: \"kubernetes.io/projected/816b8c67-0a22-47ae-a457-28928814a337-kube-api-access-f65g2\") pod \"glance-0bbf-account-create-update-n9d8q\" (UID: \"816b8c67-0a22-47ae-a457-28928814a337\") " pod="openstack/glance-0bbf-account-create-update-n9d8q" Dec 09 12:27:21 crc kubenswrapper[4970]: I1209 12:27:21.316837 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a787c3f-0ec1-404c-abc0-c57508c7e5b9-operator-scripts\") pod \"glance-db-create-c6rs9\" (UID: \"8a787c3f-0ec1-404c-abc0-c57508c7e5b9\") " pod="openstack/glance-db-create-c6rs9" Dec 09 12:27:21 crc kubenswrapper[4970]: I1209 12:27:21.316889 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49bvz\" (UniqueName: \"kubernetes.io/projected/8a787c3f-0ec1-404c-abc0-c57508c7e5b9-kube-api-access-49bvz\") pod \"glance-db-create-c6rs9\" (UID: \"8a787c3f-0ec1-404c-abc0-c57508c7e5b9\") " pod="openstack/glance-db-create-c6rs9" Dec 09 12:27:21 crc kubenswrapper[4970]: I1209 12:27:21.316959 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/816b8c67-0a22-47ae-a457-28928814a337-operator-scripts\") pod \"glance-0bbf-account-create-update-n9d8q\" (UID: \"816b8c67-0a22-47ae-a457-28928814a337\") " pod="openstack/glance-0bbf-account-create-update-n9d8q" Dec 09 12:27:21 crc kubenswrapper[4970]: I1209 12:27:21.317729 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a787c3f-0ec1-404c-abc0-c57508c7e5b9-operator-scripts\") pod \"glance-db-create-c6rs9\" (UID: \"8a787c3f-0ec1-404c-abc0-c57508c7e5b9\") " pod="openstack/glance-db-create-c6rs9" Dec 09 12:27:21 crc kubenswrapper[4970]: I1209 12:27:21.336507 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49bvz\" (UniqueName: \"kubernetes.io/projected/8a787c3f-0ec1-404c-abc0-c57508c7e5b9-kube-api-access-49bvz\") pod \"glance-db-create-c6rs9\" (UID: \"8a787c3f-0ec1-404c-abc0-c57508c7e5b9\") " pod="openstack/glance-db-create-c6rs9" Dec 09 12:27:21 crc kubenswrapper[4970]: I1209 12:27:21.418997 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f65g2\" (UniqueName: \"kubernetes.io/projected/816b8c67-0a22-47ae-a457-28928814a337-kube-api-access-f65g2\") pod \"glance-0bbf-account-create-update-n9d8q\" (UID: \"816b8c67-0a22-47ae-a457-28928814a337\") " pod="openstack/glance-0bbf-account-create-update-n9d8q" Dec 09 12:27:21 crc kubenswrapper[4970]: I1209 12:27:21.419121 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/816b8c67-0a22-47ae-a457-28928814a337-operator-scripts\") pod 
\"glance-0bbf-account-create-update-n9d8q\" (UID: \"816b8c67-0a22-47ae-a457-28928814a337\") " pod="openstack/glance-0bbf-account-create-update-n9d8q" Dec 09 12:27:21 crc kubenswrapper[4970]: I1209 12:27:21.419745 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/816b8c67-0a22-47ae-a457-28928814a337-operator-scripts\") pod \"glance-0bbf-account-create-update-n9d8q\" (UID: \"816b8c67-0a22-47ae-a457-28928814a337\") " pod="openstack/glance-0bbf-account-create-update-n9d8q" Dec 09 12:27:21 crc kubenswrapper[4970]: I1209 12:27:21.436933 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f65g2\" (UniqueName: \"kubernetes.io/projected/816b8c67-0a22-47ae-a457-28928814a337-kube-api-access-f65g2\") pod \"glance-0bbf-account-create-update-n9d8q\" (UID: \"816b8c67-0a22-47ae-a457-28928814a337\") " pod="openstack/glance-0bbf-account-create-update-n9d8q" Dec 09 12:27:21 crc kubenswrapper[4970]: I1209 12:27:21.487302 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-c6rs9" Dec 09 12:27:21 crc kubenswrapper[4970]: I1209 12:27:21.587087 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0bbf-account-create-update-n9d8q" Dec 09 12:27:21 crc kubenswrapper[4970]: I1209 12:27:21.824163 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95bba560-2a79-402a-ade3-e284f9c2a6e6" path="/var/lib/kubelet/pods/95bba560-2a79-402a-ade3-e284f9c2a6e6/volumes" Dec 09 12:27:22 crc kubenswrapper[4970]: I1209 12:27:22.947710 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bee29a58-7867-4543-bb4e-c19528625b1a-etc-swift\") pod \"swift-storage-0\" (UID: \"bee29a58-7867-4543-bb4e-c19528625b1a\") " pod="openstack/swift-storage-0" Dec 09 12:27:22 crc kubenswrapper[4970]: E1209 12:27:22.947957 4970 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 09 12:27:22 crc kubenswrapper[4970]: E1209 12:27:22.948135 4970 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 09 12:27:22 crc kubenswrapper[4970]: E1209 12:27:22.948212 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bee29a58-7867-4543-bb4e-c19528625b1a-etc-swift podName:bee29a58-7867-4543-bb4e-c19528625b1a nodeName:}" failed. No retries permitted until 2025-12-09 12:27:26.948192015 +0000 UTC m=+1259.508673076 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/bee29a58-7867-4543-bb4e-c19528625b1a-etc-swift") pod "swift-storage-0" (UID: "bee29a58-7867-4543-bb4e-c19528625b1a") : configmap "swift-ring-files" not found Dec 09 12:27:23 crc kubenswrapper[4970]: I1209 12:27:23.141828 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-kvgg9"] Dec 09 12:27:23 crc kubenswrapper[4970]: I1209 12:27:23.143322 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-kvgg9" Dec 09 12:27:23 crc kubenswrapper[4970]: I1209 12:27:23.146568 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Dec 09 12:27:23 crc kubenswrapper[4970]: I1209 12:27:23.146595 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Dec 09 12:27:23 crc kubenswrapper[4970]: I1209 12:27:23.146605 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 09 12:27:23 crc kubenswrapper[4970]: I1209 12:27:23.199012 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-kvgg9"] Dec 09 12:27:23 crc kubenswrapper[4970]: I1209 12:27:23.260212 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5849599a-f7e9-4ea2-982c-5388be3d7e8d-ring-data-devices\") pod \"swift-ring-rebalance-kvgg9\" (UID: \"5849599a-f7e9-4ea2-982c-5388be3d7e8d\") " pod="openstack/swift-ring-rebalance-kvgg9" Dec 09 12:27:23 crc kubenswrapper[4970]: I1209 12:27:23.260616 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcp8s\" (UniqueName: \"kubernetes.io/projected/5849599a-f7e9-4ea2-982c-5388be3d7e8d-kube-api-access-fcp8s\") pod \"swift-ring-rebalance-kvgg9\" (UID: \"5849599a-f7e9-4ea2-982c-5388be3d7e8d\") " pod="openstack/swift-ring-rebalance-kvgg9" Dec 09 12:27:23 crc kubenswrapper[4970]: I1209 12:27:23.260738 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5849599a-f7e9-4ea2-982c-5388be3d7e8d-combined-ca-bundle\") pod \"swift-ring-rebalance-kvgg9\" (UID: \"5849599a-f7e9-4ea2-982c-5388be3d7e8d\") " pod="openstack/swift-ring-rebalance-kvgg9" Dec 09 12:27:23 crc kubenswrapper[4970]: I1209 12:27:23.260972 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5849599a-f7e9-4ea2-982c-5388be3d7e8d-scripts\") pod \"swift-ring-rebalance-kvgg9\" (UID: \"5849599a-f7e9-4ea2-982c-5388be3d7e8d\") " pod="openstack/swift-ring-rebalance-kvgg9" Dec 09 12:27:23 crc kubenswrapper[4970]: I1209 12:27:23.261035 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5849599a-f7e9-4ea2-982c-5388be3d7e8d-etc-swift\") pod \"swift-ring-rebalance-kvgg9\" (UID: \"5849599a-f7e9-4ea2-982c-5388be3d7e8d\") " pod="openstack/swift-ring-rebalance-kvgg9" Dec 09 12:27:23 crc kubenswrapper[4970]: I1209 12:27:23.261061 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5849599a-f7e9-4ea2-982c-5388be3d7e8d-swiftconf\") pod \"swift-ring-rebalance-kvgg9\" (UID: \"5849599a-f7e9-4ea2-982c-5388be3d7e8d\") " pod="openstack/swift-ring-rebalance-kvgg9" Dec 09 12:27:23 crc kubenswrapper[4970]: I1209 12:27:23.261085 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5849599a-f7e9-4ea2-982c-5388be3d7e8d-dispersionconf\") pod \"swift-ring-rebalance-kvgg9\" (UID: \"5849599a-f7e9-4ea2-982c-5388be3d7e8d\") " pod="openstack/swift-ring-rebalance-kvgg9" Dec 09 
12:27:23 crc kubenswrapper[4970]: I1209 12:27:23.348823 4970 scope.go:117] "RemoveContainer" containerID="3a5a7acd576f01d2bb8419b9741af4b049c8622813081b4eccd4ac763f80981d" Dec 09 12:27:23 crc kubenswrapper[4970]: I1209 12:27:23.363642 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5849599a-f7e9-4ea2-982c-5388be3d7e8d-scripts\") pod \"swift-ring-rebalance-kvgg9\" (UID: \"5849599a-f7e9-4ea2-982c-5388be3d7e8d\") " pod="openstack/swift-ring-rebalance-kvgg9" Dec 09 12:27:23 crc kubenswrapper[4970]: I1209 12:27:23.363696 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5849599a-f7e9-4ea2-982c-5388be3d7e8d-etc-swift\") pod \"swift-ring-rebalance-kvgg9\" (UID: \"5849599a-f7e9-4ea2-982c-5388be3d7e8d\") " pod="openstack/swift-ring-rebalance-kvgg9" Dec 09 12:27:23 crc kubenswrapper[4970]: I1209 12:27:23.363719 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5849599a-f7e9-4ea2-982c-5388be3d7e8d-swiftconf\") pod \"swift-ring-rebalance-kvgg9\" (UID: \"5849599a-f7e9-4ea2-982c-5388be3d7e8d\") " pod="openstack/swift-ring-rebalance-kvgg9" Dec 09 12:27:23 crc kubenswrapper[4970]: I1209 12:27:23.363740 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5849599a-f7e9-4ea2-982c-5388be3d7e8d-dispersionconf\") pod \"swift-ring-rebalance-kvgg9\" (UID: \"5849599a-f7e9-4ea2-982c-5388be3d7e8d\") " pod="openstack/swift-ring-rebalance-kvgg9" Dec 09 12:27:23 crc kubenswrapper[4970]: I1209 12:27:23.363794 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5849599a-f7e9-4ea2-982c-5388be3d7e8d-ring-data-devices\") pod \"swift-ring-rebalance-kvgg9\" (UID: \"5849599a-f7e9-4ea2-982c-5388be3d7e8d\") " pod="openstack/swift-ring-rebalance-kvgg9" Dec 09 12:27:23 crc kubenswrapper[4970]: I1209 12:27:23.363814 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcp8s\" (UniqueName: \"kubernetes.io/projected/5849599a-f7e9-4ea2-982c-5388be3d7e8d-kube-api-access-fcp8s\") pod \"swift-ring-rebalance-kvgg9\" (UID: \"5849599a-f7e9-4ea2-982c-5388be3d7e8d\") " pod="openstack/swift-ring-rebalance-kvgg9" Dec 09 12:27:23 crc kubenswrapper[4970]: I1209 12:27:23.363864 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5849599a-f7e9-4ea2-982c-5388be3d7e8d-combined-ca-bundle\") pod \"swift-ring-rebalance-kvgg9\" (UID: \"5849599a-f7e9-4ea2-982c-5388be3d7e8d\") " pod="openstack/swift-ring-rebalance-kvgg9" Dec 09 12:27:23 crc kubenswrapper[4970]: I1209 12:27:23.367352 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5849599a-f7e9-4ea2-982c-5388be3d7e8d-etc-swift\") pod \"swift-ring-rebalance-kvgg9\" (UID: \"5849599a-f7e9-4ea2-982c-5388be3d7e8d\") " pod="openstack/swift-ring-rebalance-kvgg9" Dec 09 12:27:23 crc kubenswrapper[4970]: I1209 12:27:23.367378 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5849599a-f7e9-4ea2-982c-5388be3d7e8d-scripts\") pod \"swift-ring-rebalance-kvgg9\" (UID: \"5849599a-f7e9-4ea2-982c-5388be3d7e8d\") " 
pod="openstack/swift-ring-rebalance-kvgg9" Dec 09 12:27:23 crc kubenswrapper[4970]: I1209 12:27:23.367590 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5849599a-f7e9-4ea2-982c-5388be3d7e8d-ring-data-devices\") pod \"swift-ring-rebalance-kvgg9\" (UID: \"5849599a-f7e9-4ea2-982c-5388be3d7e8d\") " pod="openstack/swift-ring-rebalance-kvgg9" Dec 09 12:27:23 crc kubenswrapper[4970]: I1209 12:27:23.372544 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5849599a-f7e9-4ea2-982c-5388be3d7e8d-combined-ca-bundle\") pod \"swift-ring-rebalance-kvgg9\" (UID: \"5849599a-f7e9-4ea2-982c-5388be3d7e8d\") " pod="openstack/swift-ring-rebalance-kvgg9" Dec 09 12:27:23 crc kubenswrapper[4970]: I1209 12:27:23.373233 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5849599a-f7e9-4ea2-982c-5388be3d7e8d-dispersionconf\") pod \"swift-ring-rebalance-kvgg9\" (UID: \"5849599a-f7e9-4ea2-982c-5388be3d7e8d\") " pod="openstack/swift-ring-rebalance-kvgg9" Dec 09 12:27:23 crc kubenswrapper[4970]: I1209 12:27:23.379523 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5849599a-f7e9-4ea2-982c-5388be3d7e8d-swiftconf\") pod \"swift-ring-rebalance-kvgg9\" (UID: \"5849599a-f7e9-4ea2-982c-5388be3d7e8d\") " pod="openstack/swift-ring-rebalance-kvgg9" Dec 09 12:27:23 crc kubenswrapper[4970]: I1209 12:27:23.384603 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcp8s\" (UniqueName: \"kubernetes.io/projected/5849599a-f7e9-4ea2-982c-5388be3d7e8d-kube-api-access-fcp8s\") pod \"swift-ring-rebalance-kvgg9\" (UID: \"5849599a-f7e9-4ea2-982c-5388be3d7e8d\") " pod="openstack/swift-ring-rebalance-kvgg9" Dec 09 12:27:23 crc kubenswrapper[4970]: I1209 12:27:23.528618 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-kvgg9" Dec 09 12:27:23 crc kubenswrapper[4970]: I1209 12:27:23.698768 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-9c61-account-create-update-q6929"] Dec 09 12:27:23 crc kubenswrapper[4970]: W1209 12:27:23.726568 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7006e6ae_7748_4192_9001_3c29b208e763.slice/crio-0c8fc91c183085392ef785330e4682f3fe23343021efdb54b5906a92c5f3598f WatchSource:0}: Error finding container 0c8fc91c183085392ef785330e4682f3fe23343021efdb54b5906a92c5f3598f: Status 404 returned error can't find the container with id 0c8fc91c183085392ef785330e4682f3fe23343021efdb54b5906a92c5f3598f Dec 09 12:27:23 crc kubenswrapper[4970]: I1209 12:27:23.982389 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-ntn5d"] Dec 09 12:27:24 crc kubenswrapper[4970]: I1209 12:27:24.131137 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-g9rp9"] Dec 09 12:27:24 crc kubenswrapper[4970]: I1209 12:27:24.199225 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-c6rs9"] Dec 09 12:27:24 crc kubenswrapper[4970]: I1209 12:27:24.213416 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 09 12:27:24 crc kubenswrapper[4970]: I1209 12:27:24.216367 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0bbf-account-create-update-n9d8q"] Dec 09 12:27:24 crc kubenswrapper[4970]: W1209 12:27:24.239410 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod816b8c67_0a22_47ae_a457_28928814a337.slice/crio-618bedabba9b6c8e20bf302848fc9573b96f20c41e5a0f81a88b8d077163e3a6 WatchSource:0}: Error finding container 618bedabba9b6c8e20bf302848fc9573b96f20c41e5a0f81a88b8d077163e3a6: Status 404 returned error can't find the container with id 618bedabba9b6c8e20bf302848fc9573b96f20c41e5a0f81a88b8d077163e3a6 Dec 09 12:27:24 crc kubenswrapper[4970]: I1209 12:27:24.382969 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-kvgg9"] Dec 09 12:27:24 crc kubenswrapper[4970]: I1209 12:27:24.636518 4970 generic.go:334] "Generic (PLEG): container finished" podID="252ca514-15c5-480d-a81e-d8230171c857" containerID="add168fb3bf2e17ae7a79cf9c1335e131e30cef7b064a62f6cd08c3762af0dc6" exitCode=0 Dec 09 12:27:24 crc kubenswrapper[4970]: I1209 12:27:24.636633 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hmlhf" event={"ID":"252ca514-15c5-480d-a81e-d8230171c857","Type":"ContainerDied","Data":"add168fb3bf2e17ae7a79cf9c1335e131e30cef7b064a62f6cd08c3762af0dc6"} Dec 09 12:27:24 crc kubenswrapper[4970]: I1209 12:27:24.640436 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cd722f79-8e7d-46eb-b8e2-6da28c0dead2","Type":"ContainerStarted","Data":"4dc4b1b74cd96862d9102c93986cb32a076b44a48446a28e4440647f7c480728"} Dec 09 12:27:24 crc kubenswrapper[4970]: I1209 12:27:24.640776 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:27:24 crc kubenswrapper[4970]: I1209 12:27:24.644487 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-0bbf-account-create-update-n9d8q" event={"ID":"816b8c67-0a22-47ae-a457-28928814a337","Type":"ContainerStarted","Data":"618bedabba9b6c8e20bf302848fc9573b96f20c41e5a0f81a88b8d077163e3a6"} Dec 09 12:27:24 crc kubenswrapper[4970]: I1209 12:27:24.649568 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-d9vtq" event={"ID":"30b05330-faf4-44e1-afee-1c750e234a37","Type":"ContainerStarted","Data":"c761d0588ae326610fbd9921ceca13f4d9731aaf5c17855af285247638d5352e"} Dec 09 12:27:24 crc kubenswrapper[4970]: I1209 12:27:24.660131 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-c6rs9" event={"ID":"8a787c3f-0ec1-404c-abc0-c57508c7e5b9","Type":"ContainerStarted","Data":"eaacc5dcb8d4fc0c689b867b14501c3f1605447462e75c57465f10001f0cb4c1"} Dec 09 12:27:24 crc kubenswrapper[4970]: I1209 12:27:24.661845 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-kvgg9" event={"ID":"5849599a-f7e9-4ea2-982c-5388be3d7e8d","Type":"ContainerStarted","Data":"85c32b009a19b6d9232d49b0e50e78aac4a2db4990b68383a33c25223d10131d"} Dec 09 12:27:24 crc kubenswrapper[4970]: I1209 12:27:24.663885 4970 generic.go:334] "Generic (PLEG): container finished" podID="617712c7-90b6-478c-a60a-0d830b8582ab" containerID="7a518d7d526ea14824f1a45242e7daf010d21cd4ac7aa7cbce17137cc19e8adf" exitCode=0 Dec 09 12:27:24 crc kubenswrapper[4970]: I1209 12:27:24.663930 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7217-account-create-update-zcxd7" event={"ID":"617712c7-90b6-478c-a60a-0d830b8582ab","Type":"ContainerDied","Data":"7a518d7d526ea14824f1a45242e7daf010d21cd4ac7aa7cbce17137cc19e8adf"} Dec 09 12:27:24 crc kubenswrapper[4970]: I1209 12:27:24.665429 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-g9rp9" event={"ID":"b37b8e53-92e0-47e4-a1c4-88e38ee775ff","Type":"ContainerStarted","Data":"469228b93b3b2921c334a5ccc80e03d92ba3131a1eeb172046d87c8850f54dd6"} Dec 09 12:27:24 crc kubenswrapper[4970]: I1209 12:27:24.667847 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-826l7" event={"ID":"582b6a7e-cfed-498e-af7e-f93ffe3ad4bd","Type":"ContainerStarted","Data":"5962830494c5644df01ddfddd5eb9914e0d6f4ed3cfc9e5b67afa4ae7a4e0026"} Dec 09 12:27:24 crc kubenswrapper[4970]: I1209 12:27:24.670465 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-ntn5d" event={"ID":"144957ba-384e-40b0-88d5-17afeaaf3795","Type":"ContainerStarted","Data":"79b1957bbff6cdd9db7085f8a847c79ad7b66987b1048f116172c8895ce907f5"} Dec 09 12:27:24 crc kubenswrapper[4970]: I1209 12:27:24.670499 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-ntn5d" event={"ID":"144957ba-384e-40b0-88d5-17afeaaf3795","Type":"ContainerStarted","Data":"805c06d27fae8c07844353b533efc6bc73dff756075d349643b8af9a4f4434c4"} Dec 09 12:27:24 crc kubenswrapper[4970]: I1209 12:27:24.672717 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"bb3e4622-df1b-4d70-8683-8672cebf6666","Type":"ContainerStarted","Data":"f331687d26cc24b9581e95676d070c048c9d634f0423061560ef32cf2f81f0b2"} Dec 09 12:27:24 crc kubenswrapper[4970]: I1209 12:27:24.674423 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"80219567-cf9b-45cf-9e69-21c871e190dc","Type":"ContainerStarted","Data":"46724580f5c56460935a155d24833ca1e89fa937a361f816facc5ee27ab4aa5a"} Dec 09 12:27:24 crc kubenswrapper[4970]: I1209 12:27:24.675309 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 09 12:27:24 crc kubenswrapper[4970]: I1209 12:27:24.676802 4970 generic.go:334] "Generic (PLEG): container finished" podID="7006e6ae-7748-4192-9001-3c29b208e763" containerID="d2df2d6f2561f63b243a7a9237eaf20c55c64327fb1f0c03131161b85cec8182" exitCode=0 Dec 09 12:27:24 crc kubenswrapper[4970]: I1209 12:27:24.676836 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-9c61-account-create-update-q6929" event={"ID":"7006e6ae-7748-4192-9001-3c29b208e763","Type":"ContainerDied","Data":"d2df2d6f2561f63b243a7a9237eaf20c55c64327fb1f0c03131161b85cec8182"} Dec 09 12:27:24 crc kubenswrapper[4970]: I1209 12:27:24.676856 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-9c61-account-create-update-q6929" event={"ID":"7006e6ae-7748-4192-9001-3c29b208e763","Type":"ContainerStarted","Data":"0c8fc91c183085392ef785330e4682f3fe23343021efdb54b5906a92c5f3598f"} Dec 09 12:27:24 crc kubenswrapper[4970]: I1209 12:27:24.679468 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=44.973575127 podStartE2EDuration="1m3.679448413s" podCreationTimestamp="2025-12-09 12:26:21 +0000 UTC" firstStartedPulling="2025-12-09 12:26:23.47493946 +0000 UTC m=+1196.035420511" lastFinishedPulling="2025-12-09 12:26:42.180812746 +0000 UTC m=+1214.741293797" observedRunningTime="2025-12-09 12:27:24.676037041 +0000 UTC m=+1257.236518092" watchObservedRunningTime="2025-12-09 12:27:24.679448413 +0000 UTC m=+1257.239929464" Dec 09 12:27:24 crc kubenswrapper[4970]: I1209 12:27:24.692541 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=17.820231970000002 podStartE2EDuration="57.692523137s" podCreationTimestamp="2025-12-09 12:26:27 +0000 UTC" firstStartedPulling="2025-12-09 12:26:43.65443832 +0000 UTC m=+1216.214919371" lastFinishedPulling="2025-12-09 12:27:23.526729487 +0000 UTC m=+1256.087210538" observedRunningTime="2025-12-09 12:27:24.691002816 +0000 UTC m=+1257.251483867" watchObservedRunningTime="2025-12-09 12:27:24.692523137 +0000 UTC m=+1257.253004188" Dec 09 12:27:24 crc kubenswrapper[4970]: I1209 12:27:24.716186 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-7d5fb4cbfb-826l7" podStartSLOduration=23.093348103 podStartE2EDuration="56.716161077s" podCreationTimestamp="2025-12-09 12:26:28 +0000 UTC" firstStartedPulling="2025-12-09 12:26:43.68730714 +0000 UTC m=+1216.247788191" lastFinishedPulling="2025-12-09 12:27:17.310120114 +0000 UTC m=+1249.870601165" observedRunningTime="2025-12-09 12:27:24.710707539 +0000 UTC m=+1257.271188590" watchObservedRunningTime="2025-12-09 12:27:24.716161077 +0000 UTC m=+1257.276642128" Dec 09 12:27:25 crc kubenswrapper[4970]: I1209 12:27:25.596586 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-vqnvh"] Dec 09 12:27:25 crc kubenswrapper[4970]: I1209 12:27:25.598395 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-vqnvh" Dec 09 12:27:25 crc kubenswrapper[4970]: I1209 12:27:25.614048 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-vqnvh"] Dec 09 12:27:25 crc kubenswrapper[4970]: I1209 12:27:25.697475 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0bbf-account-create-update-n9d8q" event={"ID":"816b8c67-0a22-47ae-a457-28928814a337","Type":"ContainerStarted","Data":"b0c3198748e2b1e64966524283c61ac6803ed4b23106b49d19cfbe04ff1fecea"} Dec 09 12:27:25 crc kubenswrapper[4970]: I1209 12:27:25.701194 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-c6rs9" event={"ID":"8a787c3f-0ec1-404c-abc0-c57508c7e5b9","Type":"ContainerStarted","Data":"0f5af1b02e6cf04b0bdf9f490b9e4c5225a87adcf9ae4139879d97f844b851a9"} Dec 09 12:27:25 crc kubenswrapper[4970]: I1209 12:27:25.704188 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7d82-account-create-update-sw6fr"] Dec 09 12:27:25 crc kubenswrapper[4970]: I1209 12:27:25.705852 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7d82-account-create-update-sw6fr" Dec 09 12:27:25 crc kubenswrapper[4970]: I1209 12:27:25.708115 4970 generic.go:334] "Generic (PLEG): container finished" podID="144957ba-384e-40b0-88d5-17afeaaf3795" containerID="79b1957bbff6cdd9db7085f8a847c79ad7b66987b1048f116172c8895ce907f5" exitCode=0 Dec 09 12:27:25 crc kubenswrapper[4970]: I1209 12:27:25.708441 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-ntn5d" event={"ID":"144957ba-384e-40b0-88d5-17afeaaf3795","Type":"ContainerDied","Data":"79b1957bbff6cdd9db7085f8a847c79ad7b66987b1048f116172c8895ce907f5"} Dec 09 12:27:25 crc kubenswrapper[4970]: I1209 12:27:25.708604 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 09 12:27:25 crc kubenswrapper[4970]: I1209 12:27:25.718754 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsf57\" (UniqueName: \"kubernetes.io/projected/c2b3fb48-ea66-4c6f-92ee-b8cd8b320296-kube-api-access-wsf57\") pod \"keystone-db-create-vqnvh\" (UID: \"c2b3fb48-ea66-4c6f-92ee-b8cd8b320296\") " pod="openstack/keystone-db-create-vqnvh" Dec 09 12:27:25 crc kubenswrapper[4970]: I1209 12:27:25.719026 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2b3fb48-ea66-4c6f-92ee-b8cd8b320296-operator-scripts\") pod \"keystone-db-create-vqnvh\" (UID: \"c2b3fb48-ea66-4c6f-92ee-b8cd8b320296\") " pod="openstack/keystone-db-create-vqnvh" Dec 09 12:27:25 crc kubenswrapper[4970]: I1209 12:27:25.748386 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7d82-account-create-update-sw6fr"] Dec 09 12:27:25 crc kubenswrapper[4970]: I1209 12:27:25.762034 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-0bbf-account-create-update-n9d8q" podStartSLOduration=4.762009609 podStartE2EDuration="4.762009609s" podCreationTimestamp="2025-12-09 12:27:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:27:25.717703659 +0000 UTC m=+1258.278184710" watchObservedRunningTime="2025-12-09 12:27:25.762009609 +0000 UTC m=+1258.322490660" 
Dec 09 12:27:25 crc kubenswrapper[4970]: I1209 12:27:25.776937 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-c6rs9" podStartSLOduration=4.776917542 podStartE2EDuration="4.776917542s" podCreationTimestamp="2025-12-09 12:27:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:27:25.732222232 +0000 UTC m=+1258.292703283" watchObservedRunningTime="2025-12-09 12:27:25.776917542 +0000 UTC m=+1258.337398593"
Dec 09 12:27:25 crc kubenswrapper[4970]: I1209 12:27:25.821394 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgtnf\" (UniqueName: \"kubernetes.io/projected/a9c45310-fd46-4964-8cfe-e6ab7a1e1971-kube-api-access-fgtnf\") pod \"keystone-7d82-account-create-update-sw6fr\" (UID: \"a9c45310-fd46-4964-8cfe-e6ab7a1e1971\") " pod="openstack/keystone-7d82-account-create-update-sw6fr"
Dec 09 12:27:25 crc kubenswrapper[4970]: I1209 12:27:25.821443 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2b3fb48-ea66-4c6f-92ee-b8cd8b320296-operator-scripts\") pod \"keystone-db-create-vqnvh\" (UID: \"c2b3fb48-ea66-4c6f-92ee-b8cd8b320296\") " pod="openstack/keystone-db-create-vqnvh"
Dec 09 12:27:25 crc kubenswrapper[4970]: I1209 12:27:25.822166 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9c45310-fd46-4964-8cfe-e6ab7a1e1971-operator-scripts\") pod \"keystone-7d82-account-create-update-sw6fr\" (UID: \"a9c45310-fd46-4964-8cfe-e6ab7a1e1971\") " pod="openstack/keystone-7d82-account-create-update-sw6fr"
Dec 09 12:27:25 crc kubenswrapper[4970]: I1209 12:27:25.822289 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2b3fb48-ea66-4c6f-92ee-b8cd8b320296-operator-scripts\") pod \"keystone-db-create-vqnvh\" (UID: \"c2b3fb48-ea66-4c6f-92ee-b8cd8b320296\") " pod="openstack/keystone-db-create-vqnvh"
Dec 09 12:27:25 crc kubenswrapper[4970]: I1209 12:27:25.822810 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsf57\" (UniqueName: \"kubernetes.io/projected/c2b3fb48-ea66-4c6f-92ee-b8cd8b320296-kube-api-access-wsf57\") pod \"keystone-db-create-vqnvh\" (UID: \"c2b3fb48-ea66-4c6f-92ee-b8cd8b320296\") " pod="openstack/keystone-db-create-vqnvh"
Dec 09 12:27:25 crc kubenswrapper[4970]: I1209 12:27:25.849012 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsf57\" (UniqueName: \"kubernetes.io/projected/c2b3fb48-ea66-4c6f-92ee-b8cd8b320296-kube-api-access-wsf57\") pod \"keystone-db-create-vqnvh\" (UID: \"c2b3fb48-ea66-4c6f-92ee-b8cd8b320296\") " pod="openstack/keystone-db-create-vqnvh"
Dec 09 12:27:25 crc kubenswrapper[4970]: I1209 12:27:25.925055 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgtnf\" (UniqueName: \"kubernetes.io/projected/a9c45310-fd46-4964-8cfe-e6ab7a1e1971-kube-api-access-fgtnf\") pod \"keystone-7d82-account-create-update-sw6fr\" (UID: \"a9c45310-fd46-4964-8cfe-e6ab7a1e1971\") " pod="openstack/keystone-7d82-account-create-update-sw6fr"
Dec 09 12:27:25 crc kubenswrapper[4970]: I1209 12:27:25.925230 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9c45310-fd46-4964-8cfe-e6ab7a1e1971-operator-scripts\") pod \"keystone-7d82-account-create-update-sw6fr\" (UID: \"a9c45310-fd46-4964-8cfe-e6ab7a1e1971\") " pod="openstack/keystone-7d82-account-create-update-sw6fr"
Dec 09 12:27:25 crc kubenswrapper[4970]: I1209 12:27:25.925293 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-vqnvh"
Dec 09 12:27:25 crc kubenswrapper[4970]: I1209 12:27:25.925916 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9c45310-fd46-4964-8cfe-e6ab7a1e1971-operator-scripts\") pod \"keystone-7d82-account-create-update-sw6fr\" (UID: \"a9c45310-fd46-4964-8cfe-e6ab7a1e1971\") " pod="openstack/keystone-7d82-account-create-update-sw6fr"
Dec 09 12:27:25 crc kubenswrapper[4970]: I1209 12:27:25.945845 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgtnf\" (UniqueName: \"kubernetes.io/projected/a9c45310-fd46-4964-8cfe-e6ab7a1e1971-kube-api-access-fgtnf\") pod \"keystone-7d82-account-create-update-sw6fr\" (UID: \"a9c45310-fd46-4964-8cfe-e6ab7a1e1971\") " pod="openstack/keystone-7d82-account-create-update-sw6fr"
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.027258 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7d82-account-create-update-sw6fr"
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.382078 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7217-account-create-update-zcxd7"
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.439862 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7dl2\" (UniqueName: \"kubernetes.io/projected/617712c7-90b6-478c-a60a-0d830b8582ab-kube-api-access-n7dl2\") pod \"617712c7-90b6-478c-a60a-0d830b8582ab\" (UID: \"617712c7-90b6-478c-a60a-0d830b8582ab\") "
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.440169 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/617712c7-90b6-478c-a60a-0d830b8582ab-operator-scripts\") pod \"617712c7-90b6-478c-a60a-0d830b8582ab\" (UID: \"617712c7-90b6-478c-a60a-0d830b8582ab\") "
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.441179 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/617712c7-90b6-478c-a60a-0d830b8582ab-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "617712c7-90b6-478c-a60a-0d830b8582ab" (UID: "617712c7-90b6-478c-a60a-0d830b8582ab"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.560805 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/617712c7-90b6-478c-a60a-0d830b8582ab-kube-api-access-n7dl2" (OuterVolumeSpecName: "kube-api-access-n7dl2") pod "617712c7-90b6-478c-a60a-0d830b8582ab" (UID: "617712c7-90b6-478c-a60a-0d830b8582ab"). InnerVolumeSpecName "kube-api-access-n7dl2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.561085 4970 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/617712c7-90b6-478c-a60a-0d830b8582ab-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.607861 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hmlhf"
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.630578 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-9c61-account-create-update-q6929"
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.631647 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-ntn5d"
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.661980 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/252ca514-15c5-480d-a81e-d8230171c857-operator-scripts\") pod \"252ca514-15c5-480d-a81e-d8230171c857\" (UID: \"252ca514-15c5-480d-a81e-d8230171c857\") "
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.662137 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftnn8\" (UniqueName: \"kubernetes.io/projected/252ca514-15c5-480d-a81e-d8230171c857-kube-api-access-ftnn8\") pod \"252ca514-15c5-480d-a81e-d8230171c857\" (UID: \"252ca514-15c5-480d-a81e-d8230171c857\") "
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.662441 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/252ca514-15c5-480d-a81e-d8230171c857-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "252ca514-15c5-480d-a81e-d8230171c857" (UID: "252ca514-15c5-480d-a81e-d8230171c857"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.663205 4970 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/252ca514-15c5-480d-a81e-d8230171c857-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.663227 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7dl2\" (UniqueName: \"kubernetes.io/projected/617712c7-90b6-478c-a60a-0d830b8582ab-kube-api-access-n7dl2\") on node \"crc\" DevicePath \"\""
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.681471 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-vqnvh"]
Dec 09 12:27:26 crc kubenswrapper[4970]: W1209 12:27:26.685758 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2b3fb48_ea66_4c6f_92ee_b8cd8b320296.slice/crio-e147737ea01ce157ad8c2504561bb45688849b3d02083f58d650fd9a4ab7764d WatchSource:0}: Error finding container e147737ea01ce157ad8c2504561bb45688849b3d02083f58d650fd9a4ab7764d: Status 404 returned error can't find the container with id e147737ea01ce157ad8c2504561bb45688849b3d02083f58d650fd9a4ab7764d
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.718399 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-d9vtq" event={"ID":"30b05330-faf4-44e1-afee-1c750e234a37","Type":"ContainerStarted","Data":"2b5698245cdb8a29531ed2ec8abf34d067d8a3b4f69ed760c76af318a16567a2"}
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.718455 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-d9vtq"
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.718468 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-d9vtq"
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.719849 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-9c61-account-create-update-q6929" event={"ID":"7006e6ae-7748-4192-9001-3c29b208e763","Type":"ContainerDied","Data":"0c8fc91c183085392ef785330e4682f3fe23343021efdb54b5906a92c5f3598f"}
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.719870 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c8fc91c183085392ef785330e4682f3fe23343021efdb54b5906a92c5f3598f"
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.719912 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-9c61-account-create-update-q6929"
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.731032 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7217-account-create-update-zcxd7" event={"ID":"617712c7-90b6-478c-a60a-0d830b8582ab","Type":"ContainerDied","Data":"ce57569ae56782267ca5d56a8c8672c23e329c51497e74d9aa4edae7ee6b6a4f"}
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.731120 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce57569ae56782267ca5d56a8c8672c23e329c51497e74d9aa4edae7ee6b6a4f"
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.731194 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7217-account-create-update-zcxd7"
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.758024 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/252ca514-15c5-480d-a81e-d8230171c857-kube-api-access-ftnn8" (OuterVolumeSpecName: "kube-api-access-ftnn8") pod "252ca514-15c5-480d-a81e-d8230171c857" (UID: "252ca514-15c5-480d-a81e-d8230171c857"). InnerVolumeSpecName "kube-api-access-ftnn8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.760815 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-hmlhf" event={"ID":"252ca514-15c5-480d-a81e-d8230171c857","Type":"ContainerDied","Data":"c990dc54b06095986f5a208ecd70a8dfecbea33a933ffe1f5c34e1719f1b80fa"}
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.760854 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c990dc54b06095986f5a208ecd70a8dfecbea33a933ffe1f5c34e1719f1b80fa"
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.760931 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-hmlhf"
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.766918 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhqsn\" (UniqueName: \"kubernetes.io/projected/144957ba-384e-40b0-88d5-17afeaaf3795-kube-api-access-vhqsn\") pod \"144957ba-384e-40b0-88d5-17afeaaf3795\" (UID: \"144957ba-384e-40b0-88d5-17afeaaf3795\") "
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.767021 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7006e6ae-7748-4192-9001-3c29b208e763-operator-scripts\") pod \"7006e6ae-7748-4192-9001-3c29b208e763\" (UID: \"7006e6ae-7748-4192-9001-3c29b208e763\") "
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.767584 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sm5n\" (UniqueName: \"kubernetes.io/projected/7006e6ae-7748-4192-9001-3c29b208e763-kube-api-access-2sm5n\") pod \"7006e6ae-7748-4192-9001-3c29b208e763\" (UID: \"7006e6ae-7748-4192-9001-3c29b208e763\") "
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.767701 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/144957ba-384e-40b0-88d5-17afeaaf3795-operator-scripts\") pod \"144957ba-384e-40b0-88d5-17afeaaf3795\" (UID: \"144957ba-384e-40b0-88d5-17afeaaf3795\") "
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.769664 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7006e6ae-7748-4192-9001-3c29b208e763-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7006e6ae-7748-4192-9001-3c29b208e763" (UID: "7006e6ae-7748-4192-9001-3c29b208e763"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.770060 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/144957ba-384e-40b0-88d5-17afeaaf3795-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "144957ba-384e-40b0-88d5-17afeaaf3795" (UID: "144957ba-384e-40b0-88d5-17afeaaf3795"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.771528 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7006e6ae-7748-4192-9001-3c29b208e763-kube-api-access-2sm5n" (OuterVolumeSpecName: "kube-api-access-2sm5n") pod "7006e6ae-7748-4192-9001-3c29b208e763" (UID: "7006e6ae-7748-4192-9001-3c29b208e763"). InnerVolumeSpecName "kube-api-access-2sm5n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.774016 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/144957ba-384e-40b0-88d5-17afeaaf3795-kube-api-access-vhqsn" (OuterVolumeSpecName: "kube-api-access-vhqsn") pod "144957ba-384e-40b0-88d5-17afeaaf3795" (UID: "144957ba-384e-40b0-88d5-17afeaaf3795"). InnerVolumeSpecName "kube-api-access-vhqsn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.776366 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhqsn\" (UniqueName: \"kubernetes.io/projected/144957ba-384e-40b0-88d5-17afeaaf3795-kube-api-access-vhqsn\") on node \"crc\" DevicePath \"\""
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.776390 4970 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7006e6ae-7748-4192-9001-3c29b208e763-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.776496 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2sm5n\" (UniqueName: \"kubernetes.io/projected/7006e6ae-7748-4192-9001-3c29b208e763-kube-api-access-2sm5n\") on node \"crc\" DevicePath \"\""
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.776509 4970 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/144957ba-384e-40b0-88d5-17afeaaf3795-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.776524 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftnn8\" (UniqueName: \"kubernetes.io/projected/252ca514-15c5-480d-a81e-d8230171c857-kube-api-access-ftnn8\") on node \"crc\" DevicePath \"\""
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.777534 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-d9vtq" podStartSLOduration=22.418640444 podStartE2EDuration="54.777520741s" podCreationTimestamp="2025-12-09 12:26:32 +0000 UTC" firstStartedPulling="2025-12-09 12:26:44.876584556 +0000 UTC m=+1217.437065607" lastFinishedPulling="2025-12-09 12:27:17.235464843 +0000 UTC m=+1249.795945904" observedRunningTime="2025-12-09 12:27:26.750715585 +0000 UTC m=+1259.311196626" watchObservedRunningTime="2025-12-09 12:27:26.777520741 +0000 UTC m=+1259.338001792"
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.778544 4970 generic.go:334] "Generic (PLEG): container finished" podID="b37b8e53-92e0-47e4-a1c4-88e38ee775ff" containerID="b5ca57ac5adf3e9a0ce62ad2145c7bc788728874bbb8dcab5322caab6839c85c" exitCode=0
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.778639 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-g9rp9" event={"ID":"b37b8e53-92e0-47e4-a1c4-88e38ee775ff","Type":"ContainerDied","Data":"b5ca57ac5adf3e9a0ce62ad2145c7bc788728874bbb8dcab5322caab6839c85c"}
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.783589 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-ntn5d" event={"ID":"144957ba-384e-40b0-88d5-17afeaaf3795","Type":"ContainerDied","Data":"805c06d27fae8c07844353b533efc6bc73dff756075d349643b8af9a4f4434c4"}
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.783608 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="805c06d27fae8c07844353b533efc6bc73dff756075d349643b8af9a4f4434c4"
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.783669 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-ntn5d"
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.789823 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-vqnvh" event={"ID":"c2b3fb48-ea66-4c6f-92ee-b8cd8b320296","Type":"ContainerStarted","Data":"e147737ea01ce157ad8c2504561bb45688849b3d02083f58d650fd9a4ab7764d"}
Dec 09 12:27:26 crc kubenswrapper[4970]: E1209 12:27:26.843766 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod144957ba_384e_40b0_88d5_17afeaaf3795.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod252ca514_15c5_480d_a81e_d8230171c857.slice/crio-c990dc54b06095986f5a208ecd70a8dfecbea33a933ffe1f5c34e1719f1b80fa\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod252ca514_15c5_480d_a81e_d8230171c857.slice\": RecentStats: unable to find data in memory cache]"
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.862617 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7d82-account-create-update-sw6fr"]
Dec 09 12:27:26 crc kubenswrapper[4970]: I1209 12:27:26.982749 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bee29a58-7867-4543-bb4e-c19528625b1a-etc-swift\") pod \"swift-storage-0\" (UID: \"bee29a58-7867-4543-bb4e-c19528625b1a\") " pod="openstack/swift-storage-0"
Dec 09 12:27:26 crc kubenswrapper[4970]: E1209 12:27:26.983569 4970 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Dec 09 12:27:26 crc kubenswrapper[4970]: E1209 12:27:26.983674 4970 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Dec 09 12:27:26 crc kubenswrapper[4970]: E1209 12:27:26.983997 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bee29a58-7867-4543-bb4e-c19528625b1a-etc-swift podName:bee29a58-7867-4543-bb4e-c19528625b1a nodeName:}" failed. No retries permitted until 2025-12-09 12:27:34.983833356 +0000 UTC m=+1267.544314407 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/bee29a58-7867-4543-bb4e-c19528625b1a-etc-swift") pod "swift-storage-0" (UID: "bee29a58-7867-4543-bb4e-c19528625b1a") : configmap "swift-ring-files" not found
Dec 09 12:27:27 crc kubenswrapper[4970]: I1209 12:27:27.803751 4970 generic.go:334] "Generic (PLEG): container finished" podID="816b8c67-0a22-47ae-a457-28928814a337" containerID="b0c3198748e2b1e64966524283c61ac6803ed4b23106b49d19cfbe04ff1fecea" exitCode=0
Dec 09 12:27:27 crc kubenswrapper[4970]: I1209 12:27:27.804149 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0bbf-account-create-update-n9d8q" event={"ID":"816b8c67-0a22-47ae-a457-28928814a337","Type":"ContainerDied","Data":"b0c3198748e2b1e64966524283c61ac6803ed4b23106b49d19cfbe04ff1fecea"}
Dec 09 12:27:27 crc kubenswrapper[4970]: I1209 12:27:27.809874 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"bb3e4622-df1b-4d70-8683-8672cebf6666","Type":"ContainerStarted","Data":"e2a51fc274763607afb4ba3d16bb14d65df8303b050c96782e3f97b93d517443"}
Dec 09 12:27:27 crc kubenswrapper[4970]: I1209 12:27:27.818225 4970 generic.go:334] "Generic (PLEG): container finished" podID="8a787c3f-0ec1-404c-abc0-c57508c7e5b9" containerID="0f5af1b02e6cf04b0bdf9f490b9e4c5225a87adcf9ae4139879d97f844b851a9" exitCode=0
Dec 09 12:27:27 crc kubenswrapper[4970]: I1209 12:27:27.836192 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-c6rs9" event={"ID":"8a787c3f-0ec1-404c-abc0-c57508c7e5b9","Type":"ContainerDied","Data":"0f5af1b02e6cf04b0bdf9f490b9e4c5225a87adcf9ae4139879d97f844b851a9"}
Dec 09 12:27:28 crc kubenswrapper[4970]: I1209 12:27:28.454454 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-fd9db"]
Dec 09 12:27:28 crc kubenswrapper[4970]: E1209 12:27:28.455210 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7006e6ae-7748-4192-9001-3c29b208e763" containerName="mariadb-account-create-update"
Dec 09 12:27:28 crc kubenswrapper[4970]: I1209 12:27:28.455235 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="7006e6ae-7748-4192-9001-3c29b208e763" containerName="mariadb-account-create-update"
Dec 09 12:27:28 crc kubenswrapper[4970]: E1209 12:27:28.455283 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="252ca514-15c5-480d-a81e-d8230171c857" containerName="mariadb-database-create"
Dec 09 12:27:28 crc kubenswrapper[4970]: I1209 12:27:28.455290 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="252ca514-15c5-480d-a81e-d8230171c857" containerName="mariadb-database-create"
Dec 09 12:27:28 crc kubenswrapper[4970]: E1209 12:27:28.455301 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="144957ba-384e-40b0-88d5-17afeaaf3795" containerName="mariadb-database-create"
Dec 09 12:27:28 crc kubenswrapper[4970]: I1209 12:27:28.455307 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="144957ba-384e-40b0-88d5-17afeaaf3795" containerName="mariadb-database-create"
Dec 09 12:27:28 crc kubenswrapper[4970]: E1209 12:27:28.455322 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="617712c7-90b6-478c-a60a-0d830b8582ab" containerName="mariadb-account-create-update"
Dec 09 12:27:28 crc kubenswrapper[4970]: I1209 12:27:28.455328 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="617712c7-90b6-478c-a60a-0d830b8582ab" containerName="mariadb-account-create-update"
Dec 09 12:27:28 crc kubenswrapper[4970]: I1209 12:27:28.455562 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="7006e6ae-7748-4192-9001-3c29b208e763" containerName="mariadb-account-create-update"
Dec 09 12:27:28 crc kubenswrapper[4970]: I1209 12:27:28.455581 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="617712c7-90b6-478c-a60a-0d830b8582ab" containerName="mariadb-account-create-update"
Dec 09 12:27:28 crc kubenswrapper[4970]: I1209 12:27:28.455604 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="144957ba-384e-40b0-88d5-17afeaaf3795" containerName="mariadb-database-create"
Dec 09 12:27:28 crc kubenswrapper[4970]: I1209 12:27:28.455616 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="252ca514-15c5-480d-a81e-d8230171c857" containerName="mariadb-database-create"
Dec 09 12:27:28 crc kubenswrapper[4970]: I1209 12:27:28.456339 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fd9db"
Dec 09 12:27:28 crc kubenswrapper[4970]: I1209 12:27:28.468480 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-fd9db"]
Dec 09 12:27:28 crc kubenswrapper[4970]: I1209 12:27:28.510337 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9350a1c-3918-4895-8383-f7c306cb6063-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-fd9db\" (UID: \"c9350a1c-3918-4895-8383-f7c306cb6063\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-fd9db"
Dec 09 12:27:28 crc kubenswrapper[4970]: I1209 12:27:28.510487 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgfj9\" (UniqueName: \"kubernetes.io/projected/c9350a1c-3918-4895-8383-f7c306cb6063-kube-api-access-tgfj9\") pod \"mysqld-exporter-openstack-cell1-db-create-fd9db\" (UID: \"c9350a1c-3918-4895-8383-f7c306cb6063\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-fd9db"
Dec 09 12:27:28 crc kubenswrapper[4970]: I1209 12:27:28.612525 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9350a1c-3918-4895-8383-f7c306cb6063-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-fd9db\" (UID: \"c9350a1c-3918-4895-8383-f7c306cb6063\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-fd9db"
Dec 09 12:27:28 crc kubenswrapper[4970]: I1209 12:27:28.612635 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgfj9\" (UniqueName: \"kubernetes.io/projected/c9350a1c-3918-4895-8383-f7c306cb6063-kube-api-access-tgfj9\") pod \"mysqld-exporter-openstack-cell1-db-create-fd9db\" (UID: \"c9350a1c-3918-4895-8383-f7c306cb6063\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-fd9db"
Dec 09 12:27:28 crc kubenswrapper[4970]: I1209 12:27:28.613904 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9350a1c-3918-4895-8383-f7c306cb6063-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-fd9db\" (UID: \"c9350a1c-3918-4895-8383-f7c306cb6063\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-fd9db"
Dec 09 12:27:28 crc kubenswrapper[4970]: I1209 12:27:28.641192 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgfj9\" (UniqueName: \"kubernetes.io/projected/c9350a1c-3918-4895-8383-f7c306cb6063-kube-api-access-tgfj9\") pod \"mysqld-exporter-openstack-cell1-db-create-fd9db\" (UID: \"c9350a1c-3918-4895-8383-f7c306cb6063\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-fd9db"
Dec 09 12:27:28 crc kubenswrapper[4970]: I1209 12:27:28.680723 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-375f-account-create-update-pbtc9"]
Dec 09 12:27:28 crc kubenswrapper[4970]: I1209 12:27:28.682511 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-375f-account-create-update-pbtc9"
Dec 09 12:27:28 crc kubenswrapper[4970]: I1209 12:27:28.685078 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret"
Dec 09 12:27:28 crc kubenswrapper[4970]: I1209 12:27:28.691565 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-375f-account-create-update-pbtc9"]
Dec 09 12:27:28 crc kubenswrapper[4970]: I1209 12:27:28.714536 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftncg\" (UniqueName: \"kubernetes.io/projected/7dfafdd9-2928-4ca6-b298-fcb9eb37b9ab-kube-api-access-ftncg\") pod \"mysqld-exporter-375f-account-create-update-pbtc9\" (UID: \"7dfafdd9-2928-4ca6-b298-fcb9eb37b9ab\") " pod="openstack/mysqld-exporter-375f-account-create-update-pbtc9"
Dec 09 12:27:28 crc kubenswrapper[4970]: I1209 12:27:28.714595 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dfafdd9-2928-4ca6-b298-fcb9eb37b9ab-operator-scripts\") pod \"mysqld-exporter-375f-account-create-update-pbtc9\" (UID: \"7dfafdd9-2928-4ca6-b298-fcb9eb37b9ab\") " pod="openstack/mysqld-exporter-375f-account-create-update-pbtc9"
Dec 09 12:27:28 crc kubenswrapper[4970]: I1209 12:27:28.778415 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fd9db"
Dec 09 12:27:28 crc kubenswrapper[4970]: I1209 12:27:28.817216 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dfafdd9-2928-4ca6-b298-fcb9eb37b9ab-operator-scripts\") pod \"mysqld-exporter-375f-account-create-update-pbtc9\" (UID: \"7dfafdd9-2928-4ca6-b298-fcb9eb37b9ab\") " pod="openstack/mysqld-exporter-375f-account-create-update-pbtc9"
Dec 09 12:27:28 crc kubenswrapper[4970]: I1209 12:27:28.817274 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftncg\" (UniqueName: \"kubernetes.io/projected/7dfafdd9-2928-4ca6-b298-fcb9eb37b9ab-kube-api-access-ftncg\") pod \"mysqld-exporter-375f-account-create-update-pbtc9\" (UID: \"7dfafdd9-2928-4ca6-b298-fcb9eb37b9ab\") " pod="openstack/mysqld-exporter-375f-account-create-update-pbtc9"
Dec 09 12:27:28 crc kubenswrapper[4970]: I1209 12:27:28.818173 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dfafdd9-2928-4ca6-b298-fcb9eb37b9ab-operator-scripts\") pod \"mysqld-exporter-375f-account-create-update-pbtc9\" (UID: \"7dfafdd9-2928-4ca6-b298-fcb9eb37b9ab\") " pod="openstack/mysqld-exporter-375f-account-create-update-pbtc9"
Dec 09 12:27:28 crc kubenswrapper[4970]: I1209 12:27:28.841716 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftncg\" (UniqueName: \"kubernetes.io/projected/7dfafdd9-2928-4ca6-b298-fcb9eb37b9ab-kube-api-access-ftncg\") pod \"mysqld-exporter-375f-account-create-update-pbtc9\" (UID: \"7dfafdd9-2928-4ca6-b298-fcb9eb37b9ab\") " pod="openstack/mysqld-exporter-375f-account-create-update-pbtc9"
Dec 09 12:27:29 crc kubenswrapper[4970]: I1209 12:27:29.021782 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-375f-account-create-update-pbtc9"
Dec 09 12:27:29 crc kubenswrapper[4970]: W1209 12:27:29.326638 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9c45310_fd46_4964_8cfe_e6ab7a1e1971.slice/crio-4b01956575d238701ecb9a9038e6498c0153b369ed7922e5d7cfd0d5035a371c WatchSource:0}: Error finding container 4b01956575d238701ecb9a9038e6498c0153b369ed7922e5d7cfd0d5035a371c: Status 404 returned error can't find the container with id 4b01956575d238701ecb9a9038e6498c0153b369ed7922e5d7cfd0d5035a371c
Dec 09 12:27:29 crc kubenswrapper[4970]: I1209 12:27:29.378796 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Dec 09 12:27:29 crc kubenswrapper[4970]: I1209 12:27:29.508981 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0bbf-account-create-update-n9d8q"
Dec 09 12:27:29 crc kubenswrapper[4970]: I1209 12:27:29.532279 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f65g2\" (UniqueName: \"kubernetes.io/projected/816b8c67-0a22-47ae-a457-28928814a337-kube-api-access-f65g2\") pod \"816b8c67-0a22-47ae-a457-28928814a337\" (UID: \"816b8c67-0a22-47ae-a457-28928814a337\") "
Dec 09 12:27:29 crc kubenswrapper[4970]: I1209 12:27:29.532573 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/816b8c67-0a22-47ae-a457-28928814a337-operator-scripts\") pod \"816b8c67-0a22-47ae-a457-28928814a337\" (UID: \"816b8c67-0a22-47ae-a457-28928814a337\") "
Dec 09 12:27:29 crc kubenswrapper[4970]: I1209 12:27:29.542878 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/816b8c67-0a22-47ae-a457-28928814a337-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "816b8c67-0a22-47ae-a457-28928814a337" (UID: "816b8c67-0a22-47ae-a457-28928814a337"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:27:29 crc kubenswrapper[4970]: I1209 12:27:29.543101 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/816b8c67-0a22-47ae-a457-28928814a337-kube-api-access-f65g2" (OuterVolumeSpecName: "kube-api-access-f65g2") pod "816b8c67-0a22-47ae-a457-28928814a337" (UID: "816b8c67-0a22-47ae-a457-28928814a337"). InnerVolumeSpecName "kube-api-access-f65g2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:27:29 crc kubenswrapper[4970]: I1209 12:27:29.545825 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-c6rs9"
Dec 09 12:27:29 crc kubenswrapper[4970]: I1209 12:27:29.635287 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a787c3f-0ec1-404c-abc0-c57508c7e5b9-operator-scripts\") pod \"8a787c3f-0ec1-404c-abc0-c57508c7e5b9\" (UID: \"8a787c3f-0ec1-404c-abc0-c57508c7e5b9\") "
Dec 09 12:27:29 crc kubenswrapper[4970]: I1209 12:27:29.635441 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49bvz\" (UniqueName: \"kubernetes.io/projected/8a787c3f-0ec1-404c-abc0-c57508c7e5b9-kube-api-access-49bvz\") pod \"8a787c3f-0ec1-404c-abc0-c57508c7e5b9\" (UID: \"8a787c3f-0ec1-404c-abc0-c57508c7e5b9\") "
Dec 09 12:27:29 crc kubenswrapper[4970]: I1209 12:27:29.635879 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f65g2\" (UniqueName: \"kubernetes.io/projected/816b8c67-0a22-47ae-a457-28928814a337-kube-api-access-f65g2\") on node \"crc\" DevicePath \"\""
Dec 09 12:27:29 crc kubenswrapper[4970]: I1209 12:27:29.635894 4970 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/816b8c67-0a22-47ae-a457-28928814a337-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 09 12:27:29 crc kubenswrapper[4970]: I1209 12:27:29.636138 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a787c3f-0ec1-404c-abc0-c57508c7e5b9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8a787c3f-0ec1-404c-abc0-c57508c7e5b9" (UID: "8a787c3f-0ec1-404c-abc0-c57508c7e5b9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:27:29 crc kubenswrapper[4970]: I1209 12:27:29.648218 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a787c3f-0ec1-404c-abc0-c57508c7e5b9-kube-api-access-49bvz" (OuterVolumeSpecName: "kube-api-access-49bvz") pod "8a787c3f-0ec1-404c-abc0-c57508c7e5b9" (UID: "8a787c3f-0ec1-404c-abc0-c57508c7e5b9"). InnerVolumeSpecName "kube-api-access-49bvz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:27:29 crc kubenswrapper[4970]: I1209 12:27:29.737372 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49bvz\" (UniqueName: \"kubernetes.io/projected/8a787c3f-0ec1-404c-abc0-c57508c7e5b9-kube-api-access-49bvz\") on node \"crc\" DevicePath \"\""
Dec 09 12:27:29 crc kubenswrapper[4970]: I1209 12:27:29.737719 4970 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8a787c3f-0ec1-404c-abc0-c57508c7e5b9-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 09 12:27:29 crc kubenswrapper[4970]: I1209 12:27:29.860788 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-vqnvh" event={"ID":"c2b3fb48-ea66-4c6f-92ee-b8cd8b320296","Type":"ContainerStarted","Data":"a34fdffe67a6a0105ac052990234e37e7de8c08c0cbf6d3fedbc8560ec8fbb4f"}
Dec 09 12:27:29 crc kubenswrapper[4970]: I1209 12:27:29.870177 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-c6rs9" event={"ID":"8a787c3f-0ec1-404c-abc0-c57508c7e5b9","Type":"ContainerDied","Data":"eaacc5dcb8d4fc0c689b867b14501c3f1605447462e75c57465f10001f0cb4c1"}
Dec 09 12:27:29 crc kubenswrapper[4970]: I1209 12:27:29.870335 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eaacc5dcb8d4fc0c689b867b14501c3f1605447462e75c57465f10001f0cb4c1"
Dec 09 12:27:29 crc kubenswrapper[4970]: I1209 12:27:29.870398 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-c6rs9"
Dec 09 12:27:29 crc kubenswrapper[4970]: I1209 12:27:29.879401 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-kvgg9" event={"ID":"5849599a-f7e9-4ea2-982c-5388be3d7e8d","Type":"ContainerStarted","Data":"4f33adc77c749231fe4ec984d3b6ec4e2697bf51c43d140d2f8744fc3441a853"}
Dec 09 12:27:29 crc kubenswrapper[4970]: I1209 12:27:29.885451 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-g9rp9" event={"ID":"b37b8e53-92e0-47e4-a1c4-88e38ee775ff","Type":"ContainerStarted","Data":"237bed5f066c3969e56e804cbd43a5f00e6246c7dd941447688791485c5a2af8"}
Dec 09 12:27:29 crc kubenswrapper[4970]: I1209 12:27:29.886961 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-g9rp9"
Dec 09 12:27:29 crc kubenswrapper[4970]: I1209 12:27:29.889591 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0bbf-account-create-update-n9d8q" event={"ID":"816b8c67-0a22-47ae-a457-28928814a337","Type":"ContainerDied","Data":"618bedabba9b6c8e20bf302848fc9573b96f20c41e5a0f81a88b8d077163e3a6"}
Dec 09 12:27:29 crc kubenswrapper[4970]: I1209 12:27:29.889633 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="618bedabba9b6c8e20bf302848fc9573b96f20c41e5a0f81a88b8d077163e3a6"
Dec 09 12:27:29 crc kubenswrapper[4970]: I1209 12:27:29.889727 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0bbf-account-create-update-n9d8q"
Dec 09 12:27:29 crc kubenswrapper[4970]: I1209 12:27:29.897365 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7d82-account-create-update-sw6fr" event={"ID":"a9c45310-fd46-4964-8cfe-e6ab7a1e1971","Type":"ContainerStarted","Data":"f401f182f065eb7abd925f590524d6803d5a4476a4bc1ed0664592e555037836"}
Dec 09 12:27:29 crc kubenswrapper[4970]: I1209 12:27:29.897419 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7d82-account-create-update-sw6fr" event={"ID":"a9c45310-fd46-4964-8cfe-e6ab7a1e1971","Type":"ContainerStarted","Data":"4b01956575d238701ecb9a9038e6498c0153b369ed7922e5d7cfd0d5035a371c"}
Dec 09 12:27:29 crc kubenswrapper[4970]: I1209 12:27:29.919069 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-vqnvh" podStartSLOduration=4.919043266 podStartE2EDuration="4.919043266s" podCreationTimestamp="2025-12-09 12:27:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:27:29.881917381 +0000 UTC m=+1262.442398662" watchObservedRunningTime="2025-12-09 12:27:29.919043266 +0000 UTC m=+1262.479524317"
Dec 09 12:27:29 crc kubenswrapper[4970]: I1209 12:27:29.940513 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-kvgg9" podStartSLOduration=1.9064374979999998 podStartE2EDuration="6.940488667s" podCreationTimestamp="2025-12-09 12:27:23 +0000 UTC" firstStartedPulling="2025-12-09 12:27:24.398821996 +0000 UTC m=+1256.959303047" lastFinishedPulling="2025-12-09 12:27:29.432873165 +0000 UTC m=+1261.993354216" observedRunningTime="2025-12-09 12:27:29.906686702 +0000 UTC m=+1262.467167763" watchObservedRunningTime="2025-12-09 12:27:29.940488667 +0000 UTC m=+1262.500969718"
Dec 09 12:27:29 crc kubenswrapper[4970]: I1209 12:27:29.977897 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-fd9db"]
Dec 09 12:27:29 crc kubenswrapper[4970]: I1209 12:27:29.984978 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-g9rp9" podStartSLOduration=11.984957271 podStartE2EDuration="11.984957271s" podCreationTimestamp="2025-12-09 12:27:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:27:29.947827406 +0000 UTC m=+1262.508308457" watchObservedRunningTime="2025-12-09 12:27:29.984957271 +0000 UTC m=+1262.545438322"
Dec 09 12:27:29 crc kubenswrapper[4970]: I1209 12:27:29.998071 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7d82-account-create-update-sw6fr" podStartSLOduration=4.9980561550000004 podStartE2EDuration="4.998056155s" podCreationTimestamp="2025-12-09 12:27:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:27:29.977020206 +0000 UTC m=+1262.537501257" watchObservedRunningTime="2025-12-09 12:27:29.998056155 +0000 UTC m=+1262.558537206"
Dec 09 12:27:30 crc kubenswrapper[4970]: I1209 12:27:30.128612 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-375f-account-create-update-pbtc9"]
Dec 09 12:27:30 crc kubenswrapper[4970]: W1209 12:27:30.160161 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7dfafdd9_2928_4ca6_b298_fcb9eb37b9ab.slice/crio-37962a5c6b524f99eb8151cf107569eb89b435ff6e605c3a778a5795145ddd6f WatchSource:0}: Error finding container 37962a5c6b524f99eb8151cf107569eb89b435ff6e605c3a778a5795145ddd6f: Status 404 returned error can't find the container with id 37962a5c6b524f99eb8151cf107569eb89b435ff6e605c3a778a5795145ddd6f
Dec 09 12:27:30 crc kubenswrapper[4970]: I1209 12:27:30.908344 4970 generic.go:334] "Generic (PLEG): container finished" podID="a9c45310-fd46-4964-8cfe-e6ab7a1e1971" containerID="f401f182f065eb7abd925f590524d6803d5a4476a4bc1ed0664592e555037836" exitCode=0
Dec 09 12:27:30 crc kubenswrapper[4970]: I1209 12:27:30.908645 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7d82-account-create-update-sw6fr" event={"ID":"a9c45310-fd46-4964-8cfe-e6ab7a1e1971","Type":"ContainerDied","Data":"f401f182f065eb7abd925f590524d6803d5a4476a4bc1ed0664592e555037836"}
Dec 09 12:27:30 crc kubenswrapper[4970]: I1209 12:27:30.910071 4970 generic.go:334] "Generic (PLEG): container finished" podID="c2b3fb48-ea66-4c6f-92ee-b8cd8b320296" containerID="a34fdffe67a6a0105ac052990234e37e7de8c08c0cbf6d3fedbc8560ec8fbb4f" exitCode=0
Dec 09 12:27:30 crc kubenswrapper[4970]: I1209 12:27:30.910120 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-vqnvh" event={"ID":"c2b3fb48-ea66-4c6f-92ee-b8cd8b320296","Type":"ContainerDied","Data":"a34fdffe67a6a0105ac052990234e37e7de8c08c0cbf6d3fedbc8560ec8fbb4f"}
Dec 09 12:27:30 crc kubenswrapper[4970]: I1209 12:27:30.911635 4970 generic.go:334] "Generic (PLEG): container finished" podID="c9350a1c-3918-4895-8383-f7c306cb6063" containerID="56eb3c45b89f81276c0e8d18da3cb7f5b0d3227b017713fbb3916be03cd8c361" exitCode=0
Dec 09 12:27:30 crc kubenswrapper[4970]: I1209 12:27:30.911692 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fd9db" event={"ID":"c9350a1c-3918-4895-8383-f7c306cb6063","Type":"ContainerDied","Data":"56eb3c45b89f81276c0e8d18da3cb7f5b0d3227b017713fbb3916be03cd8c361"}
Dec 09 12:27:30 crc kubenswrapper[4970]: I1209 12:27:30.911713 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fd9db" event={"ID":"c9350a1c-3918-4895-8383-f7c306cb6063","Type":"ContainerStarted","Data":"3c38dda870dee3ed586007af34bfcb0004defbb170102fd3db8d4c539dbd55b3"}
Dec 09 12:27:30 crc kubenswrapper[4970]: I1209 12:27:30.912889 4970 generic.go:334] "Generic (PLEG): container finished" podID="7dfafdd9-2928-4ca6-b298-fcb9eb37b9ab" containerID="e0b315191296c8a259cf12f4879f7c8c9d3e7fdcadc5dc0c0e1e39984f065736" exitCode=0
Dec 09 12:27:30 crc kubenswrapper[4970]: I1209 12:27:30.914023 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-375f-account-create-update-pbtc9" event={"ID":"7dfafdd9-2928-4ca6-b298-fcb9eb37b9ab","Type":"ContainerDied","Data":"e0b315191296c8a259cf12f4879f7c8c9d3e7fdcadc5dc0c0e1e39984f065736"}
Dec 09 12:27:30 crc kubenswrapper[4970]: I1209 12:27:30.914090 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-375f-account-create-update-pbtc9" event={"ID":"7dfafdd9-2928-4ca6-b298-fcb9eb37b9ab","Type":"ContainerStarted","Data":"37962a5c6b524f99eb8151cf107569eb89b435ff6e605c3a778a5795145ddd6f"}
Dec 09 12:27:31 crc kubenswrapper[4970]: I1209 12:27:31.549612 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-qvmmx"]
Dec 09 12:27:31 crc kubenswrapper[4970]: E1209 12:27:31.551218 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a787c3f-0ec1-404c-abc0-c57508c7e5b9" containerName="mariadb-database-create"
Dec 09 12:27:31 crc kubenswrapper[4970]: I1209 12:27:31.551266 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a787c3f-0ec1-404c-abc0-c57508c7e5b9" containerName="mariadb-database-create"
Dec 09 12:27:31 crc kubenswrapper[4970]: E1209 12:27:31.551323 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="816b8c67-0a22-47ae-a457-28928814a337" containerName="mariadb-account-create-update"
Dec 09 12:27:31 crc kubenswrapper[4970]: I1209 12:27:31.551335 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="816b8c67-0a22-47ae-a457-28928814a337" containerName="mariadb-account-create-update"
Dec 09 12:27:31 crc kubenswrapper[4970]: I1209 12:27:31.551572 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a787c3f-0ec1-404c-abc0-c57508c7e5b9" containerName="mariadb-database-create"
Dec 09 12:27:31 crc kubenswrapper[4970]: I1209 12:27:31.551602 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="816b8c67-0a22-47ae-a457-28928814a337" containerName="mariadb-account-create-update"
Dec 09 12:27:31 crc kubenswrapper[4970]: I1209 12:27:31.552825 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-qvmmx"
Dec 09 12:27:31 crc kubenswrapper[4970]: I1209 12:27:31.555288 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Dec 09 12:27:31 crc kubenswrapper[4970]: I1209 12:27:31.555361 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-22qbn"
Dec 09 12:27:31 crc kubenswrapper[4970]: I1209 12:27:31.575789 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-qvmmx"]
Dec 09 12:27:31 crc kubenswrapper[4970]: I1209 12:27:31.589103 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/de81c9d6-10bd-46d2-ab77-74463359dc5a-db-sync-config-data\") pod \"glance-db-sync-qvmmx\" (UID: \"de81c9d6-10bd-46d2-ab77-74463359dc5a\") " pod="openstack/glance-db-sync-qvmmx"
Dec 09 12:27:31 crc kubenswrapper[4970]: I1209 12:27:31.589496 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de81c9d6-10bd-46d2-ab77-74463359dc5a-config-data\") pod \"glance-db-sync-qvmmx\" (UID: \"de81c9d6-10bd-46d2-ab77-74463359dc5a\") " pod="openstack/glance-db-sync-qvmmx"
Dec 09 12:27:31 crc kubenswrapper[4970]: I1209 12:27:31.589975 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zwkw\" (UniqueName: \"kubernetes.io/projected/de81c9d6-10bd-46d2-ab77-74463359dc5a-kube-api-access-6zwkw\") pod \"glance-db-sync-qvmmx\" (UID: \"de81c9d6-10bd-46d2-ab77-74463359dc5a\") " pod="openstack/glance-db-sync-qvmmx"
Dec 09 12:27:31 crc kubenswrapper[4970]: I1209 12:27:31.590032 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de81c9d6-10bd-46d2-ab77-74463359dc5a-combined-ca-bundle\") pod \"glance-db-sync-qvmmx\" (UID: \"de81c9d6-10bd-46d2-ab77-74463359dc5a\") " pod="openstack/glance-db-sync-qvmmx"
Dec 09 12:27:31 crc kubenswrapper[4970]: I1209 12:27:31.691434 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zwkw\" (UniqueName: \"kubernetes.io/projected/de81c9d6-10bd-46d2-ab77-74463359dc5a-kube-api-access-6zwkw\") pod \"glance-db-sync-qvmmx\" (UID: \"de81c9d6-10bd-46d2-ab77-74463359dc5a\") " pod="openstack/glance-db-sync-qvmmx"
Dec 09 12:27:31 crc kubenswrapper[4970]: I1209 12:27:31.691657 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de81c9d6-10bd-46d2-ab77-74463359dc5a-combined-ca-bundle\") pod \"glance-db-sync-qvmmx\" (UID: \"de81c9d6-10bd-46d2-ab77-74463359dc5a\") " pod="openstack/glance-db-sync-qvmmx"
Dec 09 12:27:31 crc kubenswrapper[4970]: I1209 12:27:31.691801 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/de81c9d6-10bd-46d2-ab77-74463359dc5a-db-sync-config-data\") pod \"glance-db-sync-qvmmx\" (UID: \"de81c9d6-10bd-46d2-ab77-74463359dc5a\") " pod="openstack/glance-db-sync-qvmmx"
Dec 09 12:27:31 crc kubenswrapper[4970]: I1209 12:27:31.691879 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de81c9d6-10bd-46d2-ab77-74463359dc5a-config-data\") pod \"glance-db-sync-qvmmx\" (UID: \"de81c9d6-10bd-46d2-ab77-74463359dc5a\") " pod="openstack/glance-db-sync-qvmmx"
Dec 09 12:27:31 crc kubenswrapper[4970]: I1209 12:27:31.698437 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/de81c9d6-10bd-46d2-ab77-74463359dc5a-db-sync-config-data\") pod \"glance-db-sync-qvmmx\" (UID: \"de81c9d6-10bd-46d2-ab77-74463359dc5a\") " pod="openstack/glance-db-sync-qvmmx"
Dec 09 12:27:31 crc kubenswrapper[4970]: I1209 12:27:31.698860 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de81c9d6-10bd-46d2-ab77-74463359dc5a-combined-ca-bundle\") pod \"glance-db-sync-qvmmx\" (UID: \"de81c9d6-10bd-46d2-ab77-74463359dc5a\") " pod="openstack/glance-db-sync-qvmmx"
Dec 09 12:27:31 crc kubenswrapper[4970]: I1209 12:27:31.708637 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de81c9d6-10bd-46d2-ab77-74463359dc5a-config-data\") pod \"glance-db-sync-qvmmx\" (UID: \"de81c9d6-10bd-46d2-ab77-74463359dc5a\") " pod="openstack/glance-db-sync-qvmmx"
Dec 09 12:27:31 crc kubenswrapper[4970]: I1209 12:27:31.714552 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zwkw\" (UniqueName: \"kubernetes.io/projected/de81c9d6-10bd-46d2-ab77-74463359dc5a-kube-api-access-6zwkw\") pod \"glance-db-sync-qvmmx\" (UID: \"de81c9d6-10bd-46d2-ab77-74463359dc5a\") " pod="openstack/glance-db-sync-qvmmx"
Dec 09 12:27:31 crc kubenswrapper[4970]: I1209 12:27:31.891967 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-qvmmx"
Dec 09 12:27:32 crc kubenswrapper[4970]: I1209 12:27:32.544905 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-vqnvh"
Dec 09 12:27:32 crc kubenswrapper[4970]: I1209 12:27:32.611189 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2b3fb48-ea66-4c6f-92ee-b8cd8b320296-operator-scripts\") pod \"c2b3fb48-ea66-4c6f-92ee-b8cd8b320296\" (UID: \"c2b3fb48-ea66-4c6f-92ee-b8cd8b320296\") "
Dec 09 12:27:32 crc kubenswrapper[4970]: I1209 12:27:32.611394 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsf57\" (UniqueName: \"kubernetes.io/projected/c2b3fb48-ea66-4c6f-92ee-b8cd8b320296-kube-api-access-wsf57\") pod \"c2b3fb48-ea66-4c6f-92ee-b8cd8b320296\" (UID: \"c2b3fb48-ea66-4c6f-92ee-b8cd8b320296\") "
Dec 09 12:27:32 crc kubenswrapper[4970]: I1209 12:27:32.612018 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2b3fb48-ea66-4c6f-92ee-b8cd8b320296-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c2b3fb48-ea66-4c6f-92ee-b8cd8b320296" (UID: "c2b3fb48-ea66-4c6f-92ee-b8cd8b320296"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:27:32 crc kubenswrapper[4970]: I1209 12:27:32.619906 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2b3fb48-ea66-4c6f-92ee-b8cd8b320296-kube-api-access-wsf57" (OuterVolumeSpecName: "kube-api-access-wsf57") pod "c2b3fb48-ea66-4c6f-92ee-b8cd8b320296" (UID: "c2b3fb48-ea66-4c6f-92ee-b8cd8b320296"). InnerVolumeSpecName "kube-api-access-wsf57". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:27:32 crc kubenswrapper[4970]: I1209 12:27:32.716202 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsf57\" (UniqueName: \"kubernetes.io/projected/c2b3fb48-ea66-4c6f-92ee-b8cd8b320296-kube-api-access-wsf57\") on node \"crc\" DevicePath \"\""
Dec 09 12:27:32 crc kubenswrapper[4970]: I1209 12:27:32.716258 4970 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2b3fb48-ea66-4c6f-92ee-b8cd8b320296-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 09 12:27:32 crc kubenswrapper[4970]: I1209 12:27:32.786410 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Dec 09 12:27:32 crc kubenswrapper[4970]: I1209 12:27:32.909999 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fd9db"
Dec 09 12:27:32 crc kubenswrapper[4970]: I1209 12:27:32.916535 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-375f-account-create-update-pbtc9"
Dec 09 12:27:32 crc kubenswrapper[4970]: I1209 12:27:32.926712 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7d82-account-create-update-sw6fr"
Dec 09 12:27:32 crc kubenswrapper[4970]: I1209 12:27:32.941046 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-375f-account-create-update-pbtc9" event={"ID":"7dfafdd9-2928-4ca6-b298-fcb9eb37b9ab","Type":"ContainerDied","Data":"37962a5c6b524f99eb8151cf107569eb89b435ff6e605c3a778a5795145ddd6f"}
Dec 09 12:27:32 crc kubenswrapper[4970]: I1209 12:27:32.941098 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37962a5c6b524f99eb8151cf107569eb89b435ff6e605c3a778a5795145ddd6f"
Dec 09 12:27:32 crc kubenswrapper[4970]: I1209 12:27:32.941162 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-375f-account-create-update-pbtc9"
Dec 09 12:27:32 crc kubenswrapper[4970]: I1209 12:27:32.948225 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fd9db" event={"ID":"c9350a1c-3918-4895-8383-f7c306cb6063","Type":"ContainerDied","Data":"3c38dda870dee3ed586007af34bfcb0004defbb170102fd3db8d4c539dbd55b3"}
Dec 09 12:27:32 crc kubenswrapper[4970]: I1209 12:27:32.948288 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c38dda870dee3ed586007af34bfcb0004defbb170102fd3db8d4c539dbd55b3"
Dec 09 12:27:32 crc kubenswrapper[4970]: I1209 12:27:32.948354 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fd9db"
Dec 09 12:27:32 crc kubenswrapper[4970]: I1209 12:27:32.965407 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"bb3e4622-df1b-4d70-8683-8672cebf6666","Type":"ContainerStarted","Data":"35636cffccb3859905b72772fd157d9945aa3aace4a4e5a093020bac53bccfdc"}
Dec 09 12:27:32 crc kubenswrapper[4970]: I1209 12:27:32.967035 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7d82-account-create-update-sw6fr" event={"ID":"a9c45310-fd46-4964-8cfe-e6ab7a1e1971","Type":"ContainerDied","Data":"4b01956575d238701ecb9a9038e6498c0153b369ed7922e5d7cfd0d5035a371c"}
Dec 09 12:27:32 crc kubenswrapper[4970]: I1209 12:27:32.967062 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b01956575d238701ecb9a9038e6498c0153b369ed7922e5d7cfd0d5035a371c"
Dec 09 12:27:32 crc kubenswrapper[4970]: I1209 12:27:32.967115 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7d82-account-create-update-sw6fr"
Dec 09 12:27:32 crc kubenswrapper[4970]: I1209 12:27:32.979701 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-vqnvh" event={"ID":"c2b3fb48-ea66-4c6f-92ee-b8cd8b320296","Type":"ContainerDied","Data":"e147737ea01ce157ad8c2504561bb45688849b3d02083f58d650fd9a4ab7764d"}
Dec 09 12:27:32 crc kubenswrapper[4970]: I1209 12:27:32.979739 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e147737ea01ce157ad8c2504561bb45688849b3d02083f58d650fd9a4ab7764d"
Dec 09 12:27:32 crc kubenswrapper[4970]: I1209 12:27:32.979793 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-vqnvh" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.022282 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftncg\" (UniqueName: \"kubernetes.io/projected/7dfafdd9-2928-4ca6-b298-fcb9eb37b9ab-kube-api-access-ftncg\") pod \"7dfafdd9-2928-4ca6-b298-fcb9eb37b9ab\" (UID: \"7dfafdd9-2928-4ca6-b298-fcb9eb37b9ab\") " Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.022597 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dfafdd9-2928-4ca6-b298-fcb9eb37b9ab-operator-scripts\") pod \"7dfafdd9-2928-4ca6-b298-fcb9eb37b9ab\" (UID: \"7dfafdd9-2928-4ca6-b298-fcb9eb37b9ab\") " Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.022777 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9350a1c-3918-4895-8383-f7c306cb6063-operator-scripts\") pod \"c9350a1c-3918-4895-8383-f7c306cb6063\" (UID: \"c9350a1c-3918-4895-8383-f7c306cb6063\") " Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.022911 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgfj9\" (UniqueName: \"kubernetes.io/projected/c9350a1c-3918-4895-8383-f7c306cb6063-kube-api-access-tgfj9\") pod \"c9350a1c-3918-4895-8383-f7c306cb6063\" (UID: \"c9350a1c-3918-4895-8383-f7c306cb6063\") " Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.023371 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dfafdd9-2928-4ca6-b298-fcb9eb37b9ab-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7dfafdd9-2928-4ca6-b298-fcb9eb37b9ab" (UID: "7dfafdd9-2928-4ca6-b298-fcb9eb37b9ab"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.024645 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9350a1c-3918-4895-8383-f7c306cb6063-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c9350a1c-3918-4895-8383-f7c306cb6063" (UID: "c9350a1c-3918-4895-8383-f7c306cb6063"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.025724 4970 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dfafdd9-2928-4ca6-b298-fcb9eb37b9ab-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.029087 4970 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c9350a1c-3918-4895-8383-f7c306cb6063-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.049043 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dfafdd9-2928-4ca6-b298-fcb9eb37b9ab-kube-api-access-ftncg" (OuterVolumeSpecName: "kube-api-access-ftncg") pod "7dfafdd9-2928-4ca6-b298-fcb9eb37b9ab" (UID: "7dfafdd9-2928-4ca6-b298-fcb9eb37b9ab"). InnerVolumeSpecName "kube-api-access-ftncg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.061559 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9350a1c-3918-4895-8383-f7c306cb6063-kube-api-access-tgfj9" (OuterVolumeSpecName: "kube-api-access-tgfj9") pod "c9350a1c-3918-4895-8383-f7c306cb6063" (UID: "c9350a1c-3918-4895-8383-f7c306cb6063"). InnerVolumeSpecName "kube-api-access-tgfj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.133230 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgtnf\" (UniqueName: \"kubernetes.io/projected/a9c45310-fd46-4964-8cfe-e6ab7a1e1971-kube-api-access-fgtnf\") pod \"a9c45310-fd46-4964-8cfe-e6ab7a1e1971\" (UID: \"a9c45310-fd46-4964-8cfe-e6ab7a1e1971\") " Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.133408 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9c45310-fd46-4964-8cfe-e6ab7a1e1971-operator-scripts\") pod \"a9c45310-fd46-4964-8cfe-e6ab7a1e1971\" (UID: \"a9c45310-fd46-4964-8cfe-e6ab7a1e1971\") " Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.135362 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftncg\" (UniqueName: \"kubernetes.io/projected/7dfafdd9-2928-4ca6-b298-fcb9eb37b9ab-kube-api-access-ftncg\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.135490 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgfj9\" (UniqueName: \"kubernetes.io/projected/c9350a1c-3918-4895-8383-f7c306cb6063-kube-api-access-tgfj9\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.136171 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9c45310-fd46-4964-8cfe-e6ab7a1e1971-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a9c45310-fd46-4964-8cfe-e6ab7a1e1971" (UID: "a9c45310-fd46-4964-8cfe-e6ab7a1e1971"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.140835 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-qvmmx"] Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.145040 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=16.870317337 podStartE2EDuration="1m5.145029389s" podCreationTimestamp="2025-12-09 12:26:28 +0000 UTC" firstStartedPulling="2025-12-09 12:26:43.681716989 +0000 UTC m=+1216.242198040" lastFinishedPulling="2025-12-09 12:27:31.956429041 +0000 UTC m=+1264.516910092" observedRunningTime="2025-12-09 12:27:33.012363827 +0000 UTC m=+1265.572844878" watchObservedRunningTime="2025-12-09 12:27:33.145029389 +0000 UTC m=+1265.705510440" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.177175 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-j77r5"] Dec 09 12:27:33 crc kubenswrapper[4970]: E1209 12:27:33.178186 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2b3fb48-ea66-4c6f-92ee-b8cd8b320296" containerName="mariadb-database-create" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.178208 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2b3fb48-ea66-4c6f-92ee-b8cd8b320296" containerName="mariadb-database-create" Dec 09 12:27:33 crc kubenswrapper[4970]: E1209 12:27:33.178240 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9c45310-fd46-4964-8cfe-e6ab7a1e1971" containerName="mariadb-account-create-update" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.178272 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9c45310-fd46-4964-8cfe-e6ab7a1e1971" containerName="mariadb-account-create-update" Dec 09 12:27:33 crc kubenswrapper[4970]: E1209 12:27:33.178308 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dfafdd9-2928-4ca6-b298-fcb9eb37b9ab" containerName="mariadb-account-create-update" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.178317 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dfafdd9-2928-4ca6-b298-fcb9eb37b9ab" containerName="mariadb-account-create-update" Dec 09 12:27:33 crc kubenswrapper[4970]: E1209 12:27:33.178361 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9350a1c-3918-4895-8383-f7c306cb6063" containerName="mariadb-database-create" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.178370 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9350a1c-3918-4895-8383-f7c306cb6063" containerName="mariadb-database-create" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.178937 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dfafdd9-2928-4ca6-b298-fcb9eb37b9ab" containerName="mariadb-account-create-update" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.178985 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9c45310-fd46-4964-8cfe-e6ab7a1e1971" containerName="mariadb-account-create-update" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.179002 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9350a1c-3918-4895-8383-f7c306cb6063" containerName="mariadb-database-create" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.179036 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2b3fb48-ea66-4c6f-92ee-b8cd8b320296" containerName="mariadb-database-create" Dec 09 12:27:33 crc 
kubenswrapper[4970]: I1209 12:27:33.180194 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-j77r5"] Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.180333 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-j77r5" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.186542 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9c45310-fd46-4964-8cfe-e6ab7a1e1971-kube-api-access-fgtnf" (OuterVolumeSpecName: "kube-api-access-fgtnf") pod "a9c45310-fd46-4964-8cfe-e6ab7a1e1971" (UID: "a9c45310-fd46-4964-8cfe-e6ab7a1e1971"). InnerVolumeSpecName "kube-api-access-fgtnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.237975 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgtnf\" (UniqueName: \"kubernetes.io/projected/a9c45310-fd46-4964-8cfe-e6ab7a1e1971-kube-api-access-fgtnf\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.238294 4970 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9c45310-fd46-4964-8cfe-e6ab7a1e1971-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.281674 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-69sxs"] Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.286814 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-69sxs" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.313709 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-69sxs"] Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.330213 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-618b-account-create-update-k6f7v"] Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.332704 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-618b-account-create-update-k6f7v" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.334977 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.340670 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0c2ad87-9313-4c2d-b691-0c907b91612a-operator-scripts\") pod \"cinder-db-create-j77r5\" (UID: \"c0c2ad87-9313-4c2d-b691-0c907b91612a\") " pod="openstack/cinder-db-create-j77r5" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.340834 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckb9s\" (UniqueName: \"kubernetes.io/projected/c0c2ad87-9313-4c2d-b691-0c907b91612a-kube-api-access-ckb9s\") pod \"cinder-db-create-j77r5\" (UID: \"c0c2ad87-9313-4c2d-b691-0c907b91612a\") " pod="openstack/cinder-db-create-j77r5" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.352412 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-618b-account-create-update-k6f7v"] Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.443231 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2lbp\" (UniqueName: \"kubernetes.io/projected/98ebedad-42a0-4ee4-ade1-fd913bd026d6-kube-api-access-m2lbp\") pod \"cinder-618b-account-create-update-k6f7v\" (UID: \"98ebedad-42a0-4ee4-ade1-fd913bd026d6\") " pod="openstack/cinder-618b-account-create-update-k6f7v" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.443331 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckb9s\" (UniqueName: \"kubernetes.io/projected/c0c2ad87-9313-4c2d-b691-0c907b91612a-kube-api-access-ckb9s\") pod \"cinder-db-create-j77r5\" (UID: \"c0c2ad87-9313-4c2d-b691-0c907b91612a\") " pod="openstack/cinder-db-create-j77r5" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.443419 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb8qr\" (UniqueName: \"kubernetes.io/projected/d91a3aef-64fa-4f87-a85f-64723d75894a-kube-api-access-rb8qr\") pod \"barbican-db-create-69sxs\" (UID: \"d91a3aef-64fa-4f87-a85f-64723d75894a\") " pod="openstack/barbican-db-create-69sxs" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.443681 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98ebedad-42a0-4ee4-ade1-fd913bd026d6-operator-scripts\") pod \"cinder-618b-account-create-update-k6f7v\" (UID: \"98ebedad-42a0-4ee4-ade1-fd913bd026d6\") " pod="openstack/cinder-618b-account-create-update-k6f7v" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.443719 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d91a3aef-64fa-4f87-a85f-64723d75894a-operator-scripts\") pod \"barbican-db-create-69sxs\" (UID: \"d91a3aef-64fa-4f87-a85f-64723d75894a\") " pod="openstack/barbican-db-create-69sxs" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.443753 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/c0c2ad87-9313-4c2d-b691-0c907b91612a-operator-scripts\") pod \"cinder-db-create-j77r5\" (UID: \"c0c2ad87-9313-4c2d-b691-0c907b91612a\") " pod="openstack/cinder-db-create-j77r5" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.444621 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0c2ad87-9313-4c2d-b691-0c907b91612a-operator-scripts\") pod \"cinder-db-create-j77r5\" (UID: \"c0c2ad87-9313-4c2d-b691-0c907b91612a\") " pod="openstack/cinder-db-create-j77r5" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.460532 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckb9s\" (UniqueName: \"kubernetes.io/projected/c0c2ad87-9313-4c2d-b691-0c907b91612a-kube-api-access-ckb9s\") pod \"cinder-db-create-j77r5\" (UID: \"c0c2ad87-9313-4c2d-b691-0c907b91612a\") " pod="openstack/cinder-db-create-j77r5" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.515874 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-j77r5" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.516228 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-67k7r"] Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.517672 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-67k7r" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.541116 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-15fb-account-create-update-nrf89"] Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.542927 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-15fb-account-create-update-nrf89" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.546107 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98ebedad-42a0-4ee4-ade1-fd913bd026d6-operator-scripts\") pod \"cinder-618b-account-create-update-k6f7v\" (UID: \"98ebedad-42a0-4ee4-ade1-fd913bd026d6\") " pod="openstack/cinder-618b-account-create-update-k6f7v" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.546171 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d91a3aef-64fa-4f87-a85f-64723d75894a-operator-scripts\") pod \"barbican-db-create-69sxs\" (UID: \"d91a3aef-64fa-4f87-a85f-64723d75894a\") " pod="openstack/barbican-db-create-69sxs" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.546310 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2lbp\" (UniqueName: \"kubernetes.io/projected/98ebedad-42a0-4ee4-ade1-fd913bd026d6-kube-api-access-m2lbp\") pod \"cinder-618b-account-create-update-k6f7v\" (UID: \"98ebedad-42a0-4ee4-ade1-fd913bd026d6\") " pod="openstack/cinder-618b-account-create-update-k6f7v" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.546420 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb8qr\" (UniqueName: \"kubernetes.io/projected/d91a3aef-64fa-4f87-a85f-64723d75894a-kube-api-access-rb8qr\") pod \"barbican-db-create-69sxs\" (UID: \"d91a3aef-64fa-4f87-a85f-64723d75894a\") " pod="openstack/barbican-db-create-69sxs" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.547626 4970 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98ebedad-42a0-4ee4-ade1-fd913bd026d6-operator-scripts\") pod \"cinder-618b-account-create-update-k6f7v\" (UID: \"98ebedad-42a0-4ee4-ade1-fd913bd026d6\") " pod="openstack/cinder-618b-account-create-update-k6f7v" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.547773 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d91a3aef-64fa-4f87-a85f-64723d75894a-operator-scripts\") pod \"barbican-db-create-69sxs\" (UID: \"d91a3aef-64fa-4f87-a85f-64723d75894a\") " pod="openstack/barbican-db-create-69sxs" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.554142 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.573316 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-67k7r"] Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.575881 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2lbp\" (UniqueName: \"kubernetes.io/projected/98ebedad-42a0-4ee4-ade1-fd913bd026d6-kube-api-access-m2lbp\") pod \"cinder-618b-account-create-update-k6f7v\" (UID: \"98ebedad-42a0-4ee4-ade1-fd913bd026d6\") " pod="openstack/cinder-618b-account-create-update-k6f7v" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.582274 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb8qr\" (UniqueName: \"kubernetes.io/projected/d91a3aef-64fa-4f87-a85f-64723d75894a-kube-api-access-rb8qr\") pod \"barbican-db-create-69sxs\" (UID: \"d91a3aef-64fa-4f87-a85f-64723d75894a\") " pod="openstack/barbican-db-create-69sxs" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.623305 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-15fb-account-create-update-nrf89"] Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.623993 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-69sxs" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.648344 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b640fc8-1ab1-4536-afe8-6b8c4c01c32a-operator-scripts\") pod \"heat-db-create-67k7r\" (UID: \"2b640fc8-1ab1-4536-afe8-6b8c4c01c32a\") " pod="openstack/heat-db-create-67k7r" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.648443 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85prb\" (UniqueName: \"kubernetes.io/projected/883c54c6-9452-4305-94e9-13d5cefd22c8-kube-api-access-85prb\") pod \"heat-15fb-account-create-update-nrf89\" (UID: \"883c54c6-9452-4305-94e9-13d5cefd22c8\") " pod="openstack/heat-15fb-account-create-update-nrf89" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.648595 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bksg\" (UniqueName: \"kubernetes.io/projected/2b640fc8-1ab1-4536-afe8-6b8c4c01c32a-kube-api-access-5bksg\") pod \"heat-db-create-67k7r\" (UID: \"2b640fc8-1ab1-4536-afe8-6b8c4c01c32a\") " pod="openstack/heat-db-create-67k7r" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.648692 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/883c54c6-9452-4305-94e9-13d5cefd22c8-operator-scripts\") pod \"heat-15fb-account-create-update-nrf89\" (UID: \"883c54c6-9452-4305-94e9-13d5cefd22c8\") " pod="openstack/heat-15fb-account-create-update-nrf89" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.652564 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-68fgf"] Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.655410 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-618b-account-create-update-k6f7v" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.661756 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-68fgf" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.663446 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-68fgf"] Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.683139 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-3015-account-create-update-xs7kl"] Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.684706 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-3015-account-create-update-xs7kl" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.687060 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.709393 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3015-account-create-update-xs7kl"] Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.750756 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/883c54c6-9452-4305-94e9-13d5cefd22c8-operator-scripts\") pod \"heat-15fb-account-create-update-nrf89\" (UID: \"883c54c6-9452-4305-94e9-13d5cefd22c8\") " pod="openstack/heat-15fb-account-create-update-nrf89" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.750838 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/965b8097-10dd-4a96-b33b-a0b6a3d5e35f-operator-scripts\") pod \"neutron-db-create-68fgf\" (UID: \"965b8097-10dd-4a96-b33b-a0b6a3d5e35f\") " pod="openstack/neutron-db-create-68fgf" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.750888 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd25d\" (UniqueName: \"kubernetes.io/projected/965b8097-10dd-4a96-b33b-a0b6a3d5e35f-kube-api-access-nd25d\") pod \"neutron-db-create-68fgf\" (UID: \"965b8097-10dd-4a96-b33b-a0b6a3d5e35f\") " pod="openstack/neutron-db-create-68fgf" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.750939 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e1b5a6d-bae4-4e66-9b80-1a551d907036-operator-scripts\") pod \"barbican-3015-account-create-update-xs7kl\" (UID: \"6e1b5a6d-bae4-4e66-9b80-1a551d907036\") " pod="openstack/barbican-3015-account-create-update-xs7kl" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.751016 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b640fc8-1ab1-4536-afe8-6b8c4c01c32a-operator-scripts\") pod \"heat-db-create-67k7r\" (UID: \"2b640fc8-1ab1-4536-afe8-6b8c4c01c32a\") " pod="openstack/heat-db-create-67k7r" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.751069 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85prb\" (UniqueName: \"kubernetes.io/projected/883c54c6-9452-4305-94e9-13d5cefd22c8-kube-api-access-85prb\") pod \"heat-15fb-account-create-update-nrf89\" (UID: \"883c54c6-9452-4305-94e9-13d5cefd22c8\") " pod="openstack/heat-15fb-account-create-update-nrf89" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.751107 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptkpz\" (UniqueName: \"kubernetes.io/projected/6e1b5a6d-bae4-4e66-9b80-1a551d907036-kube-api-access-ptkpz\") pod \"barbican-3015-account-create-update-xs7kl\" (UID: \"6e1b5a6d-bae4-4e66-9b80-1a551d907036\") " pod="openstack/barbican-3015-account-create-update-xs7kl" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.751185 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bksg\" (UniqueName: 
\"kubernetes.io/projected/2b640fc8-1ab1-4536-afe8-6b8c4c01c32a-kube-api-access-5bksg\") pod \"heat-db-create-67k7r\" (UID: \"2b640fc8-1ab1-4536-afe8-6b8c4c01c32a\") " pod="openstack/heat-db-create-67k7r" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.752962 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/883c54c6-9452-4305-94e9-13d5cefd22c8-operator-scripts\") pod \"heat-15fb-account-create-update-nrf89\" (UID: \"883c54c6-9452-4305-94e9-13d5cefd22c8\") " pod="openstack/heat-15fb-account-create-update-nrf89" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.764690 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b640fc8-1ab1-4536-afe8-6b8c4c01c32a-operator-scripts\") pod \"heat-db-create-67k7r\" (UID: \"2b640fc8-1ab1-4536-afe8-6b8c4c01c32a\") " pod="openstack/heat-db-create-67k7r" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.792930 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bksg\" (UniqueName: \"kubernetes.io/projected/2b640fc8-1ab1-4536-afe8-6b8c4c01c32a-kube-api-access-5bksg\") pod \"heat-db-create-67k7r\" (UID: \"2b640fc8-1ab1-4536-afe8-6b8c4c01c32a\") " pod="openstack/heat-db-create-67k7r" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.813609 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85prb\" (UniqueName: \"kubernetes.io/projected/883c54c6-9452-4305-94e9-13d5cefd22c8-kube-api-access-85prb\") pod \"heat-15fb-account-create-update-nrf89\" (UID: \"883c54c6-9452-4305-94e9-13d5cefd22c8\") " pod="openstack/heat-15fb-account-create-update-nrf89" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.841684 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-67k7r" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.854570 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e1b5a6d-bae4-4e66-9b80-1a551d907036-operator-scripts\") pod \"barbican-3015-account-create-update-xs7kl\" (UID: \"6e1b5a6d-bae4-4e66-9b80-1a551d907036\") " pod="openstack/barbican-3015-account-create-update-xs7kl" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.854671 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptkpz\" (UniqueName: \"kubernetes.io/projected/6e1b5a6d-bae4-4e66-9b80-1a551d907036-kube-api-access-ptkpz\") pod \"barbican-3015-account-create-update-xs7kl\" (UID: \"6e1b5a6d-bae4-4e66-9b80-1a551d907036\") " pod="openstack/barbican-3015-account-create-update-xs7kl" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.854821 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/965b8097-10dd-4a96-b33b-a0b6a3d5e35f-operator-scripts\") pod \"neutron-db-create-68fgf\" (UID: \"965b8097-10dd-4a96-b33b-a0b6a3d5e35f\") " pod="openstack/neutron-db-create-68fgf" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.854854 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd25d\" (UniqueName: \"kubernetes.io/projected/965b8097-10dd-4a96-b33b-a0b6a3d5e35f-kube-api-access-nd25d\") pod \"neutron-db-create-68fgf\" (UID: \"965b8097-10dd-4a96-b33b-a0b6a3d5e35f\") " pod="openstack/neutron-db-create-68fgf" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.856111 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e1b5a6d-bae4-4e66-9b80-1a551d907036-operator-scripts\") pod \"barbican-3015-account-create-update-xs7kl\" (UID: \"6e1b5a6d-bae4-4e66-9b80-1a551d907036\") " pod="openstack/barbican-3015-account-create-update-xs7kl" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.856807 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/965b8097-10dd-4a96-b33b-a0b6a3d5e35f-operator-scripts\") pod \"neutron-db-create-68fgf\" (UID: \"965b8097-10dd-4a96-b33b-a0b6a3d5e35f\") " pod="openstack/neutron-db-create-68fgf" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.867760 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-15fb-account-create-update-nrf89" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.877794 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-4118-account-create-update-bzvb2"] Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.885698 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-4118-account-create-update-bzvb2" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.889618 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.897590 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptkpz\" (UniqueName: \"kubernetes.io/projected/6e1b5a6d-bae4-4e66-9b80-1a551d907036-kube-api-access-ptkpz\") pod \"barbican-3015-account-create-update-xs7kl\" (UID: \"6e1b5a6d-bae4-4e66-9b80-1a551d907036\") " pod="openstack/barbican-3015-account-create-update-xs7kl" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.907143 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd25d\" (UniqueName: \"kubernetes.io/projected/965b8097-10dd-4a96-b33b-a0b6a3d5e35f-kube-api-access-nd25d\") pod \"neutron-db-create-68fgf\" (UID: \"965b8097-10dd-4a96-b33b-a0b6a3d5e35f\") " pod="openstack/neutron-db-create-68fgf" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.957029 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33e973b8-701c-4376-9b78-c575d37f901b-operator-scripts\") pod \"neutron-4118-account-create-update-bzvb2\" (UID: \"33e973b8-701c-4376-9b78-c575d37f901b\") " pod="openstack/neutron-4118-account-create-update-bzvb2" Dec 09 12:27:33 crc kubenswrapper[4970]: I1209 12:27:33.957412 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9ld6\" (UniqueName: \"kubernetes.io/projected/33e973b8-701c-4376-9b78-c575d37f901b-kube-api-access-g9ld6\") pod \"neutron-4118-account-create-update-bzvb2\" (UID: \"33e973b8-701c-4376-9b78-c575d37f901b\") " pod="openstack/neutron-4118-account-create-update-bzvb2" Dec 09 12:27:34 crc kubenswrapper[4970]: I1209 12:27:33.998513 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-4118-account-create-update-bzvb2"] Dec 09 12:27:34 crc kubenswrapper[4970]: I1209 12:27:34.004863 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-68fgf" Dec 09 12:27:34 crc kubenswrapper[4970]: I1209 12:27:34.019408 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qvmmx" event={"ID":"de81c9d6-10bd-46d2-ab77-74463359dc5a","Type":"ContainerStarted","Data":"0af3ad122578b7c273bbf9e59d42a2c31ee9e18beec5c76cbb597580b023388d"} Dec 09 12:27:34 crc kubenswrapper[4970]: I1209 12:27:34.031097 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-3015-account-create-update-xs7kl" Dec 09 12:27:34 crc kubenswrapper[4970]: I1209 12:27:34.061881 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33e973b8-701c-4376-9b78-c575d37f901b-operator-scripts\") pod \"neutron-4118-account-create-update-bzvb2\" (UID: \"33e973b8-701c-4376-9b78-c575d37f901b\") " pod="openstack/neutron-4118-account-create-update-bzvb2" Dec 09 12:27:34 crc kubenswrapper[4970]: I1209 12:27:34.062048 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9ld6\" (UniqueName: \"kubernetes.io/projected/33e973b8-701c-4376-9b78-c575d37f901b-kube-api-access-g9ld6\") pod \"neutron-4118-account-create-update-bzvb2\" (UID: \"33e973b8-701c-4376-9b78-c575d37f901b\") " pod="openstack/neutron-4118-account-create-update-bzvb2" Dec 09 12:27:34 crc kubenswrapper[4970]: I1209 12:27:34.063235 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33e973b8-701c-4376-9b78-c575d37f901b-operator-scripts\") pod \"neutron-4118-account-create-update-bzvb2\" (UID: \"33e973b8-701c-4376-9b78-c575d37f901b\") " pod="openstack/neutron-4118-account-create-update-bzvb2" Dec 09 12:27:34 crc kubenswrapper[4970]: I1209 12:27:34.102458 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9ld6\" (UniqueName: \"kubernetes.io/projected/33e973b8-701c-4376-9b78-c575d37f901b-kube-api-access-g9ld6\") pod \"neutron-4118-account-create-update-bzvb2\" (UID: \"33e973b8-701c-4376-9b78-c575d37f901b\") " pod="openstack/neutron-4118-account-create-update-bzvb2" Dec 09 12:27:34 crc kubenswrapper[4970]: I1209 12:27:34.348182 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4118-account-create-update-bzvb2" Dec 09 12:27:34 crc kubenswrapper[4970]: I1209 12:27:34.355293 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-j77r5"] Dec 09 12:27:34 crc kubenswrapper[4970]: I1209 12:27:34.386327 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Dec 09 12:27:34 crc kubenswrapper[4970]: I1209 12:27:34.389450 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Dec 09 12:27:34 crc kubenswrapper[4970]: I1209 12:27:34.420121 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Dec 09 12:27:34 crc kubenswrapper[4970]: I1209 12:27:34.448675 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Dec 09 12:27:34 crc kubenswrapper[4970]: I1209 12:27:34.487079 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq685\" (UniqueName: \"kubernetes.io/projected/581d4f17-168a-463d-8de2-10e3e31a590d-kube-api-access-xq685\") pod \"mysqld-exporter-0\" (UID: \"581d4f17-168a-463d-8de2-10e3e31a590d\") " pod="openstack/mysqld-exporter-0" Dec 09 12:27:34 crc kubenswrapper[4970]: I1209 12:27:34.487135 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/581d4f17-168a-463d-8de2-10e3e31a590d-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"581d4f17-168a-463d-8de2-10e3e31a590d\") " pod="openstack/mysqld-exporter-0" Dec 09 12:27:34 crc kubenswrapper[4970]: I1209 12:27:34.487181 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/581d4f17-168a-463d-8de2-10e3e31a590d-config-data\") pod \"mysqld-exporter-0\" (UID: \"581d4f17-168a-463d-8de2-10e3e31a590d\") " pod="openstack/mysqld-exporter-0" Dec 09 12:27:34 crc kubenswrapper[4970]: I1209 12:27:34.535616 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:34 crc kubenswrapper[4970]: I1209 12:27:34.596124 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq685\" (UniqueName: \"kubernetes.io/projected/581d4f17-168a-463d-8de2-10e3e31a590d-kube-api-access-xq685\") pod \"mysqld-exporter-0\" (UID: \"581d4f17-168a-463d-8de2-10e3e31a590d\") " pod="openstack/mysqld-exporter-0" Dec 09 12:27:34 crc kubenswrapper[4970]: I1209 12:27:34.596175 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/581d4f17-168a-463d-8de2-10e3e31a590d-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"581d4f17-168a-463d-8de2-10e3e31a590d\") " pod="openstack/mysqld-exporter-0" Dec 09 12:27:34 crc kubenswrapper[4970]: I1209 12:27:34.596219 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/581d4f17-168a-463d-8de2-10e3e31a590d-config-data\") pod \"mysqld-exporter-0\" (UID: \"581d4f17-168a-463d-8de2-10e3e31a590d\") " pod="openstack/mysqld-exporter-0" Dec 09 12:27:34 crc kubenswrapper[4970]: I1209 12:27:34.598276 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-69sxs"] Dec 09 12:27:34 crc kubenswrapper[4970]: W1209 12:27:34.599109 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd91a3aef_64fa_4f87_a85f_64723d75894a.slice/crio-767892798fe2e0f6fc1e9ff84c0a7039a3f4549a8a19fa1aadfdfa96a001b599 WatchSource:0}: Error finding container 767892798fe2e0f6fc1e9ff84c0a7039a3f4549a8a19fa1aadfdfa96a001b599: Status 404 returned error can't find the container with id 767892798fe2e0f6fc1e9ff84c0a7039a3f4549a8a19fa1aadfdfa96a001b599 Dec 09 
12:27:34 crc kubenswrapper[4970]: I1209 12:27:34.602456 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/581d4f17-168a-463d-8de2-10e3e31a590d-config-data\") pod \"mysqld-exporter-0\" (UID: \"581d4f17-168a-463d-8de2-10e3e31a590d\") " pod="openstack/mysqld-exporter-0" Dec 09 12:27:34 crc kubenswrapper[4970]: I1209 12:27:34.603448 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/581d4f17-168a-463d-8de2-10e3e31a590d-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"581d4f17-168a-463d-8de2-10e3e31a590d\") " pod="openstack/mysqld-exporter-0" Dec 09 12:27:34 crc kubenswrapper[4970]: I1209 12:27:34.614716 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-618b-account-create-update-k6f7v"] Dec 09 12:27:34 crc kubenswrapper[4970]: I1209 12:27:34.626638 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq685\" (UniqueName: \"kubernetes.io/projected/581d4f17-168a-463d-8de2-10e3e31a590d-kube-api-access-xq685\") pod \"mysqld-exporter-0\" (UID: \"581d4f17-168a-463d-8de2-10e3e31a590d\") " pod="openstack/mysqld-exporter-0" Dec 09 12:27:34 crc kubenswrapper[4970]: I1209 12:27:34.792139 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Dec 09 12:27:34 crc kubenswrapper[4970]: I1209 12:27:34.873107 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-67k7r"] Dec 09 12:27:34 crc kubenswrapper[4970]: I1209 12:27:34.948417 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-68fgf"] Dec 09 12:27:35 crc kubenswrapper[4970]: I1209 12:27:35.006321 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bee29a58-7867-4543-bb4e-c19528625b1a-etc-swift\") pod \"swift-storage-0\" (UID: \"bee29a58-7867-4543-bb4e-c19528625b1a\") " pod="openstack/swift-storage-0" Dec 09 12:27:35 crc kubenswrapper[4970]: E1209 12:27:35.006580 4970 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 09 12:27:35 crc kubenswrapper[4970]: E1209 12:27:35.006612 4970 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 09 12:27:35 crc kubenswrapper[4970]: E1209 12:27:35.006670 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bee29a58-7867-4543-bb4e-c19528625b1a-etc-swift podName:bee29a58-7867-4543-bb4e-c19528625b1a nodeName:}" failed. No retries permitted until 2025-12-09 12:27:51.006651216 +0000 UTC m=+1283.567132267 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/bee29a58-7867-4543-bb4e-c19528625b1a-etc-swift") pod "swift-storage-0" (UID: "bee29a58-7867-4543-bb4e-c19528625b1a") : configmap "swift-ring-files" not found Dec 09 12:27:35 crc kubenswrapper[4970]: W1209 12:27:35.024129 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod965b8097_10dd_4a96_b33b_a0b6a3d5e35f.slice/crio-048fecac0fc66bf6d5b1fc99ad91064f8627a4e1ad9cfd8d48407828fe0eb3ff WatchSource:0}: Error finding container 048fecac0fc66bf6d5b1fc99ad91064f8627a4e1ad9cfd8d48407828fe0eb3ff: Status 404 returned error can't find the container with id 048fecac0fc66bf6d5b1fc99ad91064f8627a4e1ad9cfd8d48407828fe0eb3ff Dec 09 12:27:35 crc kubenswrapper[4970]: I1209 12:27:35.067548 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-618b-account-create-update-k6f7v" event={"ID":"98ebedad-42a0-4ee4-ade1-fd913bd026d6","Type":"ContainerStarted","Data":"42d40978701d662a5b06d3a0bfa2d40cfb6aea03e1c9781801e62e3e30ee344c"} Dec 09 12:27:35 crc kubenswrapper[4970]: I1209 12:27:35.067597 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-618b-account-create-update-k6f7v" event={"ID":"98ebedad-42a0-4ee4-ade1-fd913bd026d6","Type":"ContainerStarted","Data":"3a57ff79b21502e684b9de3aa0d7b9a85982d3b6cbb35e75a9dd33214ddf4429"} Dec 09 12:27:35 crc kubenswrapper[4970]: I1209 12:27:35.074010 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-69sxs" event={"ID":"d91a3aef-64fa-4f87-a85f-64723d75894a","Type":"ContainerStarted","Data":"767892798fe2e0f6fc1e9ff84c0a7039a3f4549a8a19fa1aadfdfa96a001b599"} Dec 09 12:27:35 crc kubenswrapper[4970]: I1209 12:27:35.102857 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-j77r5" event={"ID":"c0c2ad87-9313-4c2d-b691-0c907b91612a","Type":"ContainerStarted","Data":"d5014de01302e3ee549b56bb1e3bfbcfc669a6135c51ec2866bd62aabd776ad0"} Dec 09 12:27:35 crc kubenswrapper[4970]: I1209 12:27:35.102894 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-j77r5" event={"ID":"c0c2ad87-9313-4c2d-b691-0c907b91612a","Type":"ContainerStarted","Data":"27251a4f9b0ee8b833057042af5c13ef35c382bc1e1422ecb237bb702434421a"} Dec 09 12:27:35 crc kubenswrapper[4970]: I1209 12:27:35.119892 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-67k7r" event={"ID":"2b640fc8-1ab1-4536-afe8-6b8c4c01c32a","Type":"ContainerStarted","Data":"26f5c1321834c3e58da2e051e46c53e70fe2bea7e00dfce00de0ab1d8c292314"} Dec 09 12:27:35 crc kubenswrapper[4970]: I1209 12:27:35.131237 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-618b-account-create-update-k6f7v" podStartSLOduration=2.131218989 podStartE2EDuration="2.131218989s" podCreationTimestamp="2025-12-09 12:27:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:27:35.100630581 +0000 UTC m=+1267.661111632" watchObservedRunningTime="2025-12-09 12:27:35.131218989 +0000 UTC m=+1267.691700030" Dec 09 12:27:35 crc kubenswrapper[4970]: I1209 12:27:35.146520 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-3015-account-create-update-xs7kl"] Dec 09 12:27:35 crc kubenswrapper[4970]: I1209 12:27:35.155570 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/heat-15fb-account-create-update-nrf89"] Dec 09 12:27:35 crc kubenswrapper[4970]: I1209 12:27:35.165335 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-j77r5" podStartSLOduration=2.165311932 podStartE2EDuration="2.165311932s" podCreationTimestamp="2025-12-09 12:27:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:27:35.127977501 +0000 UTC m=+1267.688458552" watchObservedRunningTime="2025-12-09 12:27:35.165311932 +0000 UTC m=+1267.725792983" Dec 09 12:27:35 crc kubenswrapper[4970]: W1209 12:27:35.183990 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod883c54c6_9452_4305_94e9_13d5cefd22c8.slice/crio-98ea8d297207b901cc052904951fe3992b2659d84ad606b8c646c46e0cf99150 WatchSource:0}: Error finding container 98ea8d297207b901cc052904951fe3992b2659d84ad606b8c646c46e0cf99150: Status 404 returned error can't find the container with id 98ea8d297207b901cc052904951fe3992b2659d84ad606b8c646c46e0cf99150 Dec 09 12:27:35 crc kubenswrapper[4970]: I1209 12:27:35.285687 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-4118-account-create-update-bzvb2"] Dec 09 12:27:35 crc kubenswrapper[4970]: I1209 12:27:35.532171 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Dec 09 12:27:36 crc kubenswrapper[4970]: I1209 12:27:36.132178 4970 generic.go:334] "Generic (PLEG): container finished" podID="d91a3aef-64fa-4f87-a85f-64723d75894a" containerID="bab086407cb1e5638c53046c47ccb87af2364b0463595cc65951b194d6623114" exitCode=0 Dec 09 12:27:36 crc kubenswrapper[4970]: I1209 12:27:36.132346 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-69sxs" event={"ID":"d91a3aef-64fa-4f87-a85f-64723d75894a","Type":"ContainerDied","Data":"bab086407cb1e5638c53046c47ccb87af2364b0463595cc65951b194d6623114"} Dec 09 12:27:36 crc kubenswrapper[4970]: I1209 12:27:36.137191 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"581d4f17-168a-463d-8de2-10e3e31a590d","Type":"ContainerStarted","Data":"b64fb2046325da78976b710058ecf5c27f89aa0f18e47f86bb5dcb0cccc4e126"} Dec 09 12:27:36 crc kubenswrapper[4970]: I1209 12:27:36.139385 4970 generic.go:334] "Generic (PLEG): container finished" podID="6e1b5a6d-bae4-4e66-9b80-1a551d907036" containerID="a21ea59e5797cd0d4f50af32ab388990c7e45331a95bd1718b87235bb932ef3f" exitCode=0 Dec 09 12:27:36 crc kubenswrapper[4970]: I1209 12:27:36.139453 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3015-account-create-update-xs7kl" event={"ID":"6e1b5a6d-bae4-4e66-9b80-1a551d907036","Type":"ContainerDied","Data":"a21ea59e5797cd0d4f50af32ab388990c7e45331a95bd1718b87235bb932ef3f"} Dec 09 12:27:36 crc kubenswrapper[4970]: I1209 12:27:36.139509 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3015-account-create-update-xs7kl" event={"ID":"6e1b5a6d-bae4-4e66-9b80-1a551d907036","Type":"ContainerStarted","Data":"82fb746b91bb69b88a21cb907c2cdf048489bb2158d6859b301de61a23066d3f"} Dec 09 12:27:36 crc kubenswrapper[4970]: I1209 12:27:36.144179 4970 generic.go:334] "Generic (PLEG): container finished" podID="33e973b8-701c-4376-9b78-c575d37f901b" containerID="a8b5a02f6c119357255405756cbad1a9bc2203cf0c1237a87424276af506ff74" exitCode=0 Dec 09 12:27:36 crc 
kubenswrapper[4970]: I1209 12:27:36.144232 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4118-account-create-update-bzvb2" event={"ID":"33e973b8-701c-4376-9b78-c575d37f901b","Type":"ContainerDied","Data":"a8b5a02f6c119357255405756cbad1a9bc2203cf0c1237a87424276af506ff74"} Dec 09 12:27:36 crc kubenswrapper[4970]: I1209 12:27:36.144267 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4118-account-create-update-bzvb2" event={"ID":"33e973b8-701c-4376-9b78-c575d37f901b","Type":"ContainerStarted","Data":"04378c49c5fef66134e91211e2c882e7d74e43f8b4966d619c2f292cb6d3a69e"} Dec 09 12:27:36 crc kubenswrapper[4970]: I1209 12:27:36.149298 4970 generic.go:334] "Generic (PLEG): container finished" podID="883c54c6-9452-4305-94e9-13d5cefd22c8" containerID="9b0c8785223be68c19868333d78fb8be58ac176ad24bd768f7dba22251390577" exitCode=0 Dec 09 12:27:36 crc kubenswrapper[4970]: I1209 12:27:36.149465 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-15fb-account-create-update-nrf89" event={"ID":"883c54c6-9452-4305-94e9-13d5cefd22c8","Type":"ContainerDied","Data":"9b0c8785223be68c19868333d78fb8be58ac176ad24bd768f7dba22251390577"} Dec 09 12:27:36 crc kubenswrapper[4970]: I1209 12:27:36.149522 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-15fb-account-create-update-nrf89" event={"ID":"883c54c6-9452-4305-94e9-13d5cefd22c8","Type":"ContainerStarted","Data":"98ea8d297207b901cc052904951fe3992b2659d84ad606b8c646c46e0cf99150"} Dec 09 12:27:36 crc kubenswrapper[4970]: I1209 12:27:36.155944 4970 generic.go:334] "Generic (PLEG): container finished" podID="98ebedad-42a0-4ee4-ade1-fd913bd026d6" containerID="42d40978701d662a5b06d3a0bfa2d40cfb6aea03e1c9781801e62e3e30ee344c" exitCode=0 Dec 09 12:27:36 crc kubenswrapper[4970]: I1209 12:27:36.156188 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-618b-account-create-update-k6f7v" event={"ID":"98ebedad-42a0-4ee4-ade1-fd913bd026d6","Type":"ContainerDied","Data":"42d40978701d662a5b06d3a0bfa2d40cfb6aea03e1c9781801e62e3e30ee344c"} Dec 09 12:27:36 crc kubenswrapper[4970]: I1209 12:27:36.157772 4970 generic.go:334] "Generic (PLEG): container finished" podID="965b8097-10dd-4a96-b33b-a0b6a3d5e35f" containerID="458c6847e526a6c0c70552dd9f86d15eff7f762ee6d909197a05005ff05556da" exitCode=0 Dec 09 12:27:36 crc kubenswrapper[4970]: I1209 12:27:36.157893 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-68fgf" event={"ID":"965b8097-10dd-4a96-b33b-a0b6a3d5e35f","Type":"ContainerDied","Data":"458c6847e526a6c0c70552dd9f86d15eff7f762ee6d909197a05005ff05556da"} Dec 09 12:27:36 crc kubenswrapper[4970]: I1209 12:27:36.158088 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-68fgf" event={"ID":"965b8097-10dd-4a96-b33b-a0b6a3d5e35f","Type":"ContainerStarted","Data":"048fecac0fc66bf6d5b1fc99ad91064f8627a4e1ad9cfd8d48407828fe0eb3ff"} Dec 09 12:27:36 crc kubenswrapper[4970]: I1209 12:27:36.162113 4970 generic.go:334] "Generic (PLEG): container finished" podID="c0c2ad87-9313-4c2d-b691-0c907b91612a" containerID="d5014de01302e3ee549b56bb1e3bfbcfc669a6135c51ec2866bd62aabd776ad0" exitCode=0 Dec 09 12:27:36 crc kubenswrapper[4970]: I1209 12:27:36.162193 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-j77r5" event={"ID":"c0c2ad87-9313-4c2d-b691-0c907b91612a","Type":"ContainerDied","Data":"d5014de01302e3ee549b56bb1e3bfbcfc669a6135c51ec2866bd62aabd776ad0"} Dec 09 
12:27:36 crc kubenswrapper[4970]: I1209 12:27:36.168560 4970 generic.go:334] "Generic (PLEG): container finished" podID="2b640fc8-1ab1-4536-afe8-6b8c4c01c32a" containerID="fe47a908b5bd18e49ce18207ded9b53676917990d658fcdd7786aa3d96ae23d0" exitCode=0 Dec 09 12:27:36 crc kubenswrapper[4970]: I1209 12:27:36.168692 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-67k7r" event={"ID":"2b640fc8-1ab1-4536-afe8-6b8c4c01c32a","Type":"ContainerDied","Data":"fe47a908b5bd18e49ce18207ded9b53676917990d658fcdd7786aa3d96ae23d0"} Dec 09 12:27:36 crc kubenswrapper[4970]: I1209 12:27:36.450310 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-k4956"] Dec 09 12:27:36 crc kubenswrapper[4970]: I1209 12:27:36.454053 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-k4956" Dec 09 12:27:36 crc kubenswrapper[4970]: I1209 12:27:36.457013 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 09 12:27:36 crc kubenswrapper[4970]: I1209 12:27:36.457292 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-m6jjm" Dec 09 12:27:36 crc kubenswrapper[4970]: I1209 12:27:36.457835 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 09 12:27:36 crc kubenswrapper[4970]: I1209 12:27:36.465547 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 09 12:27:36 crc kubenswrapper[4970]: I1209 12:27:36.467854 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-k4956"] Dec 09 12:27:36 crc kubenswrapper[4970]: I1209 12:27:36.576954 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a416bd4a-683a-43cf-867a-fb60427671a4-config-data\") pod \"keystone-db-sync-k4956\" (UID: \"a416bd4a-683a-43cf-867a-fb60427671a4\") " pod="openstack/keystone-db-sync-k4956" Dec 09 12:27:36 crc kubenswrapper[4970]: I1209 12:27:36.577180 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a416bd4a-683a-43cf-867a-fb60427671a4-combined-ca-bundle\") pod \"keystone-db-sync-k4956\" (UID: \"a416bd4a-683a-43cf-867a-fb60427671a4\") " pod="openstack/keystone-db-sync-k4956" Dec 09 12:27:36 crc kubenswrapper[4970]: I1209 12:27:36.577221 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rxq2\" (UniqueName: \"kubernetes.io/projected/a416bd4a-683a-43cf-867a-fb60427671a4-kube-api-access-9rxq2\") pod \"keystone-db-sync-k4956\" (UID: \"a416bd4a-683a-43cf-867a-fb60427671a4\") " pod="openstack/keystone-db-sync-k4956" Dec 09 12:27:36 crc kubenswrapper[4970]: I1209 12:27:36.680652 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a416bd4a-683a-43cf-867a-fb60427671a4-config-data\") pod \"keystone-db-sync-k4956\" (UID: \"a416bd4a-683a-43cf-867a-fb60427671a4\") " pod="openstack/keystone-db-sync-k4956" Dec 09 12:27:36 crc kubenswrapper[4970]: I1209 12:27:36.680884 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a416bd4a-683a-43cf-867a-fb60427671a4-combined-ca-bundle\") pod 
\"keystone-db-sync-k4956\" (UID: \"a416bd4a-683a-43cf-867a-fb60427671a4\") " pod="openstack/keystone-db-sync-k4956" Dec 09 12:27:36 crc kubenswrapper[4970]: I1209 12:27:36.680933 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rxq2\" (UniqueName: \"kubernetes.io/projected/a416bd4a-683a-43cf-867a-fb60427671a4-kube-api-access-9rxq2\") pod \"keystone-db-sync-k4956\" (UID: \"a416bd4a-683a-43cf-867a-fb60427671a4\") " pod="openstack/keystone-db-sync-k4956" Dec 09 12:27:36 crc kubenswrapper[4970]: I1209 12:27:36.694419 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a416bd4a-683a-43cf-867a-fb60427671a4-combined-ca-bundle\") pod \"keystone-db-sync-k4956\" (UID: \"a416bd4a-683a-43cf-867a-fb60427671a4\") " pod="openstack/keystone-db-sync-k4956" Dec 09 12:27:36 crc kubenswrapper[4970]: I1209 12:27:36.702001 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a416bd4a-683a-43cf-867a-fb60427671a4-config-data\") pod \"keystone-db-sync-k4956\" (UID: \"a416bd4a-683a-43cf-867a-fb60427671a4\") " pod="openstack/keystone-db-sync-k4956" Dec 09 12:27:36 crc kubenswrapper[4970]: I1209 12:27:36.704545 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rxq2\" (UniqueName: \"kubernetes.io/projected/a416bd4a-683a-43cf-867a-fb60427671a4-kube-api-access-9rxq2\") pod \"keystone-db-sync-k4956\" (UID: \"a416bd4a-683a-43cf-867a-fb60427671a4\") " pod="openstack/keystone-db-sync-k4956" Dec 09 12:27:36 crc kubenswrapper[4970]: I1209 12:27:36.794782 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-k4956" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.118009 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.194173 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-618b-account-create-update-k6f7v" event={"ID":"98ebedad-42a0-4ee4-ade1-fd913bd026d6","Type":"ContainerDied","Data":"3a57ff79b21502e684b9de3aa0d7b9a85982d3b6cbb35e75a9dd33214ddf4429"} Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.194228 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a57ff79b21502e684b9de3aa0d7b9a85982d3b6cbb35e75a9dd33214ddf4429" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.197261 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"581d4f17-168a-463d-8de2-10e3e31a590d","Type":"ContainerStarted","Data":"26c5e039a23b7c1cdc956da09bd71a6bf631668f021abe0fea2f1eb90c7a5c79"} Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.201617 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-j77r5" event={"ID":"c0c2ad87-9313-4c2d-b691-0c907b91612a","Type":"ContainerDied","Data":"27251a4f9b0ee8b833057042af5c13ef35c382bc1e1422ecb237bb702434421a"} Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.201653 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27251a4f9b0ee8b833057042af5c13ef35c382bc1e1422ecb237bb702434421a" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.210461 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-3015-account-create-update-xs7kl" 
event={"ID":"6e1b5a6d-bae4-4e66-9b80-1a551d907036","Type":"ContainerDied","Data":"82fb746b91bb69b88a21cb907c2cdf048489bb2158d6859b301de61a23066d3f"} Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.210513 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82fb746b91bb69b88a21cb907c2cdf048489bb2158d6859b301de61a23066d3f" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.223774 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4118-account-create-update-bzvb2" event={"ID":"33e973b8-701c-4376-9b78-c575d37f901b","Type":"ContainerDied","Data":"04378c49c5fef66134e91211e2c882e7d74e43f8b4966d619c2f292cb6d3a69e"} Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.223824 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04378c49c5fef66134e91211e2c882e7d74e43f8b4966d619c2f292cb6d3a69e" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.228203 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.643397815 podStartE2EDuration="4.228185128s" podCreationTimestamp="2025-12-09 12:27:34 +0000 UTC" firstStartedPulling="2025-12-09 12:27:35.60299455 +0000 UTC m=+1268.163475601" lastFinishedPulling="2025-12-09 12:27:37.187781863 +0000 UTC m=+1269.748262914" observedRunningTime="2025-12-09 12:27:38.21677773 +0000 UTC m=+1270.777258781" watchObservedRunningTime="2025-12-09 12:27:38.228185128 +0000 UTC m=+1270.788666179" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.229356 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-67k7r" event={"ID":"2b640fc8-1ab1-4536-afe8-6b8c4c01c32a","Type":"ContainerDied","Data":"26f5c1321834c3e58da2e051e46c53e70fe2bea7e00dfce00de0ab1d8c292314"} Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.229381 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26f5c1321834c3e58da2e051e46c53e70fe2bea7e00dfce00de0ab1d8c292314" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.230599 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-15fb-account-create-update-nrf89" event={"ID":"883c54c6-9452-4305-94e9-13d5cefd22c8","Type":"ContainerDied","Data":"98ea8d297207b901cc052904951fe3992b2659d84ad606b8c646c46e0cf99150"} Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.230617 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98ea8d297207b901cc052904951fe3992b2659d84ad606b8c646c46e0cf99150" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.232285 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-68fgf" event={"ID":"965b8097-10dd-4a96-b33b-a0b6a3d5e35f","Type":"ContainerDied","Data":"048fecac0fc66bf6d5b1fc99ad91064f8627a4e1ad9cfd8d48407828fe0eb3ff"} Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.232302 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="048fecac0fc66bf6d5b1fc99ad91064f8627a4e1ad9cfd8d48407828fe0eb3ff" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.234452 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-69sxs" event={"ID":"d91a3aef-64fa-4f87-a85f-64723d75894a","Type":"ContainerDied","Data":"767892798fe2e0f6fc1e9ff84c0a7039a3f4549a8a19fa1aadfdfa96a001b599"} Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.234474 4970 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="767892798fe2e0f6fc1e9ff84c0a7039a3f4549a8a19fa1aadfdfa96a001b599" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.296743 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-j77r5" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.303570 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-15fb-account-create-update-nrf89" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.326686 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-68fgf" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.338184 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-69sxs" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.351034 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-618b-account-create-update-k6f7v" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.367795 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4118-account-create-update-bzvb2" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.383099 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3015-account-create-update-xs7kl" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.408457 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-67k7r" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.440870 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0c2ad87-9313-4c2d-b691-0c907b91612a-operator-scripts\") pod \"c0c2ad87-9313-4c2d-b691-0c907b91612a\" (UID: \"c0c2ad87-9313-4c2d-b691-0c907b91612a\") " Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.440921 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/883c54c6-9452-4305-94e9-13d5cefd22c8-operator-scripts\") pod \"883c54c6-9452-4305-94e9-13d5cefd22c8\" (UID: \"883c54c6-9452-4305-94e9-13d5cefd22c8\") " Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.440970 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nd25d\" (UniqueName: \"kubernetes.io/projected/965b8097-10dd-4a96-b33b-a0b6a3d5e35f-kube-api-access-nd25d\") pod \"965b8097-10dd-4a96-b33b-a0b6a3d5e35f\" (UID: \"965b8097-10dd-4a96-b33b-a0b6a3d5e35f\") " Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.441028 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckb9s\" (UniqueName: \"kubernetes.io/projected/c0c2ad87-9313-4c2d-b691-0c907b91612a-kube-api-access-ckb9s\") pod \"c0c2ad87-9313-4c2d-b691-0c907b91612a\" (UID: \"c0c2ad87-9313-4c2d-b691-0c907b91612a\") " Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.441074 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/965b8097-10dd-4a96-b33b-a0b6a3d5e35f-operator-scripts\") pod \"965b8097-10dd-4a96-b33b-a0b6a3d5e35f\" (UID: \"965b8097-10dd-4a96-b33b-a0b6a3d5e35f\") " Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.441097 4970 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-85prb\" (UniqueName: \"kubernetes.io/projected/883c54c6-9452-4305-94e9-13d5cefd22c8-kube-api-access-85prb\") pod \"883c54c6-9452-4305-94e9-13d5cefd22c8\" (UID: \"883c54c6-9452-4305-94e9-13d5cefd22c8\") " Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.441395 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-g9rp9" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.442725 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/965b8097-10dd-4a96-b33b-a0b6a3d5e35f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "965b8097-10dd-4a96-b33b-a0b6a3d5e35f" (UID: "965b8097-10dd-4a96-b33b-a0b6a3d5e35f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.443152 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0c2ad87-9313-4c2d-b691-0c907b91612a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c0c2ad87-9313-4c2d-b691-0c907b91612a" (UID: "c0c2ad87-9313-4c2d-b691-0c907b91612a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.443273 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/883c54c6-9452-4305-94e9-13d5cefd22c8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "883c54c6-9452-4305-94e9-13d5cefd22c8" (UID: "883c54c6-9452-4305-94e9-13d5cefd22c8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.451898 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/883c54c6-9452-4305-94e9-13d5cefd22c8-kube-api-access-85prb" (OuterVolumeSpecName: "kube-api-access-85prb") pod "883c54c6-9452-4305-94e9-13d5cefd22c8" (UID: "883c54c6-9452-4305-94e9-13d5cefd22c8"). InnerVolumeSpecName "kube-api-access-85prb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.452053 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/965b8097-10dd-4a96-b33b-a0b6a3d5e35f-kube-api-access-nd25d" (OuterVolumeSpecName: "kube-api-access-nd25d") pod "965b8097-10dd-4a96-b33b-a0b6a3d5e35f" (UID: "965b8097-10dd-4a96-b33b-a0b6a3d5e35f"). InnerVolumeSpecName "kube-api-access-nd25d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.452287 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0c2ad87-9313-4c2d-b691-0c907b91612a-kube-api-access-ckb9s" (OuterVolumeSpecName: "kube-api-access-ckb9s") pod "c0c2ad87-9313-4c2d-b691-0c907b91612a" (UID: "c0c2ad87-9313-4c2d-b691-0c907b91612a"). InnerVolumeSpecName "kube-api-access-ckb9s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.470625 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-k4956"] Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.542351 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9ld6\" (UniqueName: \"kubernetes.io/projected/33e973b8-701c-4376-9b78-c575d37f901b-kube-api-access-g9ld6\") pod \"33e973b8-701c-4376-9b78-c575d37f901b\" (UID: \"33e973b8-701c-4376-9b78-c575d37f901b\") " Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.542421 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rb8qr\" (UniqueName: \"kubernetes.io/projected/d91a3aef-64fa-4f87-a85f-64723d75894a-kube-api-access-rb8qr\") pod \"d91a3aef-64fa-4f87-a85f-64723d75894a\" (UID: \"d91a3aef-64fa-4f87-a85f-64723d75894a\") " Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.542535 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkpz\" (UniqueName: \"kubernetes.io/projected/6e1b5a6d-bae4-4e66-9b80-1a551d907036-kube-api-access-ptkpz\") pod \"6e1b5a6d-bae4-4e66-9b80-1a551d907036\" (UID: \"6e1b5a6d-bae4-4e66-9b80-1a551d907036\") " Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.542615 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b640fc8-1ab1-4536-afe8-6b8c4c01c32a-operator-scripts\") pod \"2b640fc8-1ab1-4536-afe8-6b8c4c01c32a\" (UID: \"2b640fc8-1ab1-4536-afe8-6b8c4c01c32a\") " Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.542657 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98ebedad-42a0-4ee4-ade1-fd913bd026d6-operator-scripts\") pod \"98ebedad-42a0-4ee4-ade1-fd913bd026d6\" (UID: \"98ebedad-42a0-4ee4-ade1-fd913bd026d6\") " Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.542706 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bksg\" (UniqueName: \"kubernetes.io/projected/2b640fc8-1ab1-4536-afe8-6b8c4c01c32a-kube-api-access-5bksg\") pod \"2b640fc8-1ab1-4536-afe8-6b8c4c01c32a\" (UID: \"2b640fc8-1ab1-4536-afe8-6b8c4c01c32a\") " Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.542732 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33e973b8-701c-4376-9b78-c575d37f901b-operator-scripts\") pod \"33e973b8-701c-4376-9b78-c575d37f901b\" (UID: \"33e973b8-701c-4376-9b78-c575d37f901b\") " Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.542756 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2lbp\" (UniqueName: \"kubernetes.io/projected/98ebedad-42a0-4ee4-ade1-fd913bd026d6-kube-api-access-m2lbp\") pod \"98ebedad-42a0-4ee4-ade1-fd913bd026d6\" (UID: \"98ebedad-42a0-4ee4-ade1-fd913bd026d6\") " Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.542795 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d91a3aef-64fa-4f87-a85f-64723d75894a-operator-scripts\") pod \"d91a3aef-64fa-4f87-a85f-64723d75894a\" (UID: \"d91a3aef-64fa-4f87-a85f-64723d75894a\") " Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.542862 4970 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e1b5a6d-bae4-4e66-9b80-1a551d907036-operator-scripts\") pod \"6e1b5a6d-bae4-4e66-9b80-1a551d907036\" (UID: \"6e1b5a6d-bae4-4e66-9b80-1a551d907036\") " Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.544033 4970 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0c2ad87-9313-4c2d-b691-0c907b91612a-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.544049 4970 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/883c54c6-9452-4305-94e9-13d5cefd22c8-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.544062 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nd25d\" (UniqueName: \"kubernetes.io/projected/965b8097-10dd-4a96-b33b-a0b6a3d5e35f-kube-api-access-nd25d\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.544074 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckb9s\" (UniqueName: \"kubernetes.io/projected/c0c2ad87-9313-4c2d-b691-0c907b91612a-kube-api-access-ckb9s\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.544128 4970 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/965b8097-10dd-4a96-b33b-a0b6a3d5e35f-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.544140 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85prb\" (UniqueName: \"kubernetes.io/projected/883c54c6-9452-4305-94e9-13d5cefd22c8-kube-api-access-85prb\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.547649 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-gplst"] Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.547874 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-gplst" podUID="41231f9e-fef3-4e77-8f93-98224df06d8a" containerName="dnsmasq-dns" containerID="cri-o://d4493c02c3b884d2ce90bf30b914adfd09ef6decca4768fa4a0aa7cabebffa3e" gracePeriod=10 Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.548791 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33e973b8-701c-4376-9b78-c575d37f901b-kube-api-access-g9ld6" (OuterVolumeSpecName: "kube-api-access-g9ld6") pod "33e973b8-701c-4376-9b78-c575d37f901b" (UID: "33e973b8-701c-4376-9b78-c575d37f901b"). InnerVolumeSpecName "kube-api-access-g9ld6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.549503 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e1b5a6d-bae4-4e66-9b80-1a551d907036-kube-api-access-ptkpz" (OuterVolumeSpecName: "kube-api-access-ptkpz") pod "6e1b5a6d-bae4-4e66-9b80-1a551d907036" (UID: "6e1b5a6d-bae4-4e66-9b80-1a551d907036"). InnerVolumeSpecName "kube-api-access-ptkpz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.549698 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e1b5a6d-bae4-4e66-9b80-1a551d907036-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6e1b5a6d-bae4-4e66-9b80-1a551d907036" (UID: "6e1b5a6d-bae4-4e66-9b80-1a551d907036"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.549517 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d91a3aef-64fa-4f87-a85f-64723d75894a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d91a3aef-64fa-4f87-a85f-64723d75894a" (UID: "d91a3aef-64fa-4f87-a85f-64723d75894a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.550350 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33e973b8-701c-4376-9b78-c575d37f901b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "33e973b8-701c-4376-9b78-c575d37f901b" (UID: "33e973b8-701c-4376-9b78-c575d37f901b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.550581 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d91a3aef-64fa-4f87-a85f-64723d75894a-kube-api-access-rb8qr" (OuterVolumeSpecName: "kube-api-access-rb8qr") pod "d91a3aef-64fa-4f87-a85f-64723d75894a" (UID: "d91a3aef-64fa-4f87-a85f-64723d75894a"). InnerVolumeSpecName "kube-api-access-rb8qr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.550798 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98ebedad-42a0-4ee4-ade1-fd913bd026d6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "98ebedad-42a0-4ee4-ade1-fd913bd026d6" (UID: "98ebedad-42a0-4ee4-ade1-fd913bd026d6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.553752 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b640fc8-1ab1-4536-afe8-6b8c4c01c32a-kube-api-access-5bksg" (OuterVolumeSpecName: "kube-api-access-5bksg") pod "2b640fc8-1ab1-4536-afe8-6b8c4c01c32a" (UID: "2b640fc8-1ab1-4536-afe8-6b8c4c01c32a"). InnerVolumeSpecName "kube-api-access-5bksg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.555379 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98ebedad-42a0-4ee4-ade1-fd913bd026d6-kube-api-access-m2lbp" (OuterVolumeSpecName: "kube-api-access-m2lbp") pod "98ebedad-42a0-4ee4-ade1-fd913bd026d6" (UID: "98ebedad-42a0-4ee4-ade1-fd913bd026d6"). InnerVolumeSpecName "kube-api-access-m2lbp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.558068 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b640fc8-1ab1-4536-afe8-6b8c4c01c32a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2b640fc8-1ab1-4536-afe8-6b8c4c01c32a" (UID: "2b640fc8-1ab1-4536-afe8-6b8c4c01c32a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.645790 4970 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6e1b5a6d-bae4-4e66-9b80-1a551d907036-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.645837 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9ld6\" (UniqueName: \"kubernetes.io/projected/33e973b8-701c-4376-9b78-c575d37f901b-kube-api-access-g9ld6\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.645855 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rb8qr\" (UniqueName: \"kubernetes.io/projected/d91a3aef-64fa-4f87-a85f-64723d75894a-kube-api-access-rb8qr\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.645864 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptkpz\" (UniqueName: \"kubernetes.io/projected/6e1b5a6d-bae4-4e66-9b80-1a551d907036-kube-api-access-ptkpz\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.645875 4970 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b640fc8-1ab1-4536-afe8-6b8c4c01c32a-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.645886 4970 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98ebedad-42a0-4ee4-ade1-fd913bd026d6-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.645897 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bksg\" (UniqueName: \"kubernetes.io/projected/2b640fc8-1ab1-4536-afe8-6b8c4c01c32a-kube-api-access-5bksg\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.645907 4970 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33e973b8-701c-4376-9b78-c575d37f901b-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.645917 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2lbp\" (UniqueName: \"kubernetes.io/projected/98ebedad-42a0-4ee4-ade1-fd913bd026d6-kube-api-access-m2lbp\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:38 crc kubenswrapper[4970]: I1209 12:27:38.645928 4970 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d91a3aef-64fa-4f87-a85f-64723d75894a-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:39 crc kubenswrapper[4970]: I1209 12:27:39.162140 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-gplst" Dec 09 12:27:39 crc kubenswrapper[4970]: I1209 12:27:39.245460 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-k4956" event={"ID":"a416bd4a-683a-43cf-867a-fb60427671a4","Type":"ContainerStarted","Data":"ea231bf21d8b55a2d68812cf5c45b63e8d9274f58a8b86684cf3c7f69712a852"} Dec 09 12:27:39 crc kubenswrapper[4970]: I1209 12:27:39.248783 4970 generic.go:334] "Generic (PLEG): container finished" podID="5849599a-f7e9-4ea2-982c-5388be3d7e8d" containerID="4f33adc77c749231fe4ec984d3b6ec4e2697bf51c43d140d2f8744fc3441a853" exitCode=0 Dec 09 12:27:39 crc kubenswrapper[4970]: I1209 12:27:39.248867 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-kvgg9" event={"ID":"5849599a-f7e9-4ea2-982c-5388be3d7e8d","Type":"ContainerDied","Data":"4f33adc77c749231fe4ec984d3b6ec4e2697bf51c43d140d2f8744fc3441a853"} Dec 09 12:27:39 crc kubenswrapper[4970]: I1209 12:27:39.251903 4970 generic.go:334] "Generic (PLEG): container finished" podID="41231f9e-fef3-4e77-8f93-98224df06d8a" containerID="d4493c02c3b884d2ce90bf30b914adfd09ef6decca4768fa4a0aa7cabebffa3e" exitCode=0 Dec 09 12:27:39 crc kubenswrapper[4970]: I1209 12:27:39.251952 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-gplst" Dec 09 12:27:39 crc kubenswrapper[4970]: I1209 12:27:39.252007 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-gplst" event={"ID":"41231f9e-fef3-4e77-8f93-98224df06d8a","Type":"ContainerDied","Data":"d4493c02c3b884d2ce90bf30b914adfd09ef6decca4768fa4a0aa7cabebffa3e"} Dec 09 12:27:39 crc kubenswrapper[4970]: I1209 12:27:39.252024 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4118-account-create-update-bzvb2" Dec 09 12:27:39 crc kubenswrapper[4970]: I1209 12:27:39.252044 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-gplst" event={"ID":"41231f9e-fef3-4e77-8f93-98224df06d8a","Type":"ContainerDied","Data":"c7287b92d042e656dec3f6c0dcbe3dcd848e8ec7e10ee9beabbcb7cdb59da858"} Dec 09 12:27:39 crc kubenswrapper[4970]: I1209 12:27:39.252065 4970 scope.go:117] "RemoveContainer" containerID="d4493c02c3b884d2ce90bf30b914adfd09ef6decca4768fa4a0aa7cabebffa3e" Dec 09 12:27:39 crc kubenswrapper[4970]: I1209 12:27:39.252203 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-3015-account-create-update-xs7kl" Dec 09 12:27:39 crc kubenswrapper[4970]: I1209 12:27:39.252215 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-69sxs" Dec 09 12:27:39 crc kubenswrapper[4970]: I1209 12:27:39.252232 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-68fgf" Dec 09 12:27:39 crc kubenswrapper[4970]: I1209 12:27:39.252260 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-67k7r" Dec 09 12:27:39 crc kubenswrapper[4970]: I1209 12:27:39.252267 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-j77r5" Dec 09 12:27:39 crc kubenswrapper[4970]: I1209 12:27:39.252283 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-618b-account-create-update-k6f7v" Dec 09 12:27:39 crc kubenswrapper[4970]: I1209 12:27:39.252304 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-15fb-account-create-update-nrf89" Dec 09 12:27:39 crc kubenswrapper[4970]: I1209 12:27:39.266926 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41231f9e-fef3-4e77-8f93-98224df06d8a-ovsdbserver-sb\") pod \"41231f9e-fef3-4e77-8f93-98224df06d8a\" (UID: \"41231f9e-fef3-4e77-8f93-98224df06d8a\") " Dec 09 12:27:39 crc kubenswrapper[4970]: I1209 12:27:39.267017 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41231f9e-fef3-4e77-8f93-98224df06d8a-config\") pod \"41231f9e-fef3-4e77-8f93-98224df06d8a\" (UID: \"41231f9e-fef3-4e77-8f93-98224df06d8a\") " Dec 09 12:27:39 crc kubenswrapper[4970]: I1209 12:27:39.267052 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41231f9e-fef3-4e77-8f93-98224df06d8a-ovsdbserver-nb\") pod \"41231f9e-fef3-4e77-8f93-98224df06d8a\" (UID: \"41231f9e-fef3-4e77-8f93-98224df06d8a\") " Dec 09 12:27:39 crc kubenswrapper[4970]: I1209 12:27:39.267140 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7qnw\" (UniqueName: \"kubernetes.io/projected/41231f9e-fef3-4e77-8f93-98224df06d8a-kube-api-access-t7qnw\") pod \"41231f9e-fef3-4e77-8f93-98224df06d8a\" (UID: \"41231f9e-fef3-4e77-8f93-98224df06d8a\") " Dec 09 12:27:39 crc kubenswrapper[4970]: I1209 12:27:39.391390 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41231f9e-fef3-4e77-8f93-98224df06d8a-dns-svc\") pod \"41231f9e-fef3-4e77-8f93-98224df06d8a\" (UID: \"41231f9e-fef3-4e77-8f93-98224df06d8a\") " Dec 09 12:27:39 crc kubenswrapper[4970]: I1209 12:27:39.508103 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41231f9e-fef3-4e77-8f93-98224df06d8a-kube-api-access-t7qnw" (OuterVolumeSpecName: "kube-api-access-t7qnw") pod "41231f9e-fef3-4e77-8f93-98224df06d8a" (UID: "41231f9e-fef3-4e77-8f93-98224df06d8a"). InnerVolumeSpecName "kube-api-access-t7qnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:27:39 crc kubenswrapper[4970]: I1209 12:27:39.546919 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41231f9e-fef3-4e77-8f93-98224df06d8a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "41231f9e-fef3-4e77-8f93-98224df06d8a" (UID: "41231f9e-fef3-4e77-8f93-98224df06d8a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:39 crc kubenswrapper[4970]: I1209 12:27:39.556889 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41231f9e-fef3-4e77-8f93-98224df06d8a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "41231f9e-fef3-4e77-8f93-98224df06d8a" (UID: "41231f9e-fef3-4e77-8f93-98224df06d8a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:39 crc kubenswrapper[4970]: I1209 12:27:39.563135 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41231f9e-fef3-4e77-8f93-98224df06d8a-config" (OuterVolumeSpecName: "config") pod "41231f9e-fef3-4e77-8f93-98224df06d8a" (UID: "41231f9e-fef3-4e77-8f93-98224df06d8a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:39 crc kubenswrapper[4970]: I1209 12:27:39.564321 4970 scope.go:117] "RemoveContainer" containerID="95e54837379d2e32f2c73359028b159e8ea1e2e71f4062ca7ed901cd2799cea0" Dec 09 12:27:39 crc kubenswrapper[4970]: I1209 12:27:39.567368 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41231f9e-fef3-4e77-8f93-98224df06d8a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "41231f9e-fef3-4e77-8f93-98224df06d8a" (UID: "41231f9e-fef3-4e77-8f93-98224df06d8a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:39 crc kubenswrapper[4970]: I1209 12:27:39.595339 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41231f9e-fef3-4e77-8f93-98224df06d8a-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:39 crc kubenswrapper[4970]: I1209 12:27:39.595376 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/41231f9e-fef3-4e77-8f93-98224df06d8a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:39 crc kubenswrapper[4970]: I1209 12:27:39.595391 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7qnw\" (UniqueName: \"kubernetes.io/projected/41231f9e-fef3-4e77-8f93-98224df06d8a-kube-api-access-t7qnw\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:39 crc kubenswrapper[4970]: I1209 12:27:39.595403 4970 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/41231f9e-fef3-4e77-8f93-98224df06d8a-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:39 crc kubenswrapper[4970]: I1209 12:27:39.595418 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/41231f9e-fef3-4e77-8f93-98224df06d8a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:39 crc kubenswrapper[4970]: I1209 12:27:39.647187 4970 scope.go:117] "RemoveContainer" containerID="d4493c02c3b884d2ce90bf30b914adfd09ef6decca4768fa4a0aa7cabebffa3e" Dec 09 12:27:39 crc kubenswrapper[4970]: E1209 12:27:39.647795 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4493c02c3b884d2ce90bf30b914adfd09ef6decca4768fa4a0aa7cabebffa3e\": container with ID starting with d4493c02c3b884d2ce90bf30b914adfd09ef6decca4768fa4a0aa7cabebffa3e not found: ID does not exist" containerID="d4493c02c3b884d2ce90bf30b914adfd09ef6decca4768fa4a0aa7cabebffa3e" Dec 09 12:27:39 crc kubenswrapper[4970]: I1209 12:27:39.647939 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4493c02c3b884d2ce90bf30b914adfd09ef6decca4768fa4a0aa7cabebffa3e"} err="failed to get container status \"d4493c02c3b884d2ce90bf30b914adfd09ef6decca4768fa4a0aa7cabebffa3e\": rpc error: code = NotFound desc = could not find container \"d4493c02c3b884d2ce90bf30b914adfd09ef6decca4768fa4a0aa7cabebffa3e\": container with ID 
starting with d4493c02c3b884d2ce90bf30b914adfd09ef6decca4768fa4a0aa7cabebffa3e not found: ID does not exist" Dec 09 12:27:39 crc kubenswrapper[4970]: I1209 12:27:39.647976 4970 scope.go:117] "RemoveContainer" containerID="95e54837379d2e32f2c73359028b159e8ea1e2e71f4062ca7ed901cd2799cea0" Dec 09 12:27:39 crc kubenswrapper[4970]: E1209 12:27:39.648406 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95e54837379d2e32f2c73359028b159e8ea1e2e71f4062ca7ed901cd2799cea0\": container with ID starting with 95e54837379d2e32f2c73359028b159e8ea1e2e71f4062ca7ed901cd2799cea0 not found: ID does not exist" containerID="95e54837379d2e32f2c73359028b159e8ea1e2e71f4062ca7ed901cd2799cea0" Dec 09 12:27:39 crc kubenswrapper[4970]: I1209 12:27:39.648439 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95e54837379d2e32f2c73359028b159e8ea1e2e71f4062ca7ed901cd2799cea0"} err="failed to get container status \"95e54837379d2e32f2c73359028b159e8ea1e2e71f4062ca7ed901cd2799cea0\": rpc error: code = NotFound desc = could not find container \"95e54837379d2e32f2c73359028b159e8ea1e2e71f4062ca7ed901cd2799cea0\": container with ID starting with 95e54837379d2e32f2c73359028b159e8ea1e2e71f4062ca7ed901cd2799cea0 not found: ID does not exist" Dec 09 12:27:39 crc kubenswrapper[4970]: I1209 12:27:39.883788 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-gplst"] Dec 09 12:27:39 crc kubenswrapper[4970]: I1209 12:27:39.895717 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-gplst"] Dec 09 12:27:41 crc kubenswrapper[4970]: I1209 12:27:41.826070 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41231f9e-fef3-4e77-8f93-98224df06d8a" path="/var/lib/kubelet/pods/41231f9e-fef3-4e77-8f93-98224df06d8a/volumes" Dec 09 12:27:42 crc kubenswrapper[4970]: I1209 12:27:42.762609 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-kvgg9" Dec 09 12:27:42 crc kubenswrapper[4970]: I1209 12:27:42.869865 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5849599a-f7e9-4ea2-982c-5388be3d7e8d-dispersionconf\") pod \"5849599a-f7e9-4ea2-982c-5388be3d7e8d\" (UID: \"5849599a-f7e9-4ea2-982c-5388be3d7e8d\") " Dec 09 12:27:42 crc kubenswrapper[4970]: I1209 12:27:42.869967 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcp8s\" (UniqueName: \"kubernetes.io/projected/5849599a-f7e9-4ea2-982c-5388be3d7e8d-kube-api-access-fcp8s\") pod \"5849599a-f7e9-4ea2-982c-5388be3d7e8d\" (UID: \"5849599a-f7e9-4ea2-982c-5388be3d7e8d\") " Dec 09 12:27:42 crc kubenswrapper[4970]: I1209 12:27:42.870008 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5849599a-f7e9-4ea2-982c-5388be3d7e8d-swiftconf\") pod \"5849599a-f7e9-4ea2-982c-5388be3d7e8d\" (UID: \"5849599a-f7e9-4ea2-982c-5388be3d7e8d\") " Dec 09 12:27:42 crc kubenswrapper[4970]: I1209 12:27:42.870041 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5849599a-f7e9-4ea2-982c-5388be3d7e8d-ring-data-devices\") pod \"5849599a-f7e9-4ea2-982c-5388be3d7e8d\" (UID: \"5849599a-f7e9-4ea2-982c-5388be3d7e8d\") " Dec 09 12:27:42 crc kubenswrapper[4970]: I1209 12:27:42.870074 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5849599a-f7e9-4ea2-982c-5388be3d7e8d-etc-swift\") pod \"5849599a-f7e9-4ea2-982c-5388be3d7e8d\" (UID: \"5849599a-f7e9-4ea2-982c-5388be3d7e8d\") " Dec 09 12:27:42 crc kubenswrapper[4970]: I1209 12:27:42.870149 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5849599a-f7e9-4ea2-982c-5388be3d7e8d-scripts\") pod \"5849599a-f7e9-4ea2-982c-5388be3d7e8d\" (UID: \"5849599a-f7e9-4ea2-982c-5388be3d7e8d\") " Dec 09 12:27:42 crc kubenswrapper[4970]: I1209 12:27:42.870648 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5849599a-f7e9-4ea2-982c-5388be3d7e8d-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "5849599a-f7e9-4ea2-982c-5388be3d7e8d" (UID: "5849599a-f7e9-4ea2-982c-5388be3d7e8d"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:42 crc kubenswrapper[4970]: I1209 12:27:42.870993 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5849599a-f7e9-4ea2-982c-5388be3d7e8d-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "5849599a-f7e9-4ea2-982c-5388be3d7e8d" (UID: "5849599a-f7e9-4ea2-982c-5388be3d7e8d"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:27:42 crc kubenswrapper[4970]: I1209 12:27:42.871024 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5849599a-f7e9-4ea2-982c-5388be3d7e8d-combined-ca-bundle\") pod \"5849599a-f7e9-4ea2-982c-5388be3d7e8d\" (UID: \"5849599a-f7e9-4ea2-982c-5388be3d7e8d\") " Dec 09 12:27:42 crc kubenswrapper[4970]: I1209 12:27:42.872568 4970 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5849599a-f7e9-4ea2-982c-5388be3d7e8d-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:42 crc kubenswrapper[4970]: I1209 12:27:42.872599 4970 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5849599a-f7e9-4ea2-982c-5388be3d7e8d-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:42 crc kubenswrapper[4970]: I1209 12:27:42.885448 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:27:42 crc kubenswrapper[4970]: I1209 12:27:42.895682 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5849599a-f7e9-4ea2-982c-5388be3d7e8d-scripts" (OuterVolumeSpecName: "scripts") pod "5849599a-f7e9-4ea2-982c-5388be3d7e8d" (UID: "5849599a-f7e9-4ea2-982c-5388be3d7e8d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:42 crc kubenswrapper[4970]: I1209 12:27:42.896007 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5849599a-f7e9-4ea2-982c-5388be3d7e8d-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "5849599a-f7e9-4ea2-982c-5388be3d7e8d" (UID: "5849599a-f7e9-4ea2-982c-5388be3d7e8d"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:27:42 crc kubenswrapper[4970]: I1209 12:27:42.897369 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5849599a-f7e9-4ea2-982c-5388be3d7e8d-kube-api-access-fcp8s" (OuterVolumeSpecName: "kube-api-access-fcp8s") pod "5849599a-f7e9-4ea2-982c-5388be3d7e8d" (UID: "5849599a-f7e9-4ea2-982c-5388be3d7e8d"). InnerVolumeSpecName "kube-api-access-fcp8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:27:42 crc kubenswrapper[4970]: I1209 12:27:42.898014 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5849599a-f7e9-4ea2-982c-5388be3d7e8d-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "5849599a-f7e9-4ea2-982c-5388be3d7e8d" (UID: "5849599a-f7e9-4ea2-982c-5388be3d7e8d"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:27:42 crc kubenswrapper[4970]: I1209 12:27:42.929172 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5849599a-f7e9-4ea2-982c-5388be3d7e8d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5849599a-f7e9-4ea2-982c-5388be3d7e8d" (UID: "5849599a-f7e9-4ea2-982c-5388be3d7e8d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:27:42 crc kubenswrapper[4970]: I1209 12:27:42.975515 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5849599a-f7e9-4ea2-982c-5388be3d7e8d-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:42 crc kubenswrapper[4970]: I1209 12:27:42.975558 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5849599a-f7e9-4ea2-982c-5388be3d7e8d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:42 crc kubenswrapper[4970]: I1209 12:27:42.975570 4970 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5849599a-f7e9-4ea2-982c-5388be3d7e8d-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:42 crc kubenswrapper[4970]: I1209 12:27:42.975581 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcp8s\" (UniqueName: \"kubernetes.io/projected/5849599a-f7e9-4ea2-982c-5388be3d7e8d-kube-api-access-fcp8s\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:42 crc kubenswrapper[4970]: I1209 12:27:42.975591 4970 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5849599a-f7e9-4ea2-982c-5388be3d7e8d-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:43 crc kubenswrapper[4970]: I1209 12:27:43.301012 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-kvgg9" event={"ID":"5849599a-f7e9-4ea2-982c-5388be3d7e8d","Type":"ContainerDied","Data":"85c32b009a19b6d9232d49b0e50e78aac4a2db4990b68383a33c25223d10131d"} Dec 09 12:27:43 crc kubenswrapper[4970]: I1209 12:27:43.301053 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85c32b009a19b6d9232d49b0e50e78aac4a2db4990b68383a33c25223d10131d" Dec 09 12:27:43 crc kubenswrapper[4970]: I1209 12:27:43.301153 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-kvgg9" Dec 09 12:27:44 crc kubenswrapper[4970]: I1209 12:27:44.519637 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:44 crc kubenswrapper[4970]: I1209 12:27:44.522432 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:45 crc kubenswrapper[4970]: I1209 12:27:45.327861 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:46 crc kubenswrapper[4970]: I1209 12:27:46.010858 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:27:46 crc kubenswrapper[4970]: I1209 12:27:46.011157 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:27:47 crc kubenswrapper[4970]: I1209 12:27:47.385182 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-hqfv8" podUID="c06ee73b-4168-4ef3-b268-db5e976febbf" containerName="ovn-controller" probeResult="failure" output=< Dec 09 12:27:47 crc kubenswrapper[4970]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 09 12:27:47 crc kubenswrapper[4970]: > Dec 09 12:27:48 crc kubenswrapper[4970]: I1209 12:27:48.103492 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 12:27:48 crc kubenswrapper[4970]: I1209 12:27:48.104440 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="bb3e4622-df1b-4d70-8683-8672cebf6666" containerName="prometheus" containerID="cri-o://f331687d26cc24b9581e95676d070c048c9d634f0423061560ef32cf2f81f0b2" gracePeriod=600 Dec 09 12:27:48 crc kubenswrapper[4970]: I1209 12:27:48.104696 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="bb3e4622-df1b-4d70-8683-8672cebf6666" containerName="config-reloader" containerID="cri-o://e2a51fc274763607afb4ba3d16bb14d65df8303b050c96782e3f97b93d517443" gracePeriod=600 Dec 09 12:27:48 crc kubenswrapper[4970]: I1209 12:27:48.105031 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="bb3e4622-df1b-4d70-8683-8672cebf6666" containerName="thanos-sidecar" containerID="cri-o://35636cffccb3859905b72772fd157d9945aa3aace4a4e5a093020bac53bccfdc" gracePeriod=600 Dec 09 12:27:48 crc kubenswrapper[4970]: E1209 12:27:48.264612 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb3e4622_df1b_4d70_8683_8672cebf6666.slice/crio-conmon-f331687d26cc24b9581e95676d070c048c9d634f0423061560ef32cf2f81f0b2.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb3e4622_df1b_4d70_8683_8672cebf6666.slice/crio-f331687d26cc24b9581e95676d070c048c9d634f0423061560ef32cf2f81f0b2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb3e4622_df1b_4d70_8683_8672cebf6666.slice/crio-conmon-35636cffccb3859905b72772fd157d9945aa3aace4a4e5a093020bac53bccfdc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb3e4622_df1b_4d70_8683_8672cebf6666.slice/crio-35636cffccb3859905b72772fd157d9945aa3aace4a4e5a093020bac53bccfdc.scope\": RecentStats: unable to find data in memory cache]" Dec 09 12:27:48 crc kubenswrapper[4970]: E1209 12:27:48.264699 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb3e4622_df1b_4d70_8683_8672cebf6666.slice/crio-conmon-f331687d26cc24b9581e95676d070c048c9d634f0423061560ef32cf2f81f0b2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb3e4622_df1b_4d70_8683_8672cebf6666.slice/crio-35636cffccb3859905b72772fd157d9945aa3aace4a4e5a093020bac53bccfdc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb3e4622_df1b_4d70_8683_8672cebf6666.slice/crio-conmon-35636cffccb3859905b72772fd157d9945aa3aace4a4e5a093020bac53bccfdc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb3e4622_df1b_4d70_8683_8672cebf6666.slice/crio-f331687d26cc24b9581e95676d070c048c9d634f0423061560ef32cf2f81f0b2.scope\": RecentStats: unable to find data in memory cache]" Dec 09 12:27:48 crc kubenswrapper[4970]: I1209 12:27:48.364150 4970 generic.go:334] "Generic (PLEG): container finished" podID="bb3e4622-df1b-4d70-8683-8672cebf6666" containerID="35636cffccb3859905b72772fd157d9945aa3aace4a4e5a093020bac53bccfdc" exitCode=0 Dec 09 12:27:48 crc kubenswrapper[4970]: I1209 12:27:48.364504 4970 generic.go:334] "Generic (PLEG): container finished" podID="bb3e4622-df1b-4d70-8683-8672cebf6666" containerID="f331687d26cc24b9581e95676d070c048c9d634f0423061560ef32cf2f81f0b2" exitCode=0 Dec 09 12:27:48 crc kubenswrapper[4970]: I1209 12:27:48.364277 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"bb3e4622-df1b-4d70-8683-8672cebf6666","Type":"ContainerDied","Data":"35636cffccb3859905b72772fd157d9945aa3aace4a4e5a093020bac53bccfdc"} Dec 09 12:27:48 crc kubenswrapper[4970]: I1209 12:27:48.364545 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"bb3e4622-df1b-4d70-8683-8672cebf6666","Type":"ContainerDied","Data":"f331687d26cc24b9581e95676d070c048c9d634f0423061560ef32cf2f81f0b2"} Dec 09 12:27:49 crc kubenswrapper[4970]: I1209 12:27:49.377649 4970 generic.go:334] "Generic (PLEG): container finished" podID="bb3e4622-df1b-4d70-8683-8672cebf6666" containerID="e2a51fc274763607afb4ba3d16bb14d65df8303b050c96782e3f97b93d517443" exitCode=0 Dec 09 12:27:49 crc kubenswrapper[4970]: I1209 12:27:49.377718 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"bb3e4622-df1b-4d70-8683-8672cebf6666","Type":"ContainerDied","Data":"e2a51fc274763607afb4ba3d16bb14d65df8303b050c96782e3f97b93d517443"} Dec 09 12:27:49 crc kubenswrapper[4970]: I1209 12:27:49.520776 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="bb3e4622-df1b-4d70-8683-8672cebf6666" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.136:9090/-/ready\": dial tcp 10.217.0.136:9090: connect: connection refused" Dec 09 12:27:51 crc kubenswrapper[4970]: I1209 12:27:51.051297 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bee29a58-7867-4543-bb4e-c19528625b1a-etc-swift\") pod \"swift-storage-0\" (UID: \"bee29a58-7867-4543-bb4e-c19528625b1a\") " pod="openstack/swift-storage-0" Dec 09 12:27:51 crc kubenswrapper[4970]: I1209 12:27:51.061013 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/bee29a58-7867-4543-bb4e-c19528625b1a-etc-swift\") pod \"swift-storage-0\" (UID: \"bee29a58-7867-4543-bb4e-c19528625b1a\") " pod="openstack/swift-storage-0" Dec 09 12:27:51 crc kubenswrapper[4970]: I1209 12:27:51.122192 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Dec 09 12:27:52 crc kubenswrapper[4970]: I1209 12:27:52.368399 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-hqfv8" podUID="c06ee73b-4168-4ef3-b268-db5e976febbf" containerName="ovn-controller" probeResult="failure" output=< Dec 09 12:27:52 crc kubenswrapper[4970]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 09 12:27:52 crc kubenswrapper[4970]: > Dec 09 12:27:54 crc kubenswrapper[4970]: E1209 12:27:54.413132 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Dec 09 12:27:54 crc kubenswrapper[4970]: E1209 12:27:54.413978 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6zwkw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-qvmmx_openstack(de81c9d6-10bd-46d2-ab77-74463359dc5a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:27:54 crc kubenswrapper[4970]: E1209 12:27:54.415230 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-qvmmx" podUID="de81c9d6-10bd-46d2-ab77-74463359dc5a" Dec 09 12:27:54 crc kubenswrapper[4970]: I1209 12:27:54.520611 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="bb3e4622-df1b-4d70-8683-8672cebf6666" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.136:9090/-/ready\": dial tcp 10.217.0.136:9090: connect: connection refused" Dec 09 12:27:55 crc kubenswrapper[4970]: E1209 12:27:55.514046 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-qvmmx" podUID="de81c9d6-10bd-46d2-ab77-74463359dc5a" Dec 09 12:27:55 crc kubenswrapper[4970]: I1209 12:27:55.954852 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.057831 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"bb3e4622-df1b-4d70-8683-8672cebf6666\" (UID: \"bb3e4622-df1b-4d70-8683-8672cebf6666\") " Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.057876 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bb3e4622-df1b-4d70-8683-8672cebf6666-config\") pod \"bb3e4622-df1b-4d70-8683-8672cebf6666\" (UID: \"bb3e4622-df1b-4d70-8683-8672cebf6666\") " Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.057893 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/bb3e4622-df1b-4d70-8683-8672cebf6666-config-out\") pod \"bb3e4622-df1b-4d70-8683-8672cebf6666\" (UID: \"bb3e4622-df1b-4d70-8683-8672cebf6666\") " Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.057939 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj6zc\" (UniqueName: \"kubernetes.io/projected/bb3e4622-df1b-4d70-8683-8672cebf6666-kube-api-access-jj6zc\") pod \"bb3e4622-df1b-4d70-8683-8672cebf6666\" (UID: \"bb3e4622-df1b-4d70-8683-8672cebf6666\") " Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.057980 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/bb3e4622-df1b-4d70-8683-8672cebf6666-tls-assets\") pod \"bb3e4622-df1b-4d70-8683-8672cebf6666\" (UID: \"bb3e4622-df1b-4d70-8683-8672cebf6666\") " Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.057997 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/bb3e4622-df1b-4d70-8683-8672cebf6666-thanos-prometheus-http-client-file\") pod \"bb3e4622-df1b-4d70-8683-8672cebf6666\" (UID: \"bb3e4622-df1b-4d70-8683-8672cebf6666\") " Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.058098 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/bb3e4622-df1b-4d70-8683-8672cebf6666-web-config\") pod \"bb3e4622-df1b-4d70-8683-8672cebf6666\" (UID: \"bb3e4622-df1b-4d70-8683-8672cebf6666\") " Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.058144 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/bb3e4622-df1b-4d70-8683-8672cebf6666-prometheus-metric-storage-rulefiles-0\") pod \"bb3e4622-df1b-4d70-8683-8672cebf6666\" (UID: \"bb3e4622-df1b-4d70-8683-8672cebf6666\") " Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.059822 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb3e4622-df1b-4d70-8683-8672cebf6666-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "bb3e4622-df1b-4d70-8683-8672cebf6666" (UID: "bb3e4622-df1b-4d70-8683-8672cebf6666"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.065914 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb3e4622-df1b-4d70-8683-8672cebf6666-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "bb3e4622-df1b-4d70-8683-8672cebf6666" (UID: "bb3e4622-df1b-4d70-8683-8672cebf6666"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.069303 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb3e4622-df1b-4d70-8683-8672cebf6666-kube-api-access-jj6zc" (OuterVolumeSpecName: "kube-api-access-jj6zc") pod "bb3e4622-df1b-4d70-8683-8672cebf6666" (UID: "bb3e4622-df1b-4d70-8683-8672cebf6666"). InnerVolumeSpecName "kube-api-access-jj6zc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.074298 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "bb3e4622-df1b-4d70-8683-8672cebf6666" (UID: "bb3e4622-df1b-4d70-8683-8672cebf6666"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.081316 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb3e4622-df1b-4d70-8683-8672cebf6666-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "bb3e4622-df1b-4d70-8683-8672cebf6666" (UID: "bb3e4622-df1b-4d70-8683-8672cebf6666"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.091196 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb3e4622-df1b-4d70-8683-8672cebf6666-config" (OuterVolumeSpecName: "config") pod "bb3e4622-df1b-4d70-8683-8672cebf6666" (UID: "bb3e4622-df1b-4d70-8683-8672cebf6666"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.092018 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb3e4622-df1b-4d70-8683-8672cebf6666-config-out" (OuterVolumeSpecName: "config-out") pod "bb3e4622-df1b-4d70-8683-8672cebf6666" (UID: "bb3e4622-df1b-4d70-8683-8672cebf6666"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.108868 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb3e4622-df1b-4d70-8683-8672cebf6666-web-config" (OuterVolumeSpecName: "web-config") pod "bb3e4622-df1b-4d70-8683-8672cebf6666" (UID: "bb3e4622-df1b-4d70-8683-8672cebf6666"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.160638 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/bb3e4622-df1b-4d70-8683-8672cebf6666-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.160715 4970 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.160730 4970 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/bb3e4622-df1b-4d70-8683-8672cebf6666-config-out\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.160743 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj6zc\" (UniqueName: \"kubernetes.io/projected/bb3e4622-df1b-4d70-8683-8672cebf6666-kube-api-access-jj6zc\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.160755 4970 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/bb3e4622-df1b-4d70-8683-8672cebf6666-tls-assets\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.160771 4970 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/bb3e4622-df1b-4d70-8683-8672cebf6666-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.160783 4970 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/bb3e4622-df1b-4d70-8683-8672cebf6666-web-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.160795 4970 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/bb3e4622-df1b-4d70-8683-8672cebf6666-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.206457 4970 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Dec 09 12:27:56 crc kubenswrapper[4970]: W1209 12:27:56.212339 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbee29a58_7867_4543_bb4e_c19528625b1a.slice/crio-3e3061884c66b9f8aed20ff6570c2b915481136c43f394d10b3b874213ae8ad1 WatchSource:0}: Error finding container 3e3061884c66b9f8aed20ff6570c2b915481136c43f394d10b3b874213ae8ad1: Status 404 returned error can't find the container with id 3e3061884c66b9f8aed20ff6570c2b915481136c43f394d10b3b874213ae8ad1 Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.214663 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.262621 4970 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.453290 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"bee29a58-7867-4543-bb4e-c19528625b1a","Type":"ContainerStarted","Data":"3e3061884c66b9f8aed20ff6570c2b915481136c43f394d10b3b874213ae8ad1"} Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.454938 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-k4956" event={"ID":"a416bd4a-683a-43cf-867a-fb60427671a4","Type":"ContainerStarted","Data":"4b94b6459129954a1cb2c6e4fd36ce3dc3d56ffab0e838cd5aa32637d243acf1"} Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.458112 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"bb3e4622-df1b-4d70-8683-8672cebf6666","Type":"ContainerDied","Data":"39dfccdecc60ea150fb4f5a123d301a663b23ced22358c66148a8ce2dad9d073"} Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.458165 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.458168 4970 scope.go:117] "RemoveContainer" containerID="35636cffccb3859905b72772fd157d9945aa3aace4a4e5a093020bac53bccfdc" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.478869 4970 scope.go:117] "RemoveContainer" containerID="e2a51fc274763607afb4ba3d16bb14d65df8303b050c96782e3f97b93d517443" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.500096 4970 scope.go:117] "RemoveContainer" containerID="f331687d26cc24b9581e95676d070c048c9d634f0423061560ef32cf2f81f0b2" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.500575 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.515556 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.524612 4970 scope.go:117] "RemoveContainer" containerID="c06514286bb111c9e0be623a217a45c85650f86040dfd0c6d5135e2298a5d7a4" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.532456 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 12:27:56 crc kubenswrapper[4970]: E1209 12:27:56.532984 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33e973b8-701c-4376-9b78-c575d37f901b" containerName="mariadb-account-create-update" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.533010 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="33e973b8-701c-4376-9b78-c575d37f901b" containerName="mariadb-account-create-update" Dec 09 12:27:56 crc kubenswrapper[4970]: E1209 12:27:56.533033 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="965b8097-10dd-4a96-b33b-a0b6a3d5e35f" containerName="mariadb-database-create" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.533041 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="965b8097-10dd-4a96-b33b-a0b6a3d5e35f" containerName="mariadb-database-create" Dec 09 12:27:56 crc kubenswrapper[4970]: E1209 12:27:56.533060 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb3e4622-df1b-4d70-8683-8672cebf6666" containerName="prometheus" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.533068 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb3e4622-df1b-4d70-8683-8672cebf6666" containerName="prometheus" Dec 09 12:27:56 crc kubenswrapper[4970]: E1209 12:27:56.533079 4970 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2b640fc8-1ab1-4536-afe8-6b8c4c01c32a" containerName="mariadb-database-create" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.533086 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b640fc8-1ab1-4536-afe8-6b8c4c01c32a" containerName="mariadb-database-create" Dec 09 12:27:56 crc kubenswrapper[4970]: E1209 12:27:56.533114 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41231f9e-fef3-4e77-8f93-98224df06d8a" containerName="init" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.533121 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="41231f9e-fef3-4e77-8f93-98224df06d8a" containerName="init" Dec 09 12:27:56 crc kubenswrapper[4970]: E1209 12:27:56.533139 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5849599a-f7e9-4ea2-982c-5388be3d7e8d" containerName="swift-ring-rebalance" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.533148 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="5849599a-f7e9-4ea2-982c-5388be3d7e8d" containerName="swift-ring-rebalance" Dec 09 12:27:56 crc kubenswrapper[4970]: E1209 12:27:56.533162 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e1b5a6d-bae4-4e66-9b80-1a551d907036" containerName="mariadb-account-create-update" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.533169 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e1b5a6d-bae4-4e66-9b80-1a551d907036" containerName="mariadb-account-create-update" Dec 09 12:27:56 crc kubenswrapper[4970]: E1209 12:27:56.533179 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb3e4622-df1b-4d70-8683-8672cebf6666" containerName="thanos-sidecar" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.533186 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb3e4622-df1b-4d70-8683-8672cebf6666" containerName="thanos-sidecar" Dec 09 12:27:56 crc kubenswrapper[4970]: E1209 12:27:56.533200 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d91a3aef-64fa-4f87-a85f-64723d75894a" containerName="mariadb-database-create" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.533208 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="d91a3aef-64fa-4f87-a85f-64723d75894a" containerName="mariadb-database-create" Dec 09 12:27:56 crc kubenswrapper[4970]: E1209 12:27:56.533220 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="883c54c6-9452-4305-94e9-13d5cefd22c8" containerName="mariadb-account-create-update" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.533227 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="883c54c6-9452-4305-94e9-13d5cefd22c8" containerName="mariadb-account-create-update" Dec 09 12:27:56 crc kubenswrapper[4970]: E1209 12:27:56.533262 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb3e4622-df1b-4d70-8683-8672cebf6666" containerName="init-config-reloader" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.533273 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb3e4622-df1b-4d70-8683-8672cebf6666" containerName="init-config-reloader" Dec 09 12:27:56 crc kubenswrapper[4970]: E1209 12:27:56.533281 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98ebedad-42a0-4ee4-ade1-fd913bd026d6" containerName="mariadb-account-create-update" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.533288 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="98ebedad-42a0-4ee4-ade1-fd913bd026d6" 
containerName="mariadb-account-create-update" Dec 09 12:27:56 crc kubenswrapper[4970]: E1209 12:27:56.533304 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0c2ad87-9313-4c2d-b691-0c907b91612a" containerName="mariadb-database-create" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.533312 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0c2ad87-9313-4c2d-b691-0c907b91612a" containerName="mariadb-database-create" Dec 09 12:27:56 crc kubenswrapper[4970]: E1209 12:27:56.533323 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41231f9e-fef3-4e77-8f93-98224df06d8a" containerName="dnsmasq-dns" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.533330 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="41231f9e-fef3-4e77-8f93-98224df06d8a" containerName="dnsmasq-dns" Dec 09 12:27:56 crc kubenswrapper[4970]: E1209 12:27:56.533351 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb3e4622-df1b-4d70-8683-8672cebf6666" containerName="config-reloader" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.533359 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb3e4622-df1b-4d70-8683-8672cebf6666" containerName="config-reloader" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.533586 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0c2ad87-9313-4c2d-b691-0c907b91612a" containerName="mariadb-database-create" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.533607 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="98ebedad-42a0-4ee4-ade1-fd913bd026d6" containerName="mariadb-account-create-update" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.533628 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb3e4622-df1b-4d70-8683-8672cebf6666" containerName="prometheus" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.533639 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e1b5a6d-bae4-4e66-9b80-1a551d907036" containerName="mariadb-account-create-update" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.533652 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="965b8097-10dd-4a96-b33b-a0b6a3d5e35f" containerName="mariadb-database-create" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.533661 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="5849599a-f7e9-4ea2-982c-5388be3d7e8d" containerName="swift-ring-rebalance" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.533671 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b640fc8-1ab1-4536-afe8-6b8c4c01c32a" containerName="mariadb-database-create" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.533683 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb3e4622-df1b-4d70-8683-8672cebf6666" containerName="thanos-sidecar" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.533701 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="883c54c6-9452-4305-94e9-13d5cefd22c8" containerName="mariadb-account-create-update" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.533716 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="33e973b8-701c-4376-9b78-c575d37f901b" containerName="mariadb-account-create-update" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.533733 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="d91a3aef-64fa-4f87-a85f-64723d75894a" 
containerName="mariadb-database-create" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.533746 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="41231f9e-fef3-4e77-8f93-98224df06d8a" containerName="dnsmasq-dns" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.533757 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb3e4622-df1b-4d70-8683-8672cebf6666" containerName="config-reloader" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.540611 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.545331 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-fghm5" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.545537 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.545693 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.545862 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.546644 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.547115 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.550885 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.552754 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.683629 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3e993a7f-0aee-41c5-adb3-a3becd49066f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3e993a7f-0aee-41c5-adb3-a3becd49066f\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.683698 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3e993a7f-0aee-41c5-adb3-a3becd49066f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3e993a7f-0aee-41c5-adb3-a3becd49066f\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.683789 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3e993a7f-0aee-41c5-adb3-a3becd49066f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3e993a7f-0aee-41c5-adb3-a3becd49066f\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.683838 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/3e993a7f-0aee-41c5-adb3-a3becd49066f-config\") pod \"prometheus-metric-storage-0\" (UID: \"3e993a7f-0aee-41c5-adb3-a3becd49066f\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.683892 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3e993a7f-0aee-41c5-adb3-a3becd49066f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3e993a7f-0aee-41c5-adb3-a3becd49066f\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.683931 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3e993a7f-0aee-41c5-adb3-a3becd49066f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3e993a7f-0aee-41c5-adb3-a3becd49066f\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.683950 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"prometheus-metric-storage-0\" (UID: \"3e993a7f-0aee-41c5-adb3-a3becd49066f\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.683965 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwpbk\" (UniqueName: \"kubernetes.io/projected/3e993a7f-0aee-41c5-adb3-a3becd49066f-kube-api-access-fwpbk\") pod \"prometheus-metric-storage-0\" (UID: \"3e993a7f-0aee-41c5-adb3-a3becd49066f\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.683989 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e993a7f-0aee-41c5-adb3-a3becd49066f-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3e993a7f-0aee-41c5-adb3-a3becd49066f\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.684007 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3e993a7f-0aee-41c5-adb3-a3becd49066f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3e993a7f-0aee-41c5-adb3-a3becd49066f\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.684029 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3e993a7f-0aee-41c5-adb3-a3becd49066f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3e993a7f-0aee-41c5-adb3-a3becd49066f\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.785553 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3e993a7f-0aee-41c5-adb3-a3becd49066f-config-out\") pod 
\"prometheus-metric-storage-0\" (UID: \"3e993a7f-0aee-41c5-adb3-a3becd49066f\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.785647 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3e993a7f-0aee-41c5-adb3-a3becd49066f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3e993a7f-0aee-41c5-adb3-a3becd49066f\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.785749 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3e993a7f-0aee-41c5-adb3-a3becd49066f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3e993a7f-0aee-41c5-adb3-a3becd49066f\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.785789 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3e993a7f-0aee-41c5-adb3-a3becd49066f-config\") pod \"prometheus-metric-storage-0\" (UID: \"3e993a7f-0aee-41c5-adb3-a3becd49066f\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.785847 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3e993a7f-0aee-41c5-adb3-a3becd49066f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3e993a7f-0aee-41c5-adb3-a3becd49066f\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.785890 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3e993a7f-0aee-41c5-adb3-a3becd49066f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3e993a7f-0aee-41c5-adb3-a3becd49066f\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.785922 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"prometheus-metric-storage-0\" (UID: \"3e993a7f-0aee-41c5-adb3-a3becd49066f\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.785959 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e993a7f-0aee-41c5-adb3-a3becd49066f-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3e993a7f-0aee-41c5-adb3-a3becd49066f\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.785983 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwpbk\" (UniqueName: \"kubernetes.io/projected/3e993a7f-0aee-41c5-adb3-a3becd49066f-kube-api-access-fwpbk\") pod \"prometheus-metric-storage-0\" (UID: \"3e993a7f-0aee-41c5-adb3-a3becd49066f\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.786011 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3e993a7f-0aee-41c5-adb3-a3becd49066f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3e993a7f-0aee-41c5-adb3-a3becd49066f\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.786040 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3e993a7f-0aee-41c5-adb3-a3becd49066f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3e993a7f-0aee-41c5-adb3-a3becd49066f\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.787485 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"prometheus-metric-storage-0\" (UID: \"3e993a7f-0aee-41c5-adb3-a3becd49066f\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.788306 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3e993a7f-0aee-41c5-adb3-a3becd49066f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3e993a7f-0aee-41c5-adb3-a3becd49066f\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.791171 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3e993a7f-0aee-41c5-adb3-a3becd49066f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3e993a7f-0aee-41c5-adb3-a3becd49066f\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.792368 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3e993a7f-0aee-41c5-adb3-a3becd49066f-config\") pod \"prometheus-metric-storage-0\" (UID: \"3e993a7f-0aee-41c5-adb3-a3becd49066f\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.792752 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3e993a7f-0aee-41c5-adb3-a3becd49066f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3e993a7f-0aee-41c5-adb3-a3becd49066f\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.792930 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3e993a7f-0aee-41c5-adb3-a3becd49066f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3e993a7f-0aee-41c5-adb3-a3becd49066f\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.793612 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3e993a7f-0aee-41c5-adb3-a3becd49066f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3e993a7f-0aee-41c5-adb3-a3becd49066f\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.794150 4970 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3e993a7f-0aee-41c5-adb3-a3becd49066f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3e993a7f-0aee-41c5-adb3-a3becd49066f\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.794936 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e993a7f-0aee-41c5-adb3-a3becd49066f-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3e993a7f-0aee-41c5-adb3-a3becd49066f\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.796466 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3e993a7f-0aee-41c5-adb3-a3becd49066f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3e993a7f-0aee-41c5-adb3-a3becd49066f\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.806461 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwpbk\" (UniqueName: \"kubernetes.io/projected/3e993a7f-0aee-41c5-adb3-a3becd49066f-kube-api-access-fwpbk\") pod \"prometheus-metric-storage-0\" (UID: \"3e993a7f-0aee-41c5-adb3-a3becd49066f\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.819944 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"prometheus-metric-storage-0\" (UID: \"3e993a7f-0aee-41c5-adb3-a3becd49066f\") " pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:56 crc kubenswrapper[4970]: I1209 12:27:56.937212 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 09 12:27:57 crc kubenswrapper[4970]: I1209 12:27:57.372698 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-hqfv8" podUID="c06ee73b-4168-4ef3-b268-db5e976febbf" containerName="ovn-controller" probeResult="failure" output=< Dec 09 12:27:57 crc kubenswrapper[4970]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 09 12:27:57 crc kubenswrapper[4970]: > Dec 09 12:27:57 crc kubenswrapper[4970]: I1209 12:27:57.417755 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-d9vtq" Dec 09 12:27:57 crc kubenswrapper[4970]: I1209 12:27:57.430630 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-d9vtq" Dec 09 12:27:57 crc kubenswrapper[4970]: I1209 12:27:57.471969 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 12:27:57 crc kubenswrapper[4970]: I1209 12:27:57.501349 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-k4956" podStartSLOduration=4.41466719 podStartE2EDuration="21.501332662s" podCreationTimestamp="2025-12-09 12:27:36 +0000 UTC" firstStartedPulling="2025-12-09 12:27:38.479085581 +0000 UTC m=+1271.039566642" lastFinishedPulling="2025-12-09 12:27:55.565751063 +0000 UTC m=+1288.126232114" observedRunningTime="2025-12-09 12:27:57.494164018 +0000 UTC m=+1290.054645069" watchObservedRunningTime="2025-12-09 12:27:57.501332662 +0000 UTC m=+1290.061813713" Dec 09 12:27:57 crc kubenswrapper[4970]: I1209 12:27:57.667178 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-hqfv8-config-km75f"] Dec 09 12:27:57 crc kubenswrapper[4970]: I1209 12:27:57.671439 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-hqfv8-config-km75f" Dec 09 12:27:57 crc kubenswrapper[4970]: I1209 12:27:57.674199 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 09 12:27:57 crc kubenswrapper[4970]: I1209 12:27:57.681491 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hqfv8-config-km75f"] Dec 09 12:27:57 crc kubenswrapper[4970]: I1209 12:27:57.828563 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb3e4622-df1b-4d70-8683-8672cebf6666" path="/var/lib/kubelet/pods/bb3e4622-df1b-4d70-8683-8672cebf6666/volumes" Dec 09 12:27:57 crc kubenswrapper[4970]: I1209 12:27:57.832740 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6917425b-2a6f-47ff-9484-b2d04b9de99f-var-run-ovn\") pod \"ovn-controller-hqfv8-config-km75f\" (UID: \"6917425b-2a6f-47ff-9484-b2d04b9de99f\") " pod="openstack/ovn-controller-hqfv8-config-km75f" Dec 09 12:27:57 crc kubenswrapper[4970]: I1209 12:27:57.832828 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6917425b-2a6f-47ff-9484-b2d04b9de99f-var-run\") pod \"ovn-controller-hqfv8-config-km75f\" (UID: \"6917425b-2a6f-47ff-9484-b2d04b9de99f\") " pod="openstack/ovn-controller-hqfv8-config-km75f" Dec 09 12:27:57 crc kubenswrapper[4970]: I1209 12:27:57.832854 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6917425b-2a6f-47ff-9484-b2d04b9de99f-additional-scripts\") pod \"ovn-controller-hqfv8-config-km75f\" (UID: \"6917425b-2a6f-47ff-9484-b2d04b9de99f\") " pod="openstack/ovn-controller-hqfv8-config-km75f" Dec 09 12:27:57 crc kubenswrapper[4970]: I1209 12:27:57.832924 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6917425b-2a6f-47ff-9484-b2d04b9de99f-scripts\") pod \"ovn-controller-hqfv8-config-km75f\" (UID: \"6917425b-2a6f-47ff-9484-b2d04b9de99f\") " pod="openstack/ovn-controller-hqfv8-config-km75f" Dec 09 12:27:57 crc kubenswrapper[4970]: I1209 12:27:57.832959 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6917425b-2a6f-47ff-9484-b2d04b9de99f-var-log-ovn\") pod \"ovn-controller-hqfv8-config-km75f\" (UID: \"6917425b-2a6f-47ff-9484-b2d04b9de99f\") " pod="openstack/ovn-controller-hqfv8-config-km75f" Dec 09 12:27:57 crc kubenswrapper[4970]: I1209 12:27:57.832985 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnrpv\" (UniqueName: \"kubernetes.io/projected/6917425b-2a6f-47ff-9484-b2d04b9de99f-kube-api-access-vnrpv\") pod \"ovn-controller-hqfv8-config-km75f\" (UID: \"6917425b-2a6f-47ff-9484-b2d04b9de99f\") " pod="openstack/ovn-controller-hqfv8-config-km75f" Dec 09 12:27:57 crc kubenswrapper[4970]: I1209 12:27:57.934546 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6917425b-2a6f-47ff-9484-b2d04b9de99f-scripts\") pod \"ovn-controller-hqfv8-config-km75f\" (UID: \"6917425b-2a6f-47ff-9484-b2d04b9de99f\") " pod="openstack/ovn-controller-hqfv8-config-km75f" Dec 09 
12:27:57 crc kubenswrapper[4970]: I1209 12:27:57.934630 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6917425b-2a6f-47ff-9484-b2d04b9de99f-var-log-ovn\") pod \"ovn-controller-hqfv8-config-km75f\" (UID: \"6917425b-2a6f-47ff-9484-b2d04b9de99f\") " pod="openstack/ovn-controller-hqfv8-config-km75f" Dec 09 12:27:57 crc kubenswrapper[4970]: I1209 12:27:57.934658 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnrpv\" (UniqueName: \"kubernetes.io/projected/6917425b-2a6f-47ff-9484-b2d04b9de99f-kube-api-access-vnrpv\") pod \"ovn-controller-hqfv8-config-km75f\" (UID: \"6917425b-2a6f-47ff-9484-b2d04b9de99f\") " pod="openstack/ovn-controller-hqfv8-config-km75f" Dec 09 12:27:57 crc kubenswrapper[4970]: I1209 12:27:57.934705 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6917425b-2a6f-47ff-9484-b2d04b9de99f-var-run-ovn\") pod \"ovn-controller-hqfv8-config-km75f\" (UID: \"6917425b-2a6f-47ff-9484-b2d04b9de99f\") " pod="openstack/ovn-controller-hqfv8-config-km75f" Dec 09 12:27:57 crc kubenswrapper[4970]: I1209 12:27:57.934802 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6917425b-2a6f-47ff-9484-b2d04b9de99f-var-run\") pod \"ovn-controller-hqfv8-config-km75f\" (UID: \"6917425b-2a6f-47ff-9484-b2d04b9de99f\") " pod="openstack/ovn-controller-hqfv8-config-km75f" Dec 09 12:27:57 crc kubenswrapper[4970]: I1209 12:27:57.934826 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6917425b-2a6f-47ff-9484-b2d04b9de99f-additional-scripts\") pod \"ovn-controller-hqfv8-config-km75f\" (UID: \"6917425b-2a6f-47ff-9484-b2d04b9de99f\") " pod="openstack/ovn-controller-hqfv8-config-km75f" Dec 09 12:27:57 crc kubenswrapper[4970]: I1209 12:27:57.935009 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6917425b-2a6f-47ff-9484-b2d04b9de99f-var-log-ovn\") pod \"ovn-controller-hqfv8-config-km75f\" (UID: \"6917425b-2a6f-47ff-9484-b2d04b9de99f\") " pod="openstack/ovn-controller-hqfv8-config-km75f" Dec 09 12:27:57 crc kubenswrapper[4970]: I1209 12:27:57.935457 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6917425b-2a6f-47ff-9484-b2d04b9de99f-var-run\") pod \"ovn-controller-hqfv8-config-km75f\" (UID: \"6917425b-2a6f-47ff-9484-b2d04b9de99f\") " pod="openstack/ovn-controller-hqfv8-config-km75f" Dec 09 12:27:57 crc kubenswrapper[4970]: I1209 12:27:57.935515 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6917425b-2a6f-47ff-9484-b2d04b9de99f-var-run-ovn\") pod \"ovn-controller-hqfv8-config-km75f\" (UID: \"6917425b-2a6f-47ff-9484-b2d04b9de99f\") " pod="openstack/ovn-controller-hqfv8-config-km75f" Dec 09 12:27:57 crc kubenswrapper[4970]: I1209 12:27:57.935581 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6917425b-2a6f-47ff-9484-b2d04b9de99f-additional-scripts\") pod \"ovn-controller-hqfv8-config-km75f\" (UID: \"6917425b-2a6f-47ff-9484-b2d04b9de99f\") " pod="openstack/ovn-controller-hqfv8-config-km75f" Dec 09 12:27:57 crc 
kubenswrapper[4970]: I1209 12:27:57.937079 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6917425b-2a6f-47ff-9484-b2d04b9de99f-scripts\") pod \"ovn-controller-hqfv8-config-km75f\" (UID: \"6917425b-2a6f-47ff-9484-b2d04b9de99f\") " pod="openstack/ovn-controller-hqfv8-config-km75f" Dec 09 12:27:57 crc kubenswrapper[4970]: I1209 12:27:57.951513 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnrpv\" (UniqueName: \"kubernetes.io/projected/6917425b-2a6f-47ff-9484-b2d04b9de99f-kube-api-access-vnrpv\") pod \"ovn-controller-hqfv8-config-km75f\" (UID: \"6917425b-2a6f-47ff-9484-b2d04b9de99f\") " pod="openstack/ovn-controller-hqfv8-config-km75f" Dec 09 12:27:58 crc kubenswrapper[4970]: I1209 12:27:58.030714 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hqfv8-config-km75f" Dec 09 12:27:58 crc kubenswrapper[4970]: I1209 12:27:58.389819 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-hqfv8-config-km75f"] Dec 09 12:27:58 crc kubenswrapper[4970]: W1209 12:27:58.392319 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6917425b_2a6f_47ff_9484_b2d04b9de99f.slice/crio-73457c99d1377d762e5e2b03cdcfe01d4d968bad6da8bd0ebd608873fc30e228 WatchSource:0}: Error finding container 73457c99d1377d762e5e2b03cdcfe01d4d968bad6da8bd0ebd608873fc30e228: Status 404 returned error can't find the container with id 73457c99d1377d762e5e2b03cdcfe01d4d968bad6da8bd0ebd608873fc30e228 Dec 09 12:27:58 crc kubenswrapper[4970]: I1209 12:27:58.490291 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hqfv8-config-km75f" event={"ID":"6917425b-2a6f-47ff-9484-b2d04b9de99f","Type":"ContainerStarted","Data":"73457c99d1377d762e5e2b03cdcfe01d4d968bad6da8bd0ebd608873fc30e228"} Dec 09 12:27:58 crc kubenswrapper[4970]: I1209 12:27:58.491937 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3e993a7f-0aee-41c5-adb3-a3becd49066f","Type":"ContainerStarted","Data":"0bbc32fc98d34c71e88381d299770ad598de32b669a3d5abc9f85d2de3f8f860"} Dec 09 12:27:58 crc kubenswrapper[4970]: I1209 12:27:58.496901 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"bee29a58-7867-4543-bb4e-c19528625b1a","Type":"ContainerStarted","Data":"5922f59750dac547571bc99d8abb998a4eb20ec87eb108b2725a417bc807f0c7"} Dec 09 12:27:58 crc kubenswrapper[4970]: I1209 12:27:58.496952 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"bee29a58-7867-4543-bb4e-c19528625b1a","Type":"ContainerStarted","Data":"e63c8c811a0073d5c20ec4387c2faff96ad533c3ae73455037ec711d1969e994"} Dec 09 12:27:59 crc kubenswrapper[4970]: I1209 12:27:59.511707 4970 generic.go:334] "Generic (PLEG): container finished" podID="6917425b-2a6f-47ff-9484-b2d04b9de99f" containerID="922d19a539a2a759d1325ebec5dc39d38028ddf461b942be19c26a3e816870d4" exitCode=0 Dec 09 12:27:59 crc kubenswrapper[4970]: I1209 12:27:59.511825 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hqfv8-config-km75f" event={"ID":"6917425b-2a6f-47ff-9484-b2d04b9de99f","Type":"ContainerDied","Data":"922d19a539a2a759d1325ebec5dc39d38028ddf461b942be19c26a3e816870d4"} Dec 09 12:27:59 crc kubenswrapper[4970]: I1209 12:27:59.516721 4970 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"bee29a58-7867-4543-bb4e-c19528625b1a","Type":"ContainerStarted","Data":"4c68d3b71a61a3ecbfd6fb03bf74fe57b80987ab6ce718eb73d10328ba578a3d"} Dec 09 12:27:59 crc kubenswrapper[4970]: I1209 12:27:59.516762 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"bee29a58-7867-4543-bb4e-c19528625b1a","Type":"ContainerStarted","Data":"b83632a04babf8d5299d016bcff27f7b235eb145e0d543b90ac5ccdce094ea82"} Dec 09 12:28:00 crc kubenswrapper[4970]: I1209 12:28:00.529792 4970 generic.go:334] "Generic (PLEG): container finished" podID="a416bd4a-683a-43cf-867a-fb60427671a4" containerID="4b94b6459129954a1cb2c6e4fd36ce3dc3d56ffab0e838cd5aa32637d243acf1" exitCode=0 Dec 09 12:28:00 crc kubenswrapper[4970]: I1209 12:28:00.529851 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-k4956" event={"ID":"a416bd4a-683a-43cf-867a-fb60427671a4","Type":"ContainerDied","Data":"4b94b6459129954a1cb2c6e4fd36ce3dc3d56ffab0e838cd5aa32637d243acf1"} Dec 09 12:28:00 crc kubenswrapper[4970]: I1209 12:28:00.961015 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hqfv8-config-km75f" Dec 09 12:28:01 crc kubenswrapper[4970]: I1209 12:28:01.100364 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6917425b-2a6f-47ff-9484-b2d04b9de99f-additional-scripts\") pod \"6917425b-2a6f-47ff-9484-b2d04b9de99f\" (UID: \"6917425b-2a6f-47ff-9484-b2d04b9de99f\") " Dec 09 12:28:01 crc kubenswrapper[4970]: I1209 12:28:01.100539 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6917425b-2a6f-47ff-9484-b2d04b9de99f-var-log-ovn\") pod \"6917425b-2a6f-47ff-9484-b2d04b9de99f\" (UID: \"6917425b-2a6f-47ff-9484-b2d04b9de99f\") " Dec 09 12:28:01 crc kubenswrapper[4970]: I1209 12:28:01.100571 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6917425b-2a6f-47ff-9484-b2d04b9de99f-var-run-ovn\") pod \"6917425b-2a6f-47ff-9484-b2d04b9de99f\" (UID: \"6917425b-2a6f-47ff-9484-b2d04b9de99f\") " Dec 09 12:28:01 crc kubenswrapper[4970]: I1209 12:28:01.100629 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6917425b-2a6f-47ff-9484-b2d04b9de99f-var-run\") pod \"6917425b-2a6f-47ff-9484-b2d04b9de99f\" (UID: \"6917425b-2a6f-47ff-9484-b2d04b9de99f\") " Dec 09 12:28:01 crc kubenswrapper[4970]: I1209 12:28:01.100615 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6917425b-2a6f-47ff-9484-b2d04b9de99f-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "6917425b-2a6f-47ff-9484-b2d04b9de99f" (UID: "6917425b-2a6f-47ff-9484-b2d04b9de99f"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:28:01 crc kubenswrapper[4970]: I1209 12:28:01.100662 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6917425b-2a6f-47ff-9484-b2d04b9de99f-scripts\") pod \"6917425b-2a6f-47ff-9484-b2d04b9de99f\" (UID: \"6917425b-2a6f-47ff-9484-b2d04b9de99f\") " Dec 09 12:28:01 crc kubenswrapper[4970]: I1209 12:28:01.100691 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6917425b-2a6f-47ff-9484-b2d04b9de99f-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "6917425b-2a6f-47ff-9484-b2d04b9de99f" (UID: "6917425b-2a6f-47ff-9484-b2d04b9de99f"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:28:01 crc kubenswrapper[4970]: I1209 12:28:01.100718 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6917425b-2a6f-47ff-9484-b2d04b9de99f-var-run" (OuterVolumeSpecName: "var-run") pod "6917425b-2a6f-47ff-9484-b2d04b9de99f" (UID: "6917425b-2a6f-47ff-9484-b2d04b9de99f"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:28:01 crc kubenswrapper[4970]: I1209 12:28:01.100772 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnrpv\" (UniqueName: \"kubernetes.io/projected/6917425b-2a6f-47ff-9484-b2d04b9de99f-kube-api-access-vnrpv\") pod \"6917425b-2a6f-47ff-9484-b2d04b9de99f\" (UID: \"6917425b-2a6f-47ff-9484-b2d04b9de99f\") " Dec 09 12:28:01 crc kubenswrapper[4970]: I1209 12:28:01.101162 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6917425b-2a6f-47ff-9484-b2d04b9de99f-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "6917425b-2a6f-47ff-9484-b2d04b9de99f" (UID: "6917425b-2a6f-47ff-9484-b2d04b9de99f"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:28:01 crc kubenswrapper[4970]: I1209 12:28:01.101509 4970 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6917425b-2a6f-47ff-9484-b2d04b9de99f-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:01 crc kubenswrapper[4970]: I1209 12:28:01.101530 4970 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6917425b-2a6f-47ff-9484-b2d04b9de99f-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:01 crc kubenswrapper[4970]: I1209 12:28:01.101538 4970 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6917425b-2a6f-47ff-9484-b2d04b9de99f-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:01 crc kubenswrapper[4970]: I1209 12:28:01.101547 4970 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6917425b-2a6f-47ff-9484-b2d04b9de99f-var-run\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:01 crc kubenswrapper[4970]: I1209 12:28:01.101621 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6917425b-2a6f-47ff-9484-b2d04b9de99f-scripts" (OuterVolumeSpecName: "scripts") pod "6917425b-2a6f-47ff-9484-b2d04b9de99f" (UID: "6917425b-2a6f-47ff-9484-b2d04b9de99f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:28:01 crc kubenswrapper[4970]: I1209 12:28:01.107670 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6917425b-2a6f-47ff-9484-b2d04b9de99f-kube-api-access-vnrpv" (OuterVolumeSpecName: "kube-api-access-vnrpv") pod "6917425b-2a6f-47ff-9484-b2d04b9de99f" (UID: "6917425b-2a6f-47ff-9484-b2d04b9de99f"). InnerVolumeSpecName "kube-api-access-vnrpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:28:01 crc kubenswrapper[4970]: I1209 12:28:01.203438 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6917425b-2a6f-47ff-9484-b2d04b9de99f-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:01 crc kubenswrapper[4970]: I1209 12:28:01.203490 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnrpv\" (UniqueName: \"kubernetes.io/projected/6917425b-2a6f-47ff-9484-b2d04b9de99f-kube-api-access-vnrpv\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:01 crc kubenswrapper[4970]: I1209 12:28:01.542719 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"bee29a58-7867-4543-bb4e-c19528625b1a","Type":"ContainerStarted","Data":"4d14ad2ce299a658fd885c591d2e0d07c0523736fa3f3131208e3867677d1741"} Dec 09 12:28:01 crc kubenswrapper[4970]: I1209 12:28:01.543088 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"bee29a58-7867-4543-bb4e-c19528625b1a","Type":"ContainerStarted","Data":"ac8331a601f14169de060c700314ffe0f8fc14e3ae024973ecae14c6abdcc94f"} Dec 09 12:28:01 crc kubenswrapper[4970]: I1209 12:28:01.543105 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"bee29a58-7867-4543-bb4e-c19528625b1a","Type":"ContainerStarted","Data":"1ef2f53657952bef2ecd0a9ec784cde290b3eb8bc4000e63067064de227b0489"} Dec 09 12:28:01 crc kubenswrapper[4970]: I1209 12:28:01.543115 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"bee29a58-7867-4543-bb4e-c19528625b1a","Type":"ContainerStarted","Data":"34e1a0806158a3697e0667338cfa9553d007a32d48427cb1bb5d315eeed0f20d"} Dec 09 12:28:01 crc kubenswrapper[4970]: I1209 12:28:01.544587 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-hqfv8-config-km75f" event={"ID":"6917425b-2a6f-47ff-9484-b2d04b9de99f","Type":"ContainerDied","Data":"73457c99d1377d762e5e2b03cdcfe01d4d968bad6da8bd0ebd608873fc30e228"} Dec 09 12:28:01 crc kubenswrapper[4970]: I1209 12:28:01.544620 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73457c99d1377d762e5e2b03cdcfe01d4d968bad6da8bd0ebd608873fc30e228" Dec 09 12:28:01 crc kubenswrapper[4970]: I1209 12:28:01.544722 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-hqfv8-config-km75f" Dec 09 12:28:01 crc kubenswrapper[4970]: I1209 12:28:01.548073 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3e993a7f-0aee-41c5-adb3-a3becd49066f","Type":"ContainerStarted","Data":"ec0b1ac6d87dc8b7048bbb5706bc40af14a490af4384ad21eeca8bc4a791dcd8"} Dec 09 12:28:01 crc kubenswrapper[4970]: I1209 12:28:01.908833 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-k4956" Dec 09 12:28:02 crc kubenswrapper[4970]: I1209 12:28:02.019839 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a416bd4a-683a-43cf-867a-fb60427671a4-combined-ca-bundle\") pod \"a416bd4a-683a-43cf-867a-fb60427671a4\" (UID: \"a416bd4a-683a-43cf-867a-fb60427671a4\") " Dec 09 12:28:02 crc kubenswrapper[4970]: I1209 12:28:02.021072 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a416bd4a-683a-43cf-867a-fb60427671a4-config-data\") pod \"a416bd4a-683a-43cf-867a-fb60427671a4\" (UID: \"a416bd4a-683a-43cf-867a-fb60427671a4\") " Dec 09 12:28:02 crc kubenswrapper[4970]: I1209 12:28:02.021118 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rxq2\" (UniqueName: \"kubernetes.io/projected/a416bd4a-683a-43cf-867a-fb60427671a4-kube-api-access-9rxq2\") pod \"a416bd4a-683a-43cf-867a-fb60427671a4\" (UID: \"a416bd4a-683a-43cf-867a-fb60427671a4\") " Dec 09 12:28:02 crc kubenswrapper[4970]: I1209 12:28:02.026457 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a416bd4a-683a-43cf-867a-fb60427671a4-kube-api-access-9rxq2" (OuterVolumeSpecName: "kube-api-access-9rxq2") pod "a416bd4a-683a-43cf-867a-fb60427671a4" (UID: "a416bd4a-683a-43cf-867a-fb60427671a4"). InnerVolumeSpecName "kube-api-access-9rxq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:28:02 crc kubenswrapper[4970]: I1209 12:28:02.069122 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a416bd4a-683a-43cf-867a-fb60427671a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a416bd4a-683a-43cf-867a-fb60427671a4" (UID: "a416bd4a-683a-43cf-867a-fb60427671a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:02 crc kubenswrapper[4970]: I1209 12:28:02.080085 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-hqfv8-config-km75f"] Dec 09 12:28:02 crc kubenswrapper[4970]: I1209 12:28:02.089382 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-hqfv8-config-km75f"] Dec 09 12:28:02 crc kubenswrapper[4970]: I1209 12:28:02.105763 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a416bd4a-683a-43cf-867a-fb60427671a4-config-data" (OuterVolumeSpecName: "config-data") pod "a416bd4a-683a-43cf-867a-fb60427671a4" (UID: "a416bd4a-683a-43cf-867a-fb60427671a4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:02 crc kubenswrapper[4970]: I1209 12:28:02.123980 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a416bd4a-683a-43cf-867a-fb60427671a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:02 crc kubenswrapper[4970]: I1209 12:28:02.124013 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a416bd4a-683a-43cf-867a-fb60427671a4-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:02 crc kubenswrapper[4970]: I1209 12:28:02.124024 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rxq2\" (UniqueName: \"kubernetes.io/projected/a416bd4a-683a-43cf-867a-fb60427671a4-kube-api-access-9rxq2\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:02 crc kubenswrapper[4970]: I1209 12:28:02.383923 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-hqfv8" Dec 09 12:28:02 crc kubenswrapper[4970]: I1209 12:28:02.589184 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-k4956" event={"ID":"a416bd4a-683a-43cf-867a-fb60427671a4","Type":"ContainerDied","Data":"ea231bf21d8b55a2d68812cf5c45b63e8d9274f58a8b86684cf3c7f69712a852"} Dec 09 12:28:02 crc kubenswrapper[4970]: I1209 12:28:02.589206 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-k4956" Dec 09 12:28:02 crc kubenswrapper[4970]: I1209 12:28:02.589224 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea231bf21d8b55a2d68812cf5c45b63e8d9274f58a8b86684cf3c7f69712a852" Dec 09 12:28:02 crc kubenswrapper[4970]: I1209 12:28:02.802872 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-9txbz"] Dec 09 12:28:02 crc kubenswrapper[4970]: E1209 12:28:02.803638 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a416bd4a-683a-43cf-867a-fb60427671a4" containerName="keystone-db-sync" Dec 09 12:28:02 crc kubenswrapper[4970]: I1209 12:28:02.803657 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="a416bd4a-683a-43cf-867a-fb60427671a4" containerName="keystone-db-sync" Dec 09 12:28:02 crc kubenswrapper[4970]: E1209 12:28:02.803692 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6917425b-2a6f-47ff-9484-b2d04b9de99f" containerName="ovn-config" Dec 09 12:28:02 crc kubenswrapper[4970]: I1209 12:28:02.803701 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="6917425b-2a6f-47ff-9484-b2d04b9de99f" containerName="ovn-config" Dec 09 12:28:02 crc kubenswrapper[4970]: I1209 12:28:02.803882 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="a416bd4a-683a-43cf-867a-fb60427671a4" containerName="keystone-db-sync" Dec 09 12:28:02 crc kubenswrapper[4970]: I1209 12:28:02.803904 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="6917425b-2a6f-47ff-9484-b2d04b9de99f" containerName="ovn-config" Dec 09 12:28:02 crc kubenswrapper[4970]: I1209 12:28:02.805010 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-9txbz" Dec 09 12:28:02 crc kubenswrapper[4970]: I1209 12:28:02.851394 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-9txbz"] Dec 09 12:28:02 crc kubenswrapper[4970]: I1209 12:28:02.912404 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-wqsvd"] Dec 09 12:28:02 crc kubenswrapper[4970]: I1209 12:28:02.965465 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3aff5ba-914e-41db-af59-7568f0e5ff5e-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-9txbz\" (UID: \"b3aff5ba-914e-41db-af59-7568f0e5ff5e\") " pod="openstack/dnsmasq-dns-f877ddd87-9txbz" Dec 09 12:28:02 crc kubenswrapper[4970]: I1209 12:28:02.965511 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3aff5ba-914e-41db-af59-7568f0e5ff5e-dns-svc\") pod \"dnsmasq-dns-f877ddd87-9txbz\" (UID: \"b3aff5ba-914e-41db-af59-7568f0e5ff5e\") " pod="openstack/dnsmasq-dns-f877ddd87-9txbz" Dec 09 12:28:02 crc kubenswrapper[4970]: I1209 12:28:02.968960 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg4tm\" (UniqueName: \"kubernetes.io/projected/b3aff5ba-914e-41db-af59-7568f0e5ff5e-kube-api-access-sg4tm\") pod \"dnsmasq-dns-f877ddd87-9txbz\" (UID: \"b3aff5ba-914e-41db-af59-7568f0e5ff5e\") " pod="openstack/dnsmasq-dns-f877ddd87-9txbz" Dec 09 12:28:02 crc kubenswrapper[4970]: I1209 12:28:02.971089 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wqsvd" Dec 09 12:28:02 crc kubenswrapper[4970]: I1209 12:28:02.974349 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 09 12:28:02 crc kubenswrapper[4970]: I1209 12:28:02.974569 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-m6jjm" Dec 09 12:28:02 crc kubenswrapper[4970]: I1209 12:28:02.977913 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 09 12:28:02 crc kubenswrapper[4970]: I1209 12:28:02.978214 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 09 12:28:02 crc kubenswrapper[4970]: I1209 12:28:02.978962 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wqsvd"] Dec 09 12:28:02 crc kubenswrapper[4970]: I1209 12:28:02.979164 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3aff5ba-914e-41db-af59-7568f0e5ff5e-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-9txbz\" (UID: \"b3aff5ba-914e-41db-af59-7568f0e5ff5e\") " pod="openstack/dnsmasq-dns-f877ddd87-9txbz" Dec 09 12:28:02 crc kubenswrapper[4970]: I1209 12:28:02.982768 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3aff5ba-914e-41db-af59-7568f0e5ff5e-config\") pod \"dnsmasq-dns-f877ddd87-9txbz\" (UID: \"b3aff5ba-914e-41db-af59-7568f0e5ff5e\") " pod="openstack/dnsmasq-dns-f877ddd87-9txbz" Dec 09 12:28:02 crc kubenswrapper[4970]: I1209 12:28:02.980065 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" 
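Each entry above carries the standard klog header (severity letter plus MMDD date, wall-clock time, PID, source file:line) followed by a structured message; the container lifecycle in this capture is easiest to follow through the "SyncLoop (PLEG)" events. Below is a minimal Go sketch for pulling those events out of a journal capture like this one — it is not part of kubelet, and the file name pleg.go and the exact regular expression are illustrative assumptions based only on the log format visible here:

    // pleg.go: summarize kubelet "SyncLoop (PLEG)" container-lifecycle events
    // from a journal capture on stdin. A sketch; the regexp mirrors the klog
    // format seen in this log, not a stable kubelet interface.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    var plegRe = regexp.MustCompile(
        `I\d{4} (\d{2}:\d{2}:\d{2}\.\d+)\s+\d+\s+kubelet\.go:\d+\] ` +
            `"SyncLoop \(PLEG\): event for pod" pod="([^"]+)" ` +
            `event=\{"ID":"([^"]+)","Type":"([^"]+)","Data":"([^"]+)"\}`)

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // lines in this capture run to several KB
        for sc.Scan() {
            // One physical line may hold several journal entries, so collect
            // every match on the line rather than only the first.
            for _, m := range plegRe.FindAllStringSubmatch(sc.Text(), -1) {
                ts, pod, typ, id := m[1], m[2], m[4], m[5]
                fmt.Printf("%s  %-45s %-16s %.12s\n", ts, pod, typ, id)
            }
        }
    }

Run against this excerpt (e.g. go run pleg.go < kubelet.journal, file name assumed), it would print one line per event — the ContainerStarted/ContainerDied pair for openstack/ovn-controller-hqfv8-config-km75f and the stream of ContainerStarted events for openstack/swift-storage-0 — with the container or sandbox ID truncated to twelve characters for readability. "Generic (PLEG): container finished" entries use a different message and are deliberately out of scope for this sketch.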
Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.005785 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-kg6zl"] Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.007687 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-kg6zl" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.017189 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.017528 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-lp2xp" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.017624 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-kg6zl"] Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.045899 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-cltzd"] Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.049639 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-cltzd"] Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.049769 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-cltzd" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.071668 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.071891 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-xrdl8" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.072048 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.088445 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b557cf4-3787-453c-a1f5-813bb5c50a04-config-data\") pod \"keystone-bootstrap-wqsvd\" (UID: \"6b557cf4-3787-453c-a1f5-813bb5c50a04\") " pod="openstack/keystone-bootstrap-wqsvd" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.088495 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sxtn\" (UniqueName: \"kubernetes.io/projected/6b557cf4-3787-453c-a1f5-813bb5c50a04-kube-api-access-8sxtn\") pod \"keystone-bootstrap-wqsvd\" (UID: \"6b557cf4-3787-453c-a1f5-813bb5c50a04\") " pod="openstack/keystone-bootstrap-wqsvd" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.088518 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6b557cf4-3787-453c-a1f5-813bb5c50a04-credential-keys\") pod \"keystone-bootstrap-wqsvd\" (UID: \"6b557cf4-3787-453c-a1f5-813bb5c50a04\") " pod="openstack/keystone-bootstrap-wqsvd" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.088542 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg4tm\" (UniqueName: \"kubernetes.io/projected/b3aff5ba-914e-41db-af59-7568f0e5ff5e-kube-api-access-sg4tm\") pod \"dnsmasq-dns-f877ddd87-9txbz\" (UID: \"b3aff5ba-914e-41db-af59-7568f0e5ff5e\") " pod="openstack/dnsmasq-dns-f877ddd87-9txbz" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.088564 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b557cf4-3787-453c-a1f5-813bb5c50a04-combined-ca-bundle\") pod \"keystone-bootstrap-wqsvd\" (UID: \"6b557cf4-3787-453c-a1f5-813bb5c50a04\") " pod="openstack/keystone-bootstrap-wqsvd" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.088589 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3aff5ba-914e-41db-af59-7568f0e5ff5e-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-9txbz\" (UID: \"b3aff5ba-914e-41db-af59-7568f0e5ff5e\") " pod="openstack/dnsmasq-dns-f877ddd87-9txbz" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.088622 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3aff5ba-914e-41db-af59-7568f0e5ff5e-config\") pod \"dnsmasq-dns-f877ddd87-9txbz\" (UID: \"b3aff5ba-914e-41db-af59-7568f0e5ff5e\") " pod="openstack/dnsmasq-dns-f877ddd87-9txbz" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.088644 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b557cf4-3787-453c-a1f5-813bb5c50a04-scripts\") pod \"keystone-bootstrap-wqsvd\" (UID: \"6b557cf4-3787-453c-a1f5-813bb5c50a04\") " pod="openstack/keystone-bootstrap-wqsvd" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.088674 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3aff5ba-914e-41db-af59-7568f0e5ff5e-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-9txbz\" (UID: \"b3aff5ba-914e-41db-af59-7568f0e5ff5e\") " pod="openstack/dnsmasq-dns-f877ddd87-9txbz" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.088690 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3015f85e-5d86-4906-9d9a-8389330bcb82-combined-ca-bundle\") pod \"heat-db-sync-kg6zl\" (UID: \"3015f85e-5d86-4906-9d9a-8389330bcb82\") " pod="openstack/heat-db-sync-kg6zl" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.088707 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3aff5ba-914e-41db-af59-7568f0e5ff5e-dns-svc\") pod \"dnsmasq-dns-f877ddd87-9txbz\" (UID: \"b3aff5ba-914e-41db-af59-7568f0e5ff5e\") " pod="openstack/dnsmasq-dns-f877ddd87-9txbz" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.088727 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk7r4\" (UniqueName: \"kubernetes.io/projected/3015f85e-5d86-4906-9d9a-8389330bcb82-kube-api-access-xk7r4\") pod \"heat-db-sync-kg6zl\" (UID: \"3015f85e-5d86-4906-9d9a-8389330bcb82\") " pod="openstack/heat-db-sync-kg6zl" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.088747 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6b557cf4-3787-453c-a1f5-813bb5c50a04-fernet-keys\") pod \"keystone-bootstrap-wqsvd\" (UID: \"6b557cf4-3787-453c-a1f5-813bb5c50a04\") " pod="openstack/keystone-bootstrap-wqsvd" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.088834 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3015f85e-5d86-4906-9d9a-8389330bcb82-config-data\") pod \"heat-db-sync-kg6zl\" (UID: \"3015f85e-5d86-4906-9d9a-8389330bcb82\") " pod="openstack/heat-db-sync-kg6zl" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.090161 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3aff5ba-914e-41db-af59-7568f0e5ff5e-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-9txbz\" (UID: \"b3aff5ba-914e-41db-af59-7568f0e5ff5e\") " pod="openstack/dnsmasq-dns-f877ddd87-9txbz" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.090769 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3aff5ba-914e-41db-af59-7568f0e5ff5e-config\") pod \"dnsmasq-dns-f877ddd87-9txbz\" (UID: \"b3aff5ba-914e-41db-af59-7568f0e5ff5e\") " pod="openstack/dnsmasq-dns-f877ddd87-9txbz" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.091190 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3aff5ba-914e-41db-af59-7568f0e5ff5e-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-9txbz\" (UID: \"b3aff5ba-914e-41db-af59-7568f0e5ff5e\") " pod="openstack/dnsmasq-dns-f877ddd87-9txbz" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.098263 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3aff5ba-914e-41db-af59-7568f0e5ff5e-dns-svc\") pod \"dnsmasq-dns-f877ddd87-9txbz\" (UID: \"b3aff5ba-914e-41db-af59-7568f0e5ff5e\") " pod="openstack/dnsmasq-dns-f877ddd87-9txbz" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.138514 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg4tm\" (UniqueName: \"kubernetes.io/projected/b3aff5ba-914e-41db-af59-7568f0e5ff5e-kube-api-access-sg4tm\") pod \"dnsmasq-dns-f877ddd87-9txbz\" (UID: \"b3aff5ba-914e-41db-af59-7568f0e5ff5e\") " pod="openstack/dnsmasq-dns-f877ddd87-9txbz" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.164991 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-9txbz" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.190851 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3015f85e-5d86-4906-9d9a-8389330bcb82-config-data\") pod \"heat-db-sync-kg6zl\" (UID: \"3015f85e-5d86-4906-9d9a-8389330bcb82\") " pod="openstack/heat-db-sync-kg6zl" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.190923 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg2bl\" (UniqueName: \"kubernetes.io/projected/222db933-1bf5-4df0-aa84-362453b9ba35-kube-api-access-xg2bl\") pod \"neutron-db-sync-cltzd\" (UID: \"222db933-1bf5-4df0-aa84-362453b9ba35\") " pod="openstack/neutron-db-sync-cltzd" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.190958 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b557cf4-3787-453c-a1f5-813bb5c50a04-config-data\") pod \"keystone-bootstrap-wqsvd\" (UID: \"6b557cf4-3787-453c-a1f5-813bb5c50a04\") " pod="openstack/keystone-bootstrap-wqsvd" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.190983 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sxtn\" (UniqueName: \"kubernetes.io/projected/6b557cf4-3787-453c-a1f5-813bb5c50a04-kube-api-access-8sxtn\") pod \"keystone-bootstrap-wqsvd\" (UID: \"6b557cf4-3787-453c-a1f5-813bb5c50a04\") " pod="openstack/keystone-bootstrap-wqsvd" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.190999 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6b557cf4-3787-453c-a1f5-813bb5c50a04-credential-keys\") pod \"keystone-bootstrap-wqsvd\" (UID: \"6b557cf4-3787-453c-a1f5-813bb5c50a04\") " pod="openstack/keystone-bootstrap-wqsvd" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.191023 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b557cf4-3787-453c-a1f5-813bb5c50a04-combined-ca-bundle\") pod \"keystone-bootstrap-wqsvd\" (UID: \"6b557cf4-3787-453c-a1f5-813bb5c50a04\") " pod="openstack/keystone-bootstrap-wqsvd" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.191081 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b557cf4-3787-453c-a1f5-813bb5c50a04-scripts\") pod \"keystone-bootstrap-wqsvd\" (UID: \"6b557cf4-3787-453c-a1f5-813bb5c50a04\") " pod="openstack/keystone-bootstrap-wqsvd" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.191119 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/222db933-1bf5-4df0-aa84-362453b9ba35-combined-ca-bundle\") pod \"neutron-db-sync-cltzd\" (UID: \"222db933-1bf5-4df0-aa84-362453b9ba35\") " pod="openstack/neutron-db-sync-cltzd" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.191141 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3015f85e-5d86-4906-9d9a-8389330bcb82-combined-ca-bundle\") pod \"heat-db-sync-kg6zl\" (UID: \"3015f85e-5d86-4906-9d9a-8389330bcb82\") " pod="openstack/heat-db-sync-kg6zl" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 
12:28:03.191159 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/222db933-1bf5-4df0-aa84-362453b9ba35-config\") pod \"neutron-db-sync-cltzd\" (UID: \"222db933-1bf5-4df0-aa84-362453b9ba35\") " pod="openstack/neutron-db-sync-cltzd" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.191183 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk7r4\" (UniqueName: \"kubernetes.io/projected/3015f85e-5d86-4906-9d9a-8389330bcb82-kube-api-access-xk7r4\") pod \"heat-db-sync-kg6zl\" (UID: \"3015f85e-5d86-4906-9d9a-8389330bcb82\") " pod="openstack/heat-db-sync-kg6zl" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.191200 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6b557cf4-3787-453c-a1f5-813bb5c50a04-fernet-keys\") pod \"keystone-bootstrap-wqsvd\" (UID: \"6b557cf4-3787-453c-a1f5-813bb5c50a04\") " pod="openstack/keystone-bootstrap-wqsvd" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.199658 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-9txbz"] Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.202218 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6b557cf4-3787-453c-a1f5-813bb5c50a04-fernet-keys\") pod \"keystone-bootstrap-wqsvd\" (UID: \"6b557cf4-3787-453c-a1f5-813bb5c50a04\") " pod="openstack/keystone-bootstrap-wqsvd" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.208808 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b557cf4-3787-453c-a1f5-813bb5c50a04-combined-ca-bundle\") pod \"keystone-bootstrap-wqsvd\" (UID: \"6b557cf4-3787-453c-a1f5-813bb5c50a04\") " pod="openstack/keystone-bootstrap-wqsvd" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.209831 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-kz24r"] Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.212204 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-kz24r" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.213076 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b557cf4-3787-453c-a1f5-813bb5c50a04-scripts\") pod \"keystone-bootstrap-wqsvd\" (UID: \"6b557cf4-3787-453c-a1f5-813bb5c50a04\") " pod="openstack/keystone-bootstrap-wqsvd" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.213316 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b557cf4-3787-453c-a1f5-813bb5c50a04-config-data\") pod \"keystone-bootstrap-wqsvd\" (UID: \"6b557cf4-3787-453c-a1f5-813bb5c50a04\") " pod="openstack/keystone-bootstrap-wqsvd" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.218858 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3015f85e-5d86-4906-9d9a-8389330bcb82-config-data\") pod \"heat-db-sync-kg6zl\" (UID: \"3015f85e-5d86-4906-9d9a-8389330bcb82\") " pod="openstack/heat-db-sync-kg6zl" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.220938 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6b557cf4-3787-453c-a1f5-813bb5c50a04-credential-keys\") pod \"keystone-bootstrap-wqsvd\" (UID: \"6b557cf4-3787-453c-a1f5-813bb5c50a04\") " pod="openstack/keystone-bootstrap-wqsvd" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.228363 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3015f85e-5d86-4906-9d9a-8389330bcb82-combined-ca-bundle\") pod \"heat-db-sync-kg6zl\" (UID: \"3015f85e-5d86-4906-9d9a-8389330bcb82\") " pod="openstack/heat-db-sync-kg6zl" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.240074 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sxtn\" (UniqueName: \"kubernetes.io/projected/6b557cf4-3787-453c-a1f5-813bb5c50a04-kube-api-access-8sxtn\") pod \"keystone-bootstrap-wqsvd\" (UID: \"6b557cf4-3787-453c-a1f5-813bb5c50a04\") " pod="openstack/keystone-bootstrap-wqsvd" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.242119 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk7r4\" (UniqueName: \"kubernetes.io/projected/3015f85e-5d86-4906-9d9a-8389330bcb82-kube-api-access-xk7r4\") pod \"heat-db-sync-kg6zl\" (UID: \"3015f85e-5d86-4906-9d9a-8389330bcb82\") " pod="openstack/heat-db-sync-kg6zl" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.251999 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-kz24r"] Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.264694 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-rxs6p"] Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.266018 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-rxs6p" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.271310 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-849mw"] Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.275660 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-849mw" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.279008 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.280162 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.280352 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-ghltv" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.281385 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-ns52p" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.281526 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.300318 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg2bl\" (UniqueName: \"kubernetes.io/projected/222db933-1bf5-4df0-aa84-362453b9ba35-kube-api-access-xg2bl\") pod \"neutron-db-sync-cltzd\" (UID: \"222db933-1bf5-4df0-aa84-362453b9ba35\") " pod="openstack/neutron-db-sync-cltzd" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.300387 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47def595-51d1-4f88-9536-cf41061aae5e-config\") pod \"dnsmasq-dns-68dcc9cf6f-kz24r\" (UID: \"47def595-51d1-4f88-9536-cf41061aae5e\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-kz24r" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.300444 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47def595-51d1-4f88-9536-cf41061aae5e-ovsdbserver-nb\") pod \"dnsmasq-dns-68dcc9cf6f-kz24r\" (UID: \"47def595-51d1-4f88-9536-cf41061aae5e\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-kz24r" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.300623 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/222db933-1bf5-4df0-aa84-362453b9ba35-combined-ca-bundle\") pod \"neutron-db-sync-cltzd\" (UID: \"222db933-1bf5-4df0-aa84-362453b9ba35\") " pod="openstack/neutron-db-sync-cltzd" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.300658 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/222db933-1bf5-4df0-aa84-362453b9ba35-config\") pod \"neutron-db-sync-cltzd\" (UID: \"222db933-1bf5-4df0-aa84-362453b9ba35\") " pod="openstack/neutron-db-sync-cltzd" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.300689 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47def595-51d1-4f88-9536-cf41061aae5e-ovsdbserver-sb\") pod \"dnsmasq-dns-68dcc9cf6f-kz24r\" (UID: \"47def595-51d1-4f88-9536-cf41061aae5e\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-kz24r" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.300747 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dngxc\" (UniqueName: \"kubernetes.io/projected/47def595-51d1-4f88-9536-cf41061aae5e-kube-api-access-dngxc\") pod 
\"dnsmasq-dns-68dcc9cf6f-kz24r\" (UID: \"47def595-51d1-4f88-9536-cf41061aae5e\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-kz24r" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.300772 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47def595-51d1-4f88-9536-cf41061aae5e-dns-svc\") pod \"dnsmasq-dns-68dcc9cf6f-kz24r\" (UID: \"47def595-51d1-4f88-9536-cf41061aae5e\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-kz24r" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.305288 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/222db933-1bf5-4df0-aa84-362453b9ba35-combined-ca-bundle\") pod \"neutron-db-sync-cltzd\" (UID: \"222db933-1bf5-4df0-aa84-362453b9ba35\") " pod="openstack/neutron-db-sync-cltzd" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.315227 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/222db933-1bf5-4df0-aa84-362453b9ba35-config\") pod \"neutron-db-sync-cltzd\" (UID: \"222db933-1bf5-4df0-aa84-362453b9ba35\") " pod="openstack/neutron-db-sync-cltzd" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.335458 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-rxs6p"] Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.343500 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg2bl\" (UniqueName: \"kubernetes.io/projected/222db933-1bf5-4df0-aa84-362453b9ba35-kube-api-access-xg2bl\") pod \"neutron-db-sync-cltzd\" (UID: \"222db933-1bf5-4df0-aa84-362453b9ba35\") " pod="openstack/neutron-db-sync-cltzd" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.377431 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wqsvd" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.389319 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-849mw"] Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.416053 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/43ba192b-df87-4028-a33e-4ff96d287644-db-sync-config-data\") pod \"cinder-db-sync-849mw\" (UID: \"43ba192b-df87-4028-a33e-4ff96d287644\") " pod="openstack/cinder-db-sync-849mw" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.416104 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47def595-51d1-4f88-9536-cf41061aae5e-ovsdbserver-sb\") pod \"dnsmasq-dns-68dcc9cf6f-kz24r\" (UID: \"47def595-51d1-4f88-9536-cf41061aae5e\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-kz24r" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.416146 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/973c0567-09c0-4313-8c9f-ee74a3188226-combined-ca-bundle\") pod \"barbican-db-sync-rxs6p\" (UID: \"973c0567-09c0-4313-8c9f-ee74a3188226\") " pod="openstack/barbican-db-sync-rxs6p" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.416271 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ba192b-df87-4028-a33e-4ff96d287644-combined-ca-bundle\") pod \"cinder-db-sync-849mw\" (UID: \"43ba192b-df87-4028-a33e-4ff96d287644\") " pod="openstack/cinder-db-sync-849mw" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.416302 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsh4m\" (UniqueName: \"kubernetes.io/projected/973c0567-09c0-4313-8c9f-ee74a3188226-kube-api-access-bsh4m\") pod \"barbican-db-sync-rxs6p\" (UID: \"973c0567-09c0-4313-8c9f-ee74a3188226\") " pod="openstack/barbican-db-sync-rxs6p" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.416326 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dngxc\" (UniqueName: \"kubernetes.io/projected/47def595-51d1-4f88-9536-cf41061aae5e-kube-api-access-dngxc\") pod \"dnsmasq-dns-68dcc9cf6f-kz24r\" (UID: \"47def595-51d1-4f88-9536-cf41061aae5e\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-kz24r" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.416367 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47def595-51d1-4f88-9536-cf41061aae5e-dns-svc\") pod \"dnsmasq-dns-68dcc9cf6f-kz24r\" (UID: \"47def595-51d1-4f88-9536-cf41061aae5e\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-kz24r" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.416452 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43ba192b-df87-4028-a33e-4ff96d287644-config-data\") pod \"cinder-db-sync-849mw\" (UID: \"43ba192b-df87-4028-a33e-4ff96d287644\") " pod="openstack/cinder-db-sync-849mw" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.416522 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/43ba192b-df87-4028-a33e-4ff96d287644-etc-machine-id\") pod \"cinder-db-sync-849mw\" (UID: \"43ba192b-df87-4028-a33e-4ff96d287644\") " pod="openstack/cinder-db-sync-849mw" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.416598 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwtdz\" (UniqueName: \"kubernetes.io/projected/43ba192b-df87-4028-a33e-4ff96d287644-kube-api-access-bwtdz\") pod \"cinder-db-sync-849mw\" (UID: \"43ba192b-df87-4028-a33e-4ff96d287644\") " pod="openstack/cinder-db-sync-849mw" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.416647 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47def595-51d1-4f88-9536-cf41061aae5e-config\") pod \"dnsmasq-dns-68dcc9cf6f-kz24r\" (UID: \"47def595-51d1-4f88-9536-cf41061aae5e\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-kz24r" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.416724 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/973c0567-09c0-4313-8c9f-ee74a3188226-db-sync-config-data\") pod \"barbican-db-sync-rxs6p\" (UID: \"973c0567-09c0-4313-8c9f-ee74a3188226\") " pod="openstack/barbican-db-sync-rxs6p" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.416756 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47def595-51d1-4f88-9536-cf41061aae5e-ovsdbserver-nb\") pod \"dnsmasq-dns-68dcc9cf6f-kz24r\" (UID: \"47def595-51d1-4f88-9536-cf41061aae5e\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-kz24r" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.416789 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43ba192b-df87-4028-a33e-4ff96d287644-scripts\") pod \"cinder-db-sync-849mw\" (UID: \"43ba192b-df87-4028-a33e-4ff96d287644\") " pod="openstack/cinder-db-sync-849mw" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.427330 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-kg6zl" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.431355 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47def595-51d1-4f88-9536-cf41061aae5e-ovsdbserver-nb\") pod \"dnsmasq-dns-68dcc9cf6f-kz24r\" (UID: \"47def595-51d1-4f88-9536-cf41061aae5e\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-kz24r" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.431380 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47def595-51d1-4f88-9536-cf41061aae5e-config\") pod \"dnsmasq-dns-68dcc9cf6f-kz24r\" (UID: \"47def595-51d1-4f88-9536-cf41061aae5e\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-kz24r" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.432040 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47def595-51d1-4f88-9536-cf41061aae5e-dns-svc\") pod \"dnsmasq-dns-68dcc9cf6f-kz24r\" (UID: \"47def595-51d1-4f88-9536-cf41061aae5e\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-kz24r" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.434355 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47def595-51d1-4f88-9536-cf41061aae5e-ovsdbserver-sb\") pod \"dnsmasq-dns-68dcc9cf6f-kz24r\" (UID: \"47def595-51d1-4f88-9536-cf41061aae5e\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-kz24r" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.469418 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dngxc\" (UniqueName: \"kubernetes.io/projected/47def595-51d1-4f88-9536-cf41061aae5e-kube-api-access-dngxc\") pod \"dnsmasq-dns-68dcc9cf6f-kz24r\" (UID: \"47def595-51d1-4f88-9536-cf41061aae5e\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-kz24r" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.489810 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-cltzd" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.509496 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-2bmkk"] Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.510954 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-2bmkk" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.513823 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.515378 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.517782 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-hqs7r" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.528077 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwtdz\" (UniqueName: \"kubernetes.io/projected/43ba192b-df87-4028-a33e-4ff96d287644-kube-api-access-bwtdz\") pod \"cinder-db-sync-849mw\" (UID: \"43ba192b-df87-4028-a33e-4ff96d287644\") " pod="openstack/cinder-db-sync-849mw" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.528175 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/973c0567-09c0-4313-8c9f-ee74a3188226-db-sync-config-data\") pod \"barbican-db-sync-rxs6p\" (UID: \"973c0567-09c0-4313-8c9f-ee74a3188226\") " pod="openstack/barbican-db-sync-rxs6p" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.528217 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43ba192b-df87-4028-a33e-4ff96d287644-scripts\") pod \"cinder-db-sync-849mw\" (UID: \"43ba192b-df87-4028-a33e-4ff96d287644\") " pod="openstack/cinder-db-sync-849mw" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.528310 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/43ba192b-df87-4028-a33e-4ff96d287644-db-sync-config-data\") pod \"cinder-db-sync-849mw\" (UID: \"43ba192b-df87-4028-a33e-4ff96d287644\") " pod="openstack/cinder-db-sync-849mw" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.528335 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/973c0567-09c0-4313-8c9f-ee74a3188226-combined-ca-bundle\") pod \"barbican-db-sync-rxs6p\" (UID: \"973c0567-09c0-4313-8c9f-ee74a3188226\") " pod="openstack/barbican-db-sync-rxs6p" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.528396 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ba192b-df87-4028-a33e-4ff96d287644-combined-ca-bundle\") pod \"cinder-db-sync-849mw\" (UID: \"43ba192b-df87-4028-a33e-4ff96d287644\") " pod="openstack/cinder-db-sync-849mw" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.528424 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsh4m\" (UniqueName: \"kubernetes.io/projected/973c0567-09c0-4313-8c9f-ee74a3188226-kube-api-access-bsh4m\") pod \"barbican-db-sync-rxs6p\" (UID: \"973c0567-09c0-4313-8c9f-ee74a3188226\") " pod="openstack/barbican-db-sync-rxs6p" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.528487 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43ba192b-df87-4028-a33e-4ff96d287644-config-data\") pod \"cinder-db-sync-849mw\" (UID: 
\"43ba192b-df87-4028-a33e-4ff96d287644\") " pod="openstack/cinder-db-sync-849mw" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.528526 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/43ba192b-df87-4028-a33e-4ff96d287644-etc-machine-id\") pod \"cinder-db-sync-849mw\" (UID: \"43ba192b-df87-4028-a33e-4ff96d287644\") " pod="openstack/cinder-db-sync-849mw" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.528634 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/43ba192b-df87-4028-a33e-4ff96d287644-etc-machine-id\") pod \"cinder-db-sync-849mw\" (UID: \"43ba192b-df87-4028-a33e-4ff96d287644\") " pod="openstack/cinder-db-sync-849mw" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.540573 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43ba192b-df87-4028-a33e-4ff96d287644-scripts\") pod \"cinder-db-sync-849mw\" (UID: \"43ba192b-df87-4028-a33e-4ff96d287644\") " pod="openstack/cinder-db-sync-849mw" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.545041 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/973c0567-09c0-4313-8c9f-ee74a3188226-db-sync-config-data\") pod \"barbican-db-sync-rxs6p\" (UID: \"973c0567-09c0-4313-8c9f-ee74a3188226\") " pod="openstack/barbican-db-sync-rxs6p" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.547982 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/43ba192b-df87-4028-a33e-4ff96d287644-db-sync-config-data\") pod \"cinder-db-sync-849mw\" (UID: \"43ba192b-df87-4028-a33e-4ff96d287644\") " pod="openstack/cinder-db-sync-849mw" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.548445 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/973c0567-09c0-4313-8c9f-ee74a3188226-combined-ca-bundle\") pod \"barbican-db-sync-rxs6p\" (UID: \"973c0567-09c0-4313-8c9f-ee74a3188226\") " pod="openstack/barbican-db-sync-rxs6p" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.548852 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsh4m\" (UniqueName: \"kubernetes.io/projected/973c0567-09c0-4313-8c9f-ee74a3188226-kube-api-access-bsh4m\") pod \"barbican-db-sync-rxs6p\" (UID: \"973c0567-09c0-4313-8c9f-ee74a3188226\") " pod="openstack/barbican-db-sync-rxs6p" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.552869 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwtdz\" (UniqueName: \"kubernetes.io/projected/43ba192b-df87-4028-a33e-4ff96d287644-kube-api-access-bwtdz\") pod \"cinder-db-sync-849mw\" (UID: \"43ba192b-df87-4028-a33e-4ff96d287644\") " pod="openstack/cinder-db-sync-849mw" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.552882 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43ba192b-df87-4028-a33e-4ff96d287644-config-data\") pod \"cinder-db-sync-849mw\" (UID: \"43ba192b-df87-4028-a33e-4ff96d287644\") " pod="openstack/cinder-db-sync-849mw" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.555904 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ba192b-df87-4028-a33e-4ff96d287644-combined-ca-bundle\") pod \"cinder-db-sync-849mw\" (UID: \"43ba192b-df87-4028-a33e-4ff96d287644\") " pod="openstack/cinder-db-sync-849mw" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.583237 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-2bmkk"] Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.631005 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/027edf40-c863-46a8-8950-322f56db87d3-scripts\") pod \"placement-db-sync-2bmkk\" (UID: \"027edf40-c863-46a8-8950-322f56db87d3\") " pod="openstack/placement-db-sync-2bmkk" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.631055 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/027edf40-c863-46a8-8950-322f56db87d3-combined-ca-bundle\") pod \"placement-db-sync-2bmkk\" (UID: \"027edf40-c863-46a8-8950-322f56db87d3\") " pod="openstack/placement-db-sync-2bmkk" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.631097 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn5fm\" (UniqueName: \"kubernetes.io/projected/027edf40-c863-46a8-8950-322f56db87d3-kube-api-access-cn5fm\") pod \"placement-db-sync-2bmkk\" (UID: \"027edf40-c863-46a8-8950-322f56db87d3\") " pod="openstack/placement-db-sync-2bmkk" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.631166 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/027edf40-c863-46a8-8950-322f56db87d3-config-data\") pod \"placement-db-sync-2bmkk\" (UID: \"027edf40-c863-46a8-8950-322f56db87d3\") " pod="openstack/placement-db-sync-2bmkk" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.631192 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/027edf40-c863-46a8-8950-322f56db87d3-logs\") pod \"placement-db-sync-2bmkk\" (UID: \"027edf40-c863-46a8-8950-322f56db87d3\") " pod="openstack/placement-db-sync-2bmkk" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.647197 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.651719 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.655965 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-kz24r" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.680836 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.681021 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-849mw" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.681856 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.693764 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-rxs6p" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.701229 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"bee29a58-7867-4543-bb4e-c19528625b1a","Type":"ContainerStarted","Data":"3ab9b65034e65424d7acd5d72ab0405832ddd29afadf23a29b8f1000c8df88a0"} Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.701298 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"bee29a58-7867-4543-bb4e-c19528625b1a","Type":"ContainerStarted","Data":"f521d585420485189c63e162b72f12e1c569c7505e4ea58851ee6662f06e643b"} Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.746336 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eff0282f-3775-4904-8937-c9e16749e3e8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eff0282f-3775-4904-8937-c9e16749e3e8\") " pod="openstack/ceilometer-0" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.746385 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eff0282f-3775-4904-8937-c9e16749e3e8-run-httpd\") pod \"ceilometer-0\" (UID: \"eff0282f-3775-4904-8937-c9e16749e3e8\") " pod="openstack/ceilometer-0" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.753918 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eff0282f-3775-4904-8937-c9e16749e3e8-scripts\") pod \"ceilometer-0\" (UID: \"eff0282f-3775-4904-8937-c9e16749e3e8\") " pod="openstack/ceilometer-0" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.754326 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff0282f-3775-4904-8937-c9e16749e3e8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eff0282f-3775-4904-8937-c9e16749e3e8\") " pod="openstack/ceilometer-0" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.754485 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eff0282f-3775-4904-8937-c9e16749e3e8-config-data\") pod \"ceilometer-0\" (UID: \"eff0282f-3775-4904-8937-c9e16749e3e8\") " pod="openstack/ceilometer-0" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.755289 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/027edf40-c863-46a8-8950-322f56db87d3-scripts\") pod \"placement-db-sync-2bmkk\" (UID: \"027edf40-c863-46a8-8950-322f56db87d3\") " pod="openstack/placement-db-sync-2bmkk" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.755442 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/027edf40-c863-46a8-8950-322f56db87d3-combined-ca-bundle\") pod \"placement-db-sync-2bmkk\" (UID: \"027edf40-c863-46a8-8950-322f56db87d3\") " pod="openstack/placement-db-sync-2bmkk" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.755484 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn5fm\" (UniqueName: \"kubernetes.io/projected/027edf40-c863-46a8-8950-322f56db87d3-kube-api-access-cn5fm\") pod 
\"placement-db-sync-2bmkk\" (UID: \"027edf40-c863-46a8-8950-322f56db87d3\") " pod="openstack/placement-db-sync-2bmkk" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.756065 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v997w\" (UniqueName: \"kubernetes.io/projected/eff0282f-3775-4904-8937-c9e16749e3e8-kube-api-access-v997w\") pod \"ceilometer-0\" (UID: \"eff0282f-3775-4904-8937-c9e16749e3e8\") " pod="openstack/ceilometer-0" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.756234 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/027edf40-c863-46a8-8950-322f56db87d3-config-data\") pod \"placement-db-sync-2bmkk\" (UID: \"027edf40-c863-46a8-8950-322f56db87d3\") " pod="openstack/placement-db-sync-2bmkk" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.756552 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/027edf40-c863-46a8-8950-322f56db87d3-logs\") pod \"placement-db-sync-2bmkk\" (UID: \"027edf40-c863-46a8-8950-322f56db87d3\") " pod="openstack/placement-db-sync-2bmkk" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.760028 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eff0282f-3775-4904-8937-c9e16749e3e8-log-httpd\") pod \"ceilometer-0\" (UID: \"eff0282f-3775-4904-8937-c9e16749e3e8\") " pod="openstack/ceilometer-0" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.766911 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/027edf40-c863-46a8-8950-322f56db87d3-scripts\") pod \"placement-db-sync-2bmkk\" (UID: \"027edf40-c863-46a8-8950-322f56db87d3\") " pod="openstack/placement-db-sync-2bmkk" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.770107 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/027edf40-c863-46a8-8950-322f56db87d3-logs\") pod \"placement-db-sync-2bmkk\" (UID: \"027edf40-c863-46a8-8950-322f56db87d3\") " pod="openstack/placement-db-sync-2bmkk" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.776316 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/027edf40-c863-46a8-8950-322f56db87d3-config-data\") pod \"placement-db-sync-2bmkk\" (UID: \"027edf40-c863-46a8-8950-322f56db87d3\") " pod="openstack/placement-db-sync-2bmkk" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.777476 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/027edf40-c863-46a8-8950-322f56db87d3-combined-ca-bundle\") pod \"placement-db-sync-2bmkk\" (UID: \"027edf40-c863-46a8-8950-322f56db87d3\") " pod="openstack/placement-db-sync-2bmkk" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.788921 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.819187 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn5fm\" (UniqueName: \"kubernetes.io/projected/027edf40-c863-46a8-8950-322f56db87d3-kube-api-access-cn5fm\") pod \"placement-db-sync-2bmkk\" (UID: \"027edf40-c863-46a8-8950-322f56db87d3\") " 
pod="openstack/placement-db-sync-2bmkk" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.857096 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6917425b-2a6f-47ff-9484-b2d04b9de99f" path="/var/lib/kubelet/pods/6917425b-2a6f-47ff-9484-b2d04b9de99f/volumes" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.862733 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eff0282f-3775-4904-8937-c9e16749e3e8-log-httpd\") pod \"ceilometer-0\" (UID: \"eff0282f-3775-4904-8937-c9e16749e3e8\") " pod="openstack/ceilometer-0" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.862963 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eff0282f-3775-4904-8937-c9e16749e3e8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eff0282f-3775-4904-8937-c9e16749e3e8\") " pod="openstack/ceilometer-0" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.863081 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eff0282f-3775-4904-8937-c9e16749e3e8-run-httpd\") pod \"ceilometer-0\" (UID: \"eff0282f-3775-4904-8937-c9e16749e3e8\") " pod="openstack/ceilometer-0" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.863120 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eff0282f-3775-4904-8937-c9e16749e3e8-scripts\") pod \"ceilometer-0\" (UID: \"eff0282f-3775-4904-8937-c9e16749e3e8\") " pod="openstack/ceilometer-0" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.863408 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff0282f-3775-4904-8937-c9e16749e3e8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eff0282f-3775-4904-8937-c9e16749e3e8\") " pod="openstack/ceilometer-0" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.863431 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eff0282f-3775-4904-8937-c9e16749e3e8-config-data\") pod \"ceilometer-0\" (UID: \"eff0282f-3775-4904-8937-c9e16749e3e8\") " pod="openstack/ceilometer-0" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.863951 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v997w\" (UniqueName: \"kubernetes.io/projected/eff0282f-3775-4904-8937-c9e16749e3e8-kube-api-access-v997w\") pod \"ceilometer-0\" (UID: \"eff0282f-3775-4904-8937-c9e16749e3e8\") " pod="openstack/ceilometer-0" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.864183 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eff0282f-3775-4904-8937-c9e16749e3e8-run-httpd\") pod \"ceilometer-0\" (UID: \"eff0282f-3775-4904-8937-c9e16749e3e8\") " pod="openstack/ceilometer-0" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.865021 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eff0282f-3775-4904-8937-c9e16749e3e8-log-httpd\") pod \"ceilometer-0\" (UID: \"eff0282f-3775-4904-8937-c9e16749e3e8\") " pod="openstack/ceilometer-0" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.869444 4970 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eff0282f-3775-4904-8937-c9e16749e3e8-scripts\") pod \"ceilometer-0\" (UID: \"eff0282f-3775-4904-8937-c9e16749e3e8\") " pod="openstack/ceilometer-0" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.869691 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eff0282f-3775-4904-8937-c9e16749e3e8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"eff0282f-3775-4904-8937-c9e16749e3e8\") " pod="openstack/ceilometer-0" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.871143 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eff0282f-3775-4904-8937-c9e16749e3e8-config-data\") pod \"ceilometer-0\" (UID: \"eff0282f-3775-4904-8937-c9e16749e3e8\") " pod="openstack/ceilometer-0" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.880773 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff0282f-3775-4904-8937-c9e16749e3e8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"eff0282f-3775-4904-8937-c9e16749e3e8\") " pod="openstack/ceilometer-0" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.897611 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v997w\" (UniqueName: \"kubernetes.io/projected/eff0282f-3775-4904-8937-c9e16749e3e8-kube-api-access-v997w\") pod \"ceilometer-0\" (UID: \"eff0282f-3775-4904-8937-c9e16749e3e8\") " pod="openstack/ceilometer-0" Dec 09 12:28:03 crc kubenswrapper[4970]: I1209 12:28:03.911699 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-9txbz"] Dec 09 12:28:04 crc kubenswrapper[4970]: I1209 12:28:04.122844 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2bmkk" Dec 09 12:28:04 crc kubenswrapper[4970]: I1209 12:28:04.144946 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:28:04 crc kubenswrapper[4970]: I1209 12:28:04.274378 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-kg6zl"] Dec 09 12:28:04 crc kubenswrapper[4970]: I1209 12:28:04.308464 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wqsvd"] Dec 09 12:28:04 crc kubenswrapper[4970]: I1209 12:28:04.535441 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-cltzd"] Dec 09 12:28:04 crc kubenswrapper[4970]: W1209 12:28:04.547840 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod222db933_1bf5_4df0_aa84_362453b9ba35.slice/crio-e0489df3f52b0fa5846c43649f2603cfdf37b5e94e03e278d18bd23f2fddff46 WatchSource:0}: Error finding container e0489df3f52b0fa5846c43649f2603cfdf37b5e94e03e278d18bd23f2fddff46: Status 404 returned error can't find the container with id e0489df3f52b0fa5846c43649f2603cfdf37b5e94e03e278d18bd23f2fddff46 Dec 09 12:28:04 crc kubenswrapper[4970]: I1209 12:28:04.700060 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-849mw"] Dec 09 12:28:04 crc kubenswrapper[4970]: W1209 12:28:04.729798 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43ba192b_df87_4028_a33e_4ff96d287644.slice/crio-293d5b11c323d46e73ace4bcb093adbbadd2e987bb6e601b84bdc4cc0abc9cdb WatchSource:0}: Error finding container 293d5b11c323d46e73ace4bcb093adbbadd2e987bb6e601b84bdc4cc0abc9cdb: Status 404 returned error can't find the container with id 293d5b11c323d46e73ace4bcb093adbbadd2e987bb6e601b84bdc4cc0abc9cdb Dec 09 12:28:04 crc kubenswrapper[4970]: I1209 12:28:04.742135 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-9txbz" event={"ID":"b3aff5ba-914e-41db-af59-7568f0e5ff5e","Type":"ContainerStarted","Data":"b58a6a042e156a1a9716aceeacfedfd12e97e96a6b1ab23ae32dc0d364473539"} Dec 09 12:28:04 crc kubenswrapper[4970]: I1209 12:28:04.797856 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"bee29a58-7867-4543-bb4e-c19528625b1a","Type":"ContainerStarted","Data":"2e18bbf48fb978d14ea5fa8084eae9a08e6046c22b17dec0e48b7fad8f6767e2"} Dec 09 12:28:04 crc kubenswrapper[4970]: I1209 12:28:04.810154 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cltzd" event={"ID":"222db933-1bf5-4df0-aa84-362453b9ba35","Type":"ContainerStarted","Data":"e0489df3f52b0fa5846c43649f2603cfdf37b5e94e03e278d18bd23f2fddff46"} Dec 09 12:28:04 crc kubenswrapper[4970]: I1209 12:28:04.816778 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-kg6zl" event={"ID":"3015f85e-5d86-4906-9d9a-8389330bcb82","Type":"ContainerStarted","Data":"527cf2d18f1dc29c2b421dd8e6bb41a0b4464ee71cd0243fab919e7c39452812"} Dec 09 12:28:04 crc kubenswrapper[4970]: I1209 12:28:04.819832 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wqsvd" event={"ID":"6b557cf4-3787-453c-a1f5-813bb5c50a04","Type":"ContainerStarted","Data":"b00a082d468a0aa07f2cc2754a871840254d49090c79577a4697c35ceeba706f"} Dec 09 12:28:04 crc kubenswrapper[4970]: I1209 12:28:04.870641 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-kz24r"] Dec 09 12:28:04 crc kubenswrapper[4970]: I1209 12:28:04.969032 4970 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-rxs6p"] Dec 09 12:28:05 crc kubenswrapper[4970]: W1209 12:28:05.039205 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod973c0567_09c0_4313_8c9f_ee74a3188226.slice/crio-76cfc6b6de451104fa4881543e05f4e18f2a7d116bb09a4f2f53dfa30a4ca80d WatchSource:0}: Error finding container 76cfc6b6de451104fa4881543e05f4e18f2a7d116bb09a4f2f53dfa30a4ca80d: Status 404 returned error can't find the container with id 76cfc6b6de451104fa4881543e05f4e18f2a7d116bb09a4f2f53dfa30a4ca80d Dec 09 12:28:05 crc kubenswrapper[4970]: W1209 12:28:05.152392 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod027edf40_c863_46a8_8950_322f56db87d3.slice/crio-14bedf93f7d398c6608b9f6cf8fee4fa34ac5ed4733b2a9efae9bf754dc57911 WatchSource:0}: Error finding container 14bedf93f7d398c6608b9f6cf8fee4fa34ac5ed4733b2a9efae9bf754dc57911: Status 404 returned error can't find the container with id 14bedf93f7d398c6608b9f6cf8fee4fa34ac5ed4733b2a9efae9bf754dc57911 Dec 09 12:28:05 crc kubenswrapper[4970]: I1209 12:28:05.160579 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-2bmkk"] Dec 09 12:28:05 crc kubenswrapper[4970]: I1209 12:28:05.504368 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:28:05 crc kubenswrapper[4970]: I1209 12:28:05.704609 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:28:05 crc kubenswrapper[4970]: I1209 12:28:05.857537 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eff0282f-3775-4904-8937-c9e16749e3e8","Type":"ContainerStarted","Data":"9c5eacfba49bb35ae3bc5c7ff52f795d0eeaea30c039f0d1dce2a6974fcbdd2e"} Dec 09 12:28:05 crc kubenswrapper[4970]: I1209 12:28:05.881489 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rxs6p" event={"ID":"973c0567-09c0-4313-8c9f-ee74a3188226","Type":"ContainerStarted","Data":"76cfc6b6de451104fa4881543e05f4e18f2a7d116bb09a4f2f53dfa30a4ca80d"} Dec 09 12:28:05 crc kubenswrapper[4970]: I1209 12:28:05.889571 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-kz24r" event={"ID":"47def595-51d1-4f88-9536-cf41061aae5e","Type":"ContainerStarted","Data":"e2565474ebf8e3feab828577b8976a78cc1b4751a54cdb236a1f3eb142886221"} Dec 09 12:28:05 crc kubenswrapper[4970]: I1209 12:28:05.914622 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-849mw" event={"ID":"43ba192b-df87-4028-a33e-4ff96d287644","Type":"ContainerStarted","Data":"293d5b11c323d46e73ace4bcb093adbbadd2e987bb6e601b84bdc4cc0abc9cdb"} Dec 09 12:28:05 crc kubenswrapper[4970]: I1209 12:28:05.923319 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2bmkk" event={"ID":"027edf40-c863-46a8-8950-322f56db87d3","Type":"ContainerStarted","Data":"14bedf93f7d398c6608b9f6cf8fee4fa34ac5ed4733b2a9efae9bf754dc57911"} Dec 09 12:28:06 crc kubenswrapper[4970]: I1209 12:28:06.949329 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"bee29a58-7867-4543-bb4e-c19528625b1a","Type":"ContainerStarted","Data":"d913b0aa50f7f008fec642bed7ddd3ff880761acc5d395ad8f410ac23f91b423"} Dec 09 12:28:06 crc kubenswrapper[4970]: I1209 12:28:06.949910 4970 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/swift-storage-0" event={"ID":"bee29a58-7867-4543-bb4e-c19528625b1a","Type":"ContainerStarted","Data":"0aa52c26c1b5a7a479cc49daabcde52694776519c83de1f051642829e5238acb"} Dec 09 12:28:06 crc kubenswrapper[4970]: I1209 12:28:06.949929 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"bee29a58-7867-4543-bb4e-c19528625b1a","Type":"ContainerStarted","Data":"b03ef681096f86c0c13565e5c19c031a468f2c9f3afb89d1f5709a11ba2d6336"} Dec 09 12:28:06 crc kubenswrapper[4970]: I1209 12:28:06.953611 4970 generic.go:334] "Generic (PLEG): container finished" podID="47def595-51d1-4f88-9536-cf41061aae5e" containerID="4eb2a31f2ab264d11c7652c24fb0e26b8f11f3c5840046934ca5cc691649193a" exitCode=0 Dec 09 12:28:06 crc kubenswrapper[4970]: I1209 12:28:06.953669 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-kz24r" event={"ID":"47def595-51d1-4f88-9536-cf41061aae5e","Type":"ContainerDied","Data":"4eb2a31f2ab264d11c7652c24fb0e26b8f11f3c5840046934ca5cc691649193a"} Dec 09 12:28:06 crc kubenswrapper[4970]: I1209 12:28:06.960278 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cltzd" event={"ID":"222db933-1bf5-4df0-aa84-362453b9ba35","Type":"ContainerStarted","Data":"9c1cf379934d31e88480a5ea3214360b781f8a8077693ee79346b6b851871483"} Dec 09 12:28:06 crc kubenswrapper[4970]: I1209 12:28:06.969789 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wqsvd" event={"ID":"6b557cf4-3787-453c-a1f5-813bb5c50a04","Type":"ContainerStarted","Data":"c026a0b15e75aaa504d98014131fedcc52b716b97ecc6a3a92e86fb741eed643"} Dec 09 12:28:06 crc kubenswrapper[4970]: I1209 12:28:06.976955 4970 generic.go:334] "Generic (PLEG): container finished" podID="3e993a7f-0aee-41c5-adb3-a3becd49066f" containerID="ec0b1ac6d87dc8b7048bbb5706bc40af14a490af4384ad21eeca8bc4a791dcd8" exitCode=0 Dec 09 12:28:06 crc kubenswrapper[4970]: I1209 12:28:06.977076 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3e993a7f-0aee-41c5-adb3-a3becd49066f","Type":"ContainerDied","Data":"ec0b1ac6d87dc8b7048bbb5706bc40af14a490af4384ad21eeca8bc4a791dcd8"} Dec 09 12:28:06 crc kubenswrapper[4970]: I1209 12:28:06.996653 4970 generic.go:334] "Generic (PLEG): container finished" podID="b3aff5ba-914e-41db-af59-7568f0e5ff5e" containerID="2bd3bf56e46e8f06f46621cb01d87f18f737a35abae7b03056aaf3ab7d58aff4" exitCode=0 Dec 09 12:28:06 crc kubenswrapper[4970]: I1209 12:28:06.996703 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-9txbz" event={"ID":"b3aff5ba-914e-41db-af59-7568f0e5ff5e","Type":"ContainerDied","Data":"2bd3bf56e46e8f06f46621cb01d87f18f737a35abae7b03056aaf3ab7d58aff4"} Dec 09 12:28:07 crc kubenswrapper[4970]: I1209 12:28:07.026996 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-wqsvd" podStartSLOduration=5.026939286 podStartE2EDuration="5.026939286s" podCreationTimestamp="2025-12-09 12:28:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:28:07.003146752 +0000 UTC m=+1299.563627803" watchObservedRunningTime="2025-12-09 12:28:07.026939286 +0000 UTC m=+1299.587420357" Dec 09 12:28:07 crc kubenswrapper[4970]: I1209 12:28:07.056134 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/neutron-db-sync-cltzd" podStartSLOduration=5.056108586 podStartE2EDuration="5.056108586s" podCreationTimestamp="2025-12-09 12:28:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:28:07.024426018 +0000 UTC m=+1299.584907069" watchObservedRunningTime="2025-12-09 12:28:07.056108586 +0000 UTC m=+1299.616589637" Dec 09 12:28:07 crc kubenswrapper[4970]: I1209 12:28:07.392765 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-9txbz" Dec 09 12:28:07 crc kubenswrapper[4970]: I1209 12:28:07.541633 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sg4tm\" (UniqueName: \"kubernetes.io/projected/b3aff5ba-914e-41db-af59-7568f0e5ff5e-kube-api-access-sg4tm\") pod \"b3aff5ba-914e-41db-af59-7568f0e5ff5e\" (UID: \"b3aff5ba-914e-41db-af59-7568f0e5ff5e\") " Dec 09 12:28:07 crc kubenswrapper[4970]: I1209 12:28:07.541753 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3aff5ba-914e-41db-af59-7568f0e5ff5e-dns-svc\") pod \"b3aff5ba-914e-41db-af59-7568f0e5ff5e\" (UID: \"b3aff5ba-914e-41db-af59-7568f0e5ff5e\") " Dec 09 12:28:07 crc kubenswrapper[4970]: I1209 12:28:07.541783 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3aff5ba-914e-41db-af59-7568f0e5ff5e-ovsdbserver-sb\") pod \"b3aff5ba-914e-41db-af59-7568f0e5ff5e\" (UID: \"b3aff5ba-914e-41db-af59-7568f0e5ff5e\") " Dec 09 12:28:07 crc kubenswrapper[4970]: I1209 12:28:07.541821 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3aff5ba-914e-41db-af59-7568f0e5ff5e-ovsdbserver-nb\") pod \"b3aff5ba-914e-41db-af59-7568f0e5ff5e\" (UID: \"b3aff5ba-914e-41db-af59-7568f0e5ff5e\") " Dec 09 12:28:07 crc kubenswrapper[4970]: I1209 12:28:07.542034 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3aff5ba-914e-41db-af59-7568f0e5ff5e-config\") pod \"b3aff5ba-914e-41db-af59-7568f0e5ff5e\" (UID: \"b3aff5ba-914e-41db-af59-7568f0e5ff5e\") " Dec 09 12:28:07 crc kubenswrapper[4970]: I1209 12:28:07.546093 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3aff5ba-914e-41db-af59-7568f0e5ff5e-kube-api-access-sg4tm" (OuterVolumeSpecName: "kube-api-access-sg4tm") pod "b3aff5ba-914e-41db-af59-7568f0e5ff5e" (UID: "b3aff5ba-914e-41db-af59-7568f0e5ff5e"). InnerVolumeSpecName "kube-api-access-sg4tm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:28:07 crc kubenswrapper[4970]: I1209 12:28:07.568888 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3aff5ba-914e-41db-af59-7568f0e5ff5e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b3aff5ba-914e-41db-af59-7568f0e5ff5e" (UID: "b3aff5ba-914e-41db-af59-7568f0e5ff5e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:28:07 crc kubenswrapper[4970]: I1209 12:28:07.574963 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3aff5ba-914e-41db-af59-7568f0e5ff5e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b3aff5ba-914e-41db-af59-7568f0e5ff5e" (UID: "b3aff5ba-914e-41db-af59-7568f0e5ff5e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:28:07 crc kubenswrapper[4970]: I1209 12:28:07.584053 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3aff5ba-914e-41db-af59-7568f0e5ff5e-config" (OuterVolumeSpecName: "config") pod "b3aff5ba-914e-41db-af59-7568f0e5ff5e" (UID: "b3aff5ba-914e-41db-af59-7568f0e5ff5e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:28:07 crc kubenswrapper[4970]: I1209 12:28:07.585134 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3aff5ba-914e-41db-af59-7568f0e5ff5e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b3aff5ba-914e-41db-af59-7568f0e5ff5e" (UID: "b3aff5ba-914e-41db-af59-7568f0e5ff5e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:28:07 crc kubenswrapper[4970]: I1209 12:28:07.645402 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3aff5ba-914e-41db-af59-7568f0e5ff5e-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:07 crc kubenswrapper[4970]: I1209 12:28:07.645443 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sg4tm\" (UniqueName: \"kubernetes.io/projected/b3aff5ba-914e-41db-af59-7568f0e5ff5e-kube-api-access-sg4tm\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:07 crc kubenswrapper[4970]: I1209 12:28:07.645457 4970 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3aff5ba-914e-41db-af59-7568f0e5ff5e-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:07 crc kubenswrapper[4970]: I1209 12:28:07.645469 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3aff5ba-914e-41db-af59-7568f0e5ff5e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:07 crc kubenswrapper[4970]: I1209 12:28:07.645481 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3aff5ba-914e-41db-af59-7568f0e5ff5e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:08 crc kubenswrapper[4970]: I1209 12:28:08.044156 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3e993a7f-0aee-41c5-adb3-a3becd49066f","Type":"ContainerStarted","Data":"d58ec3079e28e9885c2db163d1f73aa8fc8684121c388dcc949f5db8549dc789"} Dec 09 12:28:08 crc kubenswrapper[4970]: I1209 12:28:08.046305 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-9txbz" event={"ID":"b3aff5ba-914e-41db-af59-7568f0e5ff5e","Type":"ContainerDied","Data":"b58a6a042e156a1a9716aceeacfedfd12e97e96a6b1ab23ae32dc0d364473539"} Dec 09 12:28:08 crc kubenswrapper[4970]: I1209 12:28:08.046354 4970 scope.go:117] "RemoveContainer" containerID="2bd3bf56e46e8f06f46621cb01d87f18f737a35abae7b03056aaf3ab7d58aff4" Dec 09 12:28:08 crc kubenswrapper[4970]: I1209 12:28:08.046572 4970 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-9txbz" Dec 09 12:28:08 crc kubenswrapper[4970]: I1209 12:28:08.099141 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"bee29a58-7867-4543-bb4e-c19528625b1a","Type":"ContainerStarted","Data":"43bf1b265367275061f520f65b882297b568004854ee01f5be4be0574e03c5b3"} Dec 09 12:28:08 crc kubenswrapper[4970]: I1209 12:28:08.108352 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-9txbz"] Dec 09 12:28:08 crc kubenswrapper[4970]: I1209 12:28:08.115972 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-kz24r" event={"ID":"47def595-51d1-4f88-9536-cf41061aae5e","Type":"ContainerStarted","Data":"b3eeb98bab2378d84f23419dec13d8349f865c9033ba32c07d75dc9e81a86a54"} Dec 09 12:28:08 crc kubenswrapper[4970]: I1209 12:28:08.116014 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68dcc9cf6f-kz24r" Dec 09 12:28:08 crc kubenswrapper[4970]: I1209 12:28:08.122331 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-9txbz"] Dec 09 12:28:08 crc kubenswrapper[4970]: I1209 12:28:08.157800 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=43.779936811 podStartE2EDuration="50.157762179s" podCreationTimestamp="2025-12-09 12:27:18 +0000 UTC" firstStartedPulling="2025-12-09 12:27:56.215422411 +0000 UTC m=+1288.775903462" lastFinishedPulling="2025-12-09 12:28:02.593247779 +0000 UTC m=+1295.153728830" observedRunningTime="2025-12-09 12:28:08.147702366 +0000 UTC m=+1300.708183417" watchObservedRunningTime="2025-12-09 12:28:08.157762179 +0000 UTC m=+1300.718243230" Dec 09 12:28:08 crc kubenswrapper[4970]: I1209 12:28:08.181175 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68dcc9cf6f-kz24r" podStartSLOduration=5.181146802 podStartE2EDuration="5.181146802s" podCreationTimestamp="2025-12-09 12:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:28:08.167232455 +0000 UTC m=+1300.727713516" watchObservedRunningTime="2025-12-09 12:28:08.181146802 +0000 UTC m=+1300.741627853" Dec 09 12:28:08 crc kubenswrapper[4970]: I1209 12:28:08.459888 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-kz24r"] Dec 09 12:28:08 crc kubenswrapper[4970]: I1209 12:28:08.490499 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-w27wz"] Dec 09 12:28:08 crc kubenswrapper[4970]: E1209 12:28:08.491040 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3aff5ba-914e-41db-af59-7568f0e5ff5e" containerName="init" Dec 09 12:28:08 crc kubenswrapper[4970]: I1209 12:28:08.491060 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3aff5ba-914e-41db-af59-7568f0e5ff5e" containerName="init" Dec 09 12:28:08 crc kubenswrapper[4970]: I1209 12:28:08.491362 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3aff5ba-914e-41db-af59-7568f0e5ff5e" containerName="init" Dec 09 12:28:08 crc kubenswrapper[4970]: I1209 12:28:08.497580 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-w27wz" Dec 09 12:28:08 crc kubenswrapper[4970]: I1209 12:28:08.501079 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Dec 09 12:28:08 crc kubenswrapper[4970]: I1209 12:28:08.503381 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-w27wz"] Dec 09 12:28:08 crc kubenswrapper[4970]: I1209 12:28:08.569841 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dabcded3-fd37-4349-b3dc-98b143771dca-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-w27wz\" (UID: \"dabcded3-fd37-4349-b3dc-98b143771dca\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-w27wz" Dec 09 12:28:08 crc kubenswrapper[4970]: I1209 12:28:08.569978 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dabcded3-fd37-4349-b3dc-98b143771dca-config\") pod \"dnsmasq-dns-58dd9ff6bc-w27wz\" (UID: \"dabcded3-fd37-4349-b3dc-98b143771dca\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-w27wz" Dec 09 12:28:08 crc kubenswrapper[4970]: I1209 12:28:08.570045 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dabcded3-fd37-4349-b3dc-98b143771dca-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-w27wz\" (UID: \"dabcded3-fd37-4349-b3dc-98b143771dca\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-w27wz" Dec 09 12:28:08 crc kubenswrapper[4970]: I1209 12:28:08.570127 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dabcded3-fd37-4349-b3dc-98b143771dca-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-w27wz\" (UID: \"dabcded3-fd37-4349-b3dc-98b143771dca\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-w27wz" Dec 09 12:28:08 crc kubenswrapper[4970]: I1209 12:28:08.570158 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dabcded3-fd37-4349-b3dc-98b143771dca-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-w27wz\" (UID: \"dabcded3-fd37-4349-b3dc-98b143771dca\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-w27wz" Dec 09 12:28:08 crc kubenswrapper[4970]: I1209 12:28:08.570216 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8krxb\" (UniqueName: \"kubernetes.io/projected/dabcded3-fd37-4349-b3dc-98b143771dca-kube-api-access-8krxb\") pod \"dnsmasq-dns-58dd9ff6bc-w27wz\" (UID: \"dabcded3-fd37-4349-b3dc-98b143771dca\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-w27wz" Dec 09 12:28:08 crc kubenswrapper[4970]: I1209 12:28:08.673764 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8krxb\" (UniqueName: \"kubernetes.io/projected/dabcded3-fd37-4349-b3dc-98b143771dca-kube-api-access-8krxb\") pod \"dnsmasq-dns-58dd9ff6bc-w27wz\" (UID: \"dabcded3-fd37-4349-b3dc-98b143771dca\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-w27wz" Dec 09 12:28:08 crc kubenswrapper[4970]: I1209 12:28:08.673811 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dabcded3-fd37-4349-b3dc-98b143771dca-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-w27wz\" (UID: 
\"dabcded3-fd37-4349-b3dc-98b143771dca\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-w27wz" Dec 09 12:28:08 crc kubenswrapper[4970]: I1209 12:28:08.673883 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dabcded3-fd37-4349-b3dc-98b143771dca-config\") pod \"dnsmasq-dns-58dd9ff6bc-w27wz\" (UID: \"dabcded3-fd37-4349-b3dc-98b143771dca\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-w27wz" Dec 09 12:28:08 crc kubenswrapper[4970]: I1209 12:28:08.675088 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dabcded3-fd37-4349-b3dc-98b143771dca-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-w27wz\" (UID: \"dabcded3-fd37-4349-b3dc-98b143771dca\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-w27wz" Dec 09 12:28:08 crc kubenswrapper[4970]: I1209 12:28:08.675180 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dabcded3-fd37-4349-b3dc-98b143771dca-config\") pod \"dnsmasq-dns-58dd9ff6bc-w27wz\" (UID: \"dabcded3-fd37-4349-b3dc-98b143771dca\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-w27wz" Dec 09 12:28:08 crc kubenswrapper[4970]: I1209 12:28:08.675235 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dabcded3-fd37-4349-b3dc-98b143771dca-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-w27wz\" (UID: \"dabcded3-fd37-4349-b3dc-98b143771dca\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-w27wz" Dec 09 12:28:08 crc kubenswrapper[4970]: I1209 12:28:08.675348 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dabcded3-fd37-4349-b3dc-98b143771dca-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-w27wz\" (UID: \"dabcded3-fd37-4349-b3dc-98b143771dca\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-w27wz" Dec 09 12:28:08 crc kubenswrapper[4970]: I1209 12:28:08.675380 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dabcded3-fd37-4349-b3dc-98b143771dca-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-w27wz\" (UID: \"dabcded3-fd37-4349-b3dc-98b143771dca\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-w27wz" Dec 09 12:28:08 crc kubenswrapper[4970]: I1209 12:28:08.676004 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dabcded3-fd37-4349-b3dc-98b143771dca-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-w27wz\" (UID: \"dabcded3-fd37-4349-b3dc-98b143771dca\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-w27wz" Dec 09 12:28:08 crc kubenswrapper[4970]: I1209 12:28:08.676847 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dabcded3-fd37-4349-b3dc-98b143771dca-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-w27wz\" (UID: \"dabcded3-fd37-4349-b3dc-98b143771dca\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-w27wz" Dec 09 12:28:08 crc kubenswrapper[4970]: I1209 12:28:08.680372 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dabcded3-fd37-4349-b3dc-98b143771dca-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-w27wz\" (UID: \"dabcded3-fd37-4349-b3dc-98b143771dca\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-w27wz" Dec 09 12:28:08 crc kubenswrapper[4970]: 
I1209 12:28:08.695705 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8krxb\" (UniqueName: \"kubernetes.io/projected/dabcded3-fd37-4349-b3dc-98b143771dca-kube-api-access-8krxb\") pod \"dnsmasq-dns-58dd9ff6bc-w27wz\" (UID: \"dabcded3-fd37-4349-b3dc-98b143771dca\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-w27wz" Dec 09 12:28:08 crc kubenswrapper[4970]: I1209 12:28:08.843101 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-w27wz" Dec 09 12:28:09 crc kubenswrapper[4970]: I1209 12:28:09.830532 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3aff5ba-914e-41db-af59-7568f0e5ff5e" path="/var/lib/kubelet/pods/b3aff5ba-914e-41db-af59-7568f0e5ff5e/volumes" Dec 09 12:28:10 crc kubenswrapper[4970]: I1209 12:28:10.153745 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68dcc9cf6f-kz24r" podUID="47def595-51d1-4f88-9536-cf41061aae5e" containerName="dnsmasq-dns" containerID="cri-o://b3eeb98bab2378d84f23419dec13d8349f865c9033ba32c07d75dc9e81a86a54" gracePeriod=10 Dec 09 12:28:10 crc kubenswrapper[4970]: I1209 12:28:10.154008 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qvmmx" event={"ID":"de81c9d6-10bd-46d2-ab77-74463359dc5a","Type":"ContainerStarted","Data":"26d3fc06fd718f9f3455f3d045270ec24bfc9e7d2df0d3b08e80c1756fd605d8"} Dec 09 12:28:10 crc kubenswrapper[4970]: I1209 12:28:10.180006 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-qvmmx" podStartSLOduration=3.865133281 podStartE2EDuration="39.179929628s" podCreationTimestamp="2025-12-09 12:27:31 +0000 UTC" firstStartedPulling="2025-12-09 12:27:33.06822533 +0000 UTC m=+1265.628706381" lastFinishedPulling="2025-12-09 12:28:08.383021677 +0000 UTC m=+1300.943502728" observedRunningTime="2025-12-09 12:28:10.1671812 +0000 UTC m=+1302.727662251" watchObservedRunningTime="2025-12-09 12:28:10.179929628 +0000 UTC m=+1302.740410679" Dec 09 12:28:11 crc kubenswrapper[4970]: I1209 12:28:11.166587 4970 generic.go:334] "Generic (PLEG): container finished" podID="47def595-51d1-4f88-9536-cf41061aae5e" containerID="b3eeb98bab2378d84f23419dec13d8349f865c9033ba32c07d75dc9e81a86a54" exitCode=0 Dec 09 12:28:11 crc kubenswrapper[4970]: I1209 12:28:11.166644 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-kz24r" event={"ID":"47def595-51d1-4f88-9536-cf41061aae5e","Type":"ContainerDied","Data":"b3eeb98bab2378d84f23419dec13d8349f865c9033ba32c07d75dc9e81a86a54"} Dec 09 12:28:13 crc kubenswrapper[4970]: I1209 12:28:13.189218 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3e993a7f-0aee-41c5-adb3-a3becd49066f","Type":"ContainerStarted","Data":"38c567f3fbd697100978a5c3b1985863b0ad1732bb5ce588515b8ab8b3a81b11"} Dec 09 12:28:13 crc kubenswrapper[4970]: I1209 12:28:13.676659 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-68dcc9cf6f-kz24r" podUID="47def595-51d1-4f88-9536-cf41061aae5e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.175:5353: connect: connection refused" Dec 09 12:28:14 crc kubenswrapper[4970]: I1209 12:28:14.204625 4970 generic.go:334] "Generic (PLEG): container finished" podID="6b557cf4-3787-453c-a1f5-813bb5c50a04" containerID="c026a0b15e75aaa504d98014131fedcc52b716b97ecc6a3a92e86fb741eed643" exitCode=0 Dec 09 12:28:14 crc 
kubenswrapper[4970]: I1209 12:28:14.204695 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wqsvd" event={"ID":"6b557cf4-3787-453c-a1f5-813bb5c50a04","Type":"ContainerDied","Data":"c026a0b15e75aaa504d98014131fedcc52b716b97ecc6a3a92e86fb741eed643"} Dec 09 12:28:16 crc kubenswrapper[4970]: I1209 12:28:16.010855 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:28:16 crc kubenswrapper[4970]: I1209 12:28:16.011169 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:28:18 crc kubenswrapper[4970]: I1209 12:28:18.663655 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-68dcc9cf6f-kz24r" podUID="47def595-51d1-4f88-9536-cf41061aae5e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.175:5353: connect: connection refused" Dec 09 12:28:18 crc kubenswrapper[4970]: E1209 12:28:18.902230 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Dec 09 12:28:18 crc kubenswrapper[4970]: E1209 12:28:18.902457 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cn5fm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-2bmkk_openstack(027edf40-c863-46a8-8950-322f56db87d3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:28:18 crc kubenswrapper[4970]: E1209 12:28:18.903698 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-2bmkk" podUID="027edf40-c863-46a8-8950-322f56db87d3" Dec 09 12:28:19 crc kubenswrapper[4970]: E1209 12:28:19.256149 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-2bmkk" podUID="027edf40-c863-46a8-8950-322f56db87d3" Dec 09 12:28:26 crc kubenswrapper[4970]: I1209 12:28:26.337487 4970 generic.go:334] "Generic (PLEG): container finished" podID="de81c9d6-10bd-46d2-ab77-74463359dc5a" containerID="26d3fc06fd718f9f3455f3d045270ec24bfc9e7d2df0d3b08e80c1756fd605d8" exitCode=0 Dec 09 12:28:26 crc kubenswrapper[4970]: I1209 12:28:26.337577 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qvmmx" event={"ID":"de81c9d6-10bd-46d2-ab77-74463359dc5a","Type":"ContainerDied","Data":"26d3fc06fd718f9f3455f3d045270ec24bfc9e7d2df0d3b08e80c1756fd605d8"} Dec 09 12:28:28 crc kubenswrapper[4970]: I1209 12:28:28.662685 4970 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openstack/dnsmasq-dns-68dcc9cf6f-kz24r" podUID="47def595-51d1-4f88-9536-cf41061aae5e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.175:5353: i/o timeout" Dec 09 12:28:33 crc kubenswrapper[4970]: I1209 12:28:33.666852 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-68dcc9cf6f-kz24r" podUID="47def595-51d1-4f88-9536-cf41061aae5e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.175:5353: i/o timeout" Dec 09 12:28:33 crc kubenswrapper[4970]: I1209 12:28:33.770315 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wqsvd" Dec 09 12:28:33 crc kubenswrapper[4970]: E1209 12:28:33.794110 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Dec 09 12:28:33 crc kubenswrapper[4970]: E1209 12:28:33.794287 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bwtdz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
cinder-db-sync-849mw_openstack(43ba192b-df87-4028-a33e-4ff96d287644): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:28:33 crc kubenswrapper[4970]: E1209 12:28:33.796336 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-849mw" podUID="43ba192b-df87-4028-a33e-4ff96d287644" Dec 09 12:28:33 crc kubenswrapper[4970]: I1209 12:28:33.876303 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6b557cf4-3787-453c-a1f5-813bb5c50a04-credential-keys\") pod \"6b557cf4-3787-453c-a1f5-813bb5c50a04\" (UID: \"6b557cf4-3787-453c-a1f5-813bb5c50a04\") " Dec 09 12:28:33 crc kubenswrapper[4970]: I1209 12:28:33.876479 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b557cf4-3787-453c-a1f5-813bb5c50a04-combined-ca-bundle\") pod \"6b557cf4-3787-453c-a1f5-813bb5c50a04\" (UID: \"6b557cf4-3787-453c-a1f5-813bb5c50a04\") " Dec 09 12:28:33 crc kubenswrapper[4970]: I1209 12:28:33.876517 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b557cf4-3787-453c-a1f5-813bb5c50a04-config-data\") pod \"6b557cf4-3787-453c-a1f5-813bb5c50a04\" (UID: \"6b557cf4-3787-453c-a1f5-813bb5c50a04\") " Dec 09 12:28:33 crc kubenswrapper[4970]: I1209 12:28:33.876662 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6b557cf4-3787-453c-a1f5-813bb5c50a04-fernet-keys\") pod \"6b557cf4-3787-453c-a1f5-813bb5c50a04\" (UID: \"6b557cf4-3787-453c-a1f5-813bb5c50a04\") " Dec 09 12:28:33 crc kubenswrapper[4970]: I1209 12:28:33.876702 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b557cf4-3787-453c-a1f5-813bb5c50a04-scripts\") pod \"6b557cf4-3787-453c-a1f5-813bb5c50a04\" (UID: \"6b557cf4-3787-453c-a1f5-813bb5c50a04\") " Dec 09 12:28:33 crc kubenswrapper[4970]: I1209 12:28:33.876745 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sxtn\" (UniqueName: \"kubernetes.io/projected/6b557cf4-3787-453c-a1f5-813bb5c50a04-kube-api-access-8sxtn\") pod \"6b557cf4-3787-453c-a1f5-813bb5c50a04\" (UID: \"6b557cf4-3787-453c-a1f5-813bb5c50a04\") " Dec 09 12:28:33 crc kubenswrapper[4970]: I1209 12:28:33.882270 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b557cf4-3787-453c-a1f5-813bb5c50a04-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "6b557cf4-3787-453c-a1f5-813bb5c50a04" (UID: "6b557cf4-3787-453c-a1f5-813bb5c50a04"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:33 crc kubenswrapper[4970]: I1209 12:28:33.883968 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b557cf4-3787-453c-a1f5-813bb5c50a04-scripts" (OuterVolumeSpecName: "scripts") pod "6b557cf4-3787-453c-a1f5-813bb5c50a04" (UID: "6b557cf4-3787-453c-a1f5-813bb5c50a04"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:33 crc kubenswrapper[4970]: I1209 12:28:33.884522 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b557cf4-3787-453c-a1f5-813bb5c50a04-kube-api-access-8sxtn" (OuterVolumeSpecName: "kube-api-access-8sxtn") pod "6b557cf4-3787-453c-a1f5-813bb5c50a04" (UID: "6b557cf4-3787-453c-a1f5-813bb5c50a04"). InnerVolumeSpecName "kube-api-access-8sxtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:28:33 crc kubenswrapper[4970]: I1209 12:28:33.893032 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b557cf4-3787-453c-a1f5-813bb5c50a04-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6b557cf4-3787-453c-a1f5-813bb5c50a04" (UID: "6b557cf4-3787-453c-a1f5-813bb5c50a04"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:33 crc kubenswrapper[4970]: I1209 12:28:33.905681 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b557cf4-3787-453c-a1f5-813bb5c50a04-config-data" (OuterVolumeSpecName: "config-data") pod "6b557cf4-3787-453c-a1f5-813bb5c50a04" (UID: "6b557cf4-3787-453c-a1f5-813bb5c50a04"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:33 crc kubenswrapper[4970]: I1209 12:28:33.927903 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b557cf4-3787-453c-a1f5-813bb5c50a04-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b557cf4-3787-453c-a1f5-813bb5c50a04" (UID: "6b557cf4-3787-453c-a1f5-813bb5c50a04"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:33 crc kubenswrapper[4970]: I1209 12:28:33.979370 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b557cf4-3787-453c-a1f5-813bb5c50a04-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:33 crc kubenswrapper[4970]: I1209 12:28:33.979407 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sxtn\" (UniqueName: \"kubernetes.io/projected/6b557cf4-3787-453c-a1f5-813bb5c50a04-kube-api-access-8sxtn\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:33 crc kubenswrapper[4970]: I1209 12:28:33.979421 4970 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6b557cf4-3787-453c-a1f5-813bb5c50a04-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:33 crc kubenswrapper[4970]: I1209 12:28:33.979432 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b557cf4-3787-453c-a1f5-813bb5c50a04-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:33 crc kubenswrapper[4970]: I1209 12:28:33.979444 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b557cf4-3787-453c-a1f5-813bb5c50a04-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:33 crc kubenswrapper[4970]: I1209 12:28:33.979455 4970 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6b557cf4-3787-453c-a1f5-813bb5c50a04-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:34 crc kubenswrapper[4970]: E1209 12:28:34.017638 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = 
Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Dec 09 12:28:34 crc kubenswrapper[4970]: E1209 12:28:34.017823 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xk7r4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-kg6zl_openstack(3015f85e-5d86-4906-9d9a-8389330bcb82): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:28:34 crc kubenswrapper[4970]: E1209 12:28:34.019186 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-kg6zl" podUID="3015f85e-5d86-4906-9d9a-8389330bcb82" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.064884 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-kz24r" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.073880 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-qvmmx" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.086856 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de81c9d6-10bd-46d2-ab77-74463359dc5a-combined-ca-bundle\") pod \"de81c9d6-10bd-46d2-ab77-74463359dc5a\" (UID: \"de81c9d6-10bd-46d2-ab77-74463359dc5a\") " Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.087055 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47def595-51d1-4f88-9536-cf41061aae5e-config\") pod \"47def595-51d1-4f88-9536-cf41061aae5e\" (UID: \"47def595-51d1-4f88-9536-cf41061aae5e\") " Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.087085 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47def595-51d1-4f88-9536-cf41061aae5e-ovsdbserver-nb\") pod \"47def595-51d1-4f88-9536-cf41061aae5e\" (UID: \"47def595-51d1-4f88-9536-cf41061aae5e\") " Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.087107 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zwkw\" (UniqueName: \"kubernetes.io/projected/de81c9d6-10bd-46d2-ab77-74463359dc5a-kube-api-access-6zwkw\") pod \"de81c9d6-10bd-46d2-ab77-74463359dc5a\" (UID: \"de81c9d6-10bd-46d2-ab77-74463359dc5a\") " Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.087140 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/de81c9d6-10bd-46d2-ab77-74463359dc5a-db-sync-config-data\") pod \"de81c9d6-10bd-46d2-ab77-74463359dc5a\" (UID: \"de81c9d6-10bd-46d2-ab77-74463359dc5a\") " Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.087169 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47def595-51d1-4f88-9536-cf41061aae5e-ovsdbserver-sb\") pod \"47def595-51d1-4f88-9536-cf41061aae5e\" (UID: \"47def595-51d1-4f88-9536-cf41061aae5e\") " Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.087211 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de81c9d6-10bd-46d2-ab77-74463359dc5a-config-data\") pod \"de81c9d6-10bd-46d2-ab77-74463359dc5a\" (UID: \"de81c9d6-10bd-46d2-ab77-74463359dc5a\") " Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.087238 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dngxc\" (UniqueName: \"kubernetes.io/projected/47def595-51d1-4f88-9536-cf41061aae5e-kube-api-access-dngxc\") pod \"47def595-51d1-4f88-9536-cf41061aae5e\" (UID: \"47def595-51d1-4f88-9536-cf41061aae5e\") " Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.087280 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47def595-51d1-4f88-9536-cf41061aae5e-dns-svc\") pod \"47def595-51d1-4f88-9536-cf41061aae5e\" (UID: \"47def595-51d1-4f88-9536-cf41061aae5e\") " Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.092425 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47def595-51d1-4f88-9536-cf41061aae5e-kube-api-access-dngxc" (OuterVolumeSpecName: "kube-api-access-dngxc") pod 
"47def595-51d1-4f88-9536-cf41061aae5e" (UID: "47def595-51d1-4f88-9536-cf41061aae5e"). InnerVolumeSpecName "kube-api-access-dngxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.104632 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de81c9d6-10bd-46d2-ab77-74463359dc5a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "de81c9d6-10bd-46d2-ab77-74463359dc5a" (UID: "de81c9d6-10bd-46d2-ab77-74463359dc5a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.130871 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de81c9d6-10bd-46d2-ab77-74463359dc5a-kube-api-access-6zwkw" (OuterVolumeSpecName: "kube-api-access-6zwkw") pod "de81c9d6-10bd-46d2-ab77-74463359dc5a" (UID: "de81c9d6-10bd-46d2-ab77-74463359dc5a"). InnerVolumeSpecName "kube-api-access-6zwkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.140936 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47def595-51d1-4f88-9536-cf41061aae5e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "47def595-51d1-4f88-9536-cf41061aae5e" (UID: "47def595-51d1-4f88-9536-cf41061aae5e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.151185 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de81c9d6-10bd-46d2-ab77-74463359dc5a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de81c9d6-10bd-46d2-ab77-74463359dc5a" (UID: "de81c9d6-10bd-46d2-ab77-74463359dc5a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.156866 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47def595-51d1-4f88-9536-cf41061aae5e-config" (OuterVolumeSpecName: "config") pod "47def595-51d1-4f88-9536-cf41061aae5e" (UID: "47def595-51d1-4f88-9536-cf41061aae5e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.166367 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47def595-51d1-4f88-9536-cf41061aae5e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "47def595-51d1-4f88-9536-cf41061aae5e" (UID: "47def595-51d1-4f88-9536-cf41061aae5e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.171675 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de81c9d6-10bd-46d2-ab77-74463359dc5a-config-data" (OuterVolumeSpecName: "config-data") pod "de81c9d6-10bd-46d2-ab77-74463359dc5a" (UID: "de81c9d6-10bd-46d2-ab77-74463359dc5a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.178062 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47def595-51d1-4f88-9536-cf41061aae5e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "47def595-51d1-4f88-9536-cf41061aae5e" (UID: "47def595-51d1-4f88-9536-cf41061aae5e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.189073 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de81c9d6-10bd-46d2-ab77-74463359dc5a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.189098 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47def595-51d1-4f88-9536-cf41061aae5e-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.189107 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/47def595-51d1-4f88-9536-cf41061aae5e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.189116 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zwkw\" (UniqueName: \"kubernetes.io/projected/de81c9d6-10bd-46d2-ab77-74463359dc5a-kube-api-access-6zwkw\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.189126 4970 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/de81c9d6-10bd-46d2-ab77-74463359dc5a-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.189134 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/47def595-51d1-4f88-9536-cf41061aae5e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.189142 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de81c9d6-10bd-46d2-ab77-74463359dc5a-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.189151 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dngxc\" (UniqueName: \"kubernetes.io/projected/47def595-51d1-4f88-9536-cf41061aae5e-kube-api-access-dngxc\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.189160 4970 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/47def595-51d1-4f88-9536-cf41061aae5e-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.413208 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-kz24r" event={"ID":"47def595-51d1-4f88-9536-cf41061aae5e","Type":"ContainerDied","Data":"e2565474ebf8e3feab828577b8976a78cc1b4751a54cdb236a1f3eb142886221"} Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.413764 4970 scope.go:117] "RemoveContainer" containerID="b3eeb98bab2378d84f23419dec13d8349f865c9033ba32c07d75dc9e81a86a54" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.413821 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-kz24r" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.417337 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wqsvd" event={"ID":"6b557cf4-3787-453c-a1f5-813bb5c50a04","Type":"ContainerDied","Data":"b00a082d468a0aa07f2cc2754a871840254d49090c79577a4697c35ceeba706f"} Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.417410 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b00a082d468a0aa07f2cc2754a871840254d49090c79577a4697c35ceeba706f" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.417464 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wqsvd" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.422097 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-qvmmx" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.422981 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-qvmmx" event={"ID":"de81c9d6-10bd-46d2-ab77-74463359dc5a","Type":"ContainerDied","Data":"0af3ad122578b7c273bbf9e59d42a2c31ee9e18beec5c76cbb597580b023388d"} Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.423023 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0af3ad122578b7c273bbf9e59d42a2c31ee9e18beec5c76cbb597580b023388d" Dec 09 12:28:34 crc kubenswrapper[4970]: E1209 12:28:34.424406 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-kg6zl" podUID="3015f85e-5d86-4906-9d9a-8389330bcb82" Dec 09 12:28:34 crc kubenswrapper[4970]: E1209 12:28:34.425492 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-849mw" podUID="43ba192b-df87-4028-a33e-4ff96d287644" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.491144 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-kz24r"] Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.502222 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-kz24r"] Dec 09 12:28:34 crc kubenswrapper[4970]: E1209 12:28:34.573158 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Dec 09 12:28:34 crc kubenswrapper[4970]: E1209 12:28:34.573352 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bsh4m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-rxs6p_openstack(973c0567-09c0-4313-8c9f-ee74a3188226): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:28:34 crc kubenswrapper[4970]: E1209 12:28:34.575366 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-rxs6p" podUID="973c0567-09c0-4313-8c9f-ee74a3188226" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.629783 4970 scope.go:117] "RemoveContainer" containerID="4eb2a31f2ab264d11c7652c24fb0e26b8f11f3c5840046934ca5cc691649193a" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.874457 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-wqsvd"] Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.889816 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-wqsvd"] Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.976198 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-4kwdc"] Dec 09 12:28:34 crc kubenswrapper[4970]: E1209 12:28:34.976790 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47def595-51d1-4f88-9536-cf41061aae5e" containerName="dnsmasq-dns" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.976863 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="47def595-51d1-4f88-9536-cf41061aae5e" containerName="dnsmasq-dns" Dec 09 12:28:34 crc kubenswrapper[4970]: E1209 12:28:34.976948 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b557cf4-3787-453c-a1f5-813bb5c50a04" containerName="keystone-bootstrap" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.976998 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b557cf4-3787-453c-a1f5-813bb5c50a04" containerName="keystone-bootstrap" Dec 09 12:28:34 crc kubenswrapper[4970]: E1209 12:28:34.977059 4970 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de81c9d6-10bd-46d2-ab77-74463359dc5a" containerName="glance-db-sync" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.977730 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="de81c9d6-10bd-46d2-ab77-74463359dc5a" containerName="glance-db-sync" Dec 09 12:28:34 crc kubenswrapper[4970]: E1209 12:28:34.977823 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47def595-51d1-4f88-9536-cf41061aae5e" containerName="init" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.977875 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="47def595-51d1-4f88-9536-cf41061aae5e" containerName="init" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.978195 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="47def595-51d1-4f88-9536-cf41061aae5e" containerName="dnsmasq-dns" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.978286 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b557cf4-3787-453c-a1f5-813bb5c50a04" containerName="keystone-bootstrap" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.978382 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="de81c9d6-10bd-46d2-ab77-74463359dc5a" containerName="glance-db-sync" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.982000 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4kwdc" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.987426 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4kwdc"] Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.989054 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.989176 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.989237 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.991198 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 09 12:28:34 crc kubenswrapper[4970]: I1209 12:28:34.991654 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-m6jjm" Dec 09 12:28:35 crc kubenswrapper[4970]: I1209 12:28:35.010330 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/41a40ba0-3b8c-4068-9c2c-1e07dfba18a4-credential-keys\") pod \"keystone-bootstrap-4kwdc\" (UID: \"41a40ba0-3b8c-4068-9c2c-1e07dfba18a4\") " pod="openstack/keystone-bootstrap-4kwdc" Dec 09 12:28:35 crc kubenswrapper[4970]: I1209 12:28:35.010409 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41a40ba0-3b8c-4068-9c2c-1e07dfba18a4-scripts\") pod \"keystone-bootstrap-4kwdc\" (UID: \"41a40ba0-3b8c-4068-9c2c-1e07dfba18a4\") " pod="openstack/keystone-bootstrap-4kwdc" Dec 09 12:28:35 crc kubenswrapper[4970]: I1209 12:28:35.010537 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfmmz\" (UniqueName: \"kubernetes.io/projected/41a40ba0-3b8c-4068-9c2c-1e07dfba18a4-kube-api-access-rfmmz\") pod 
\"keystone-bootstrap-4kwdc\" (UID: \"41a40ba0-3b8c-4068-9c2c-1e07dfba18a4\") " pod="openstack/keystone-bootstrap-4kwdc" Dec 09 12:28:35 crc kubenswrapper[4970]: I1209 12:28:35.010686 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41a40ba0-3b8c-4068-9c2c-1e07dfba18a4-config-data\") pod \"keystone-bootstrap-4kwdc\" (UID: \"41a40ba0-3b8c-4068-9c2c-1e07dfba18a4\") " pod="openstack/keystone-bootstrap-4kwdc" Dec 09 12:28:35 crc kubenswrapper[4970]: I1209 12:28:35.010730 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41a40ba0-3b8c-4068-9c2c-1e07dfba18a4-combined-ca-bundle\") pod \"keystone-bootstrap-4kwdc\" (UID: \"41a40ba0-3b8c-4068-9c2c-1e07dfba18a4\") " pod="openstack/keystone-bootstrap-4kwdc" Dec 09 12:28:35 crc kubenswrapper[4970]: I1209 12:28:35.010792 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/41a40ba0-3b8c-4068-9c2c-1e07dfba18a4-fernet-keys\") pod \"keystone-bootstrap-4kwdc\" (UID: \"41a40ba0-3b8c-4068-9c2c-1e07dfba18a4\") " pod="openstack/keystone-bootstrap-4kwdc" Dec 09 12:28:35 crc kubenswrapper[4970]: I1209 12:28:35.113984 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41a40ba0-3b8c-4068-9c2c-1e07dfba18a4-combined-ca-bundle\") pod \"keystone-bootstrap-4kwdc\" (UID: \"41a40ba0-3b8c-4068-9c2c-1e07dfba18a4\") " pod="openstack/keystone-bootstrap-4kwdc" Dec 09 12:28:35 crc kubenswrapper[4970]: I1209 12:28:35.114067 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/41a40ba0-3b8c-4068-9c2c-1e07dfba18a4-fernet-keys\") pod \"keystone-bootstrap-4kwdc\" (UID: \"41a40ba0-3b8c-4068-9c2c-1e07dfba18a4\") " pod="openstack/keystone-bootstrap-4kwdc" Dec 09 12:28:35 crc kubenswrapper[4970]: I1209 12:28:35.114201 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/41a40ba0-3b8c-4068-9c2c-1e07dfba18a4-credential-keys\") pod \"keystone-bootstrap-4kwdc\" (UID: \"41a40ba0-3b8c-4068-9c2c-1e07dfba18a4\") " pod="openstack/keystone-bootstrap-4kwdc" Dec 09 12:28:35 crc kubenswrapper[4970]: I1209 12:28:35.114308 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41a40ba0-3b8c-4068-9c2c-1e07dfba18a4-scripts\") pod \"keystone-bootstrap-4kwdc\" (UID: \"41a40ba0-3b8c-4068-9c2c-1e07dfba18a4\") " pod="openstack/keystone-bootstrap-4kwdc" Dec 09 12:28:35 crc kubenswrapper[4970]: I1209 12:28:35.114382 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfmmz\" (UniqueName: \"kubernetes.io/projected/41a40ba0-3b8c-4068-9c2c-1e07dfba18a4-kube-api-access-rfmmz\") pod \"keystone-bootstrap-4kwdc\" (UID: \"41a40ba0-3b8c-4068-9c2c-1e07dfba18a4\") " pod="openstack/keystone-bootstrap-4kwdc" Dec 09 12:28:35 crc kubenswrapper[4970]: I1209 12:28:35.114476 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41a40ba0-3b8c-4068-9c2c-1e07dfba18a4-config-data\") pod \"keystone-bootstrap-4kwdc\" (UID: \"41a40ba0-3b8c-4068-9c2c-1e07dfba18a4\") " 
pod="openstack/keystone-bootstrap-4kwdc" Dec 09 12:28:35 crc kubenswrapper[4970]: I1209 12:28:35.120646 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41a40ba0-3b8c-4068-9c2c-1e07dfba18a4-combined-ca-bundle\") pod \"keystone-bootstrap-4kwdc\" (UID: \"41a40ba0-3b8c-4068-9c2c-1e07dfba18a4\") " pod="openstack/keystone-bootstrap-4kwdc" Dec 09 12:28:35 crc kubenswrapper[4970]: I1209 12:28:35.121817 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/41a40ba0-3b8c-4068-9c2c-1e07dfba18a4-credential-keys\") pod \"keystone-bootstrap-4kwdc\" (UID: \"41a40ba0-3b8c-4068-9c2c-1e07dfba18a4\") " pod="openstack/keystone-bootstrap-4kwdc" Dec 09 12:28:35 crc kubenswrapper[4970]: I1209 12:28:35.121866 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41a40ba0-3b8c-4068-9c2c-1e07dfba18a4-config-data\") pod \"keystone-bootstrap-4kwdc\" (UID: \"41a40ba0-3b8c-4068-9c2c-1e07dfba18a4\") " pod="openstack/keystone-bootstrap-4kwdc" Dec 09 12:28:35 crc kubenswrapper[4970]: I1209 12:28:35.123892 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41a40ba0-3b8c-4068-9c2c-1e07dfba18a4-scripts\") pod \"keystone-bootstrap-4kwdc\" (UID: \"41a40ba0-3b8c-4068-9c2c-1e07dfba18a4\") " pod="openstack/keystone-bootstrap-4kwdc" Dec 09 12:28:35 crc kubenswrapper[4970]: I1209 12:28:35.125596 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/41a40ba0-3b8c-4068-9c2c-1e07dfba18a4-fernet-keys\") pod \"keystone-bootstrap-4kwdc\" (UID: \"41a40ba0-3b8c-4068-9c2c-1e07dfba18a4\") " pod="openstack/keystone-bootstrap-4kwdc" Dec 09 12:28:35 crc kubenswrapper[4970]: I1209 12:28:35.137608 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-w27wz"] Dec 09 12:28:35 crc kubenswrapper[4970]: I1209 12:28:35.143793 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfmmz\" (UniqueName: \"kubernetes.io/projected/41a40ba0-3b8c-4068-9c2c-1e07dfba18a4-kube-api-access-rfmmz\") pod \"keystone-bootstrap-4kwdc\" (UID: \"41a40ba0-3b8c-4068-9c2c-1e07dfba18a4\") " pod="openstack/keystone-bootstrap-4kwdc" Dec 09 12:28:35 crc kubenswrapper[4970]: W1209 12:28:35.159608 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddabcded3_fd37_4349_b3dc_98b143771dca.slice/crio-40b39b9ab605363c78ef00a56e235b86be6e551a6802b4ca4e4fe8f9a3a48988 WatchSource:0}: Error finding container 40b39b9ab605363c78ef00a56e235b86be6e551a6802b4ca4e4fe8f9a3a48988: Status 404 returned error can't find the container with id 40b39b9ab605363c78ef00a56e235b86be6e551a6802b4ca4e4fe8f9a3a48988 Dec 09 12:28:35 crc kubenswrapper[4970]: I1209 12:28:35.312366 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4kwdc" Dec 09 12:28:35 crc kubenswrapper[4970]: I1209 12:28:35.502533 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3e993a7f-0aee-41c5-adb3-a3becd49066f","Type":"ContainerStarted","Data":"c1a5d55e29026233e9f5bdc39159c5efb60905f63739c782479ed47cf2a85488"} Dec 09 12:28:35 crc kubenswrapper[4970]: I1209 12:28:35.539781 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-w27wz" event={"ID":"dabcded3-fd37-4349-b3dc-98b143771dca","Type":"ContainerStarted","Data":"40b39b9ab605363c78ef00a56e235b86be6e551a6802b4ca4e4fe8f9a3a48988"} Dec 09 12:28:35 crc kubenswrapper[4970]: I1209 12:28:35.567895 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eff0282f-3775-4904-8937-c9e16749e3e8","Type":"ContainerStarted","Data":"154d1d716ce53b88f710b17a406709b049c15599df7a692dd117a408b3606212"} Dec 09 12:28:35 crc kubenswrapper[4970]: I1209 12:28:35.616091 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=39.616062989 podStartE2EDuration="39.616062989s" podCreationTimestamp="2025-12-09 12:27:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:28:35.569130854 +0000 UTC m=+1328.129611915" watchObservedRunningTime="2025-12-09 12:28:35.616062989 +0000 UTC m=+1328.176544060" Dec 09 12:28:35 crc kubenswrapper[4970]: I1209 12:28:35.632643 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2bmkk" event={"ID":"027edf40-c863-46a8-8950-322f56db87d3","Type":"ContainerStarted","Data":"b71341b506c6fde3668d225099185895891e9697836992f67b55dd78d5cbf49b"} Dec 09 12:28:35 crc kubenswrapper[4970]: E1209 12:28:35.682063 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-rxs6p" podUID="973c0567-09c0-4313-8c9f-ee74a3188226" Dec 09 12:28:35 crc kubenswrapper[4970]: I1209 12:28:35.775757 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-2bmkk" podStartSLOduration=3.23461743 podStartE2EDuration="32.775734854s" podCreationTimestamp="2025-12-09 12:28:03 +0000 UTC" firstStartedPulling="2025-12-09 12:28:05.161921587 +0000 UTC m=+1297.722402638" lastFinishedPulling="2025-12-09 12:28:34.703039011 +0000 UTC m=+1327.263520062" observedRunningTime="2025-12-09 12:28:35.683809906 +0000 UTC m=+1328.244290957" watchObservedRunningTime="2025-12-09 12:28:35.775734854 +0000 UTC m=+1328.336215905" Dec 09 12:28:35 crc kubenswrapper[4970]: I1209 12:28:35.848728 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47def595-51d1-4f88-9536-cf41061aae5e" path="/var/lib/kubelet/pods/47def595-51d1-4f88-9536-cf41061aae5e/volumes" Dec 09 12:28:35 crc kubenswrapper[4970]: I1209 12:28:35.855373 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b557cf4-3787-453c-a1f5-813bb5c50a04" path="/var/lib/kubelet/pods/6b557cf4-3787-453c-a1f5-813bb5c50a04/volumes" Dec 09 12:28:35 crc kubenswrapper[4970]: I1209 12:28:35.873310 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-58dd9ff6bc-w27wz"] Dec 09 12:28:35 crc kubenswrapper[4970]: I1209 12:28:35.939376 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-gzhss"] Dec 09 12:28:35 crc kubenswrapper[4970]: I1209 12:28:35.941733 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-gzhss" Dec 09 12:28:35 crc kubenswrapper[4970]: I1209 12:28:35.999929 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-gzhss"] Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.103559 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0b1f388d-9583-482f-9089-714ba98cafff-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-gzhss\" (UID: \"0b1f388d-9583-482f-9089-714ba98cafff\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gzhss" Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.103661 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b1f388d-9583-482f-9089-714ba98cafff-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-gzhss\" (UID: \"0b1f388d-9583-482f-9089-714ba98cafff\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gzhss" Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.103687 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0b1f388d-9583-482f-9089-714ba98cafff-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-gzhss\" (UID: \"0b1f388d-9583-482f-9089-714ba98cafff\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gzhss" Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.103775 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcxpj\" (UniqueName: \"kubernetes.io/projected/0b1f388d-9583-482f-9089-714ba98cafff-kube-api-access-vcxpj\") pod \"dnsmasq-dns-785d8bcb8c-gzhss\" (UID: \"0b1f388d-9583-482f-9089-714ba98cafff\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gzhss" Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.103808 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b1f388d-9583-482f-9089-714ba98cafff-config\") pod \"dnsmasq-dns-785d8bcb8c-gzhss\" (UID: \"0b1f388d-9583-482f-9089-714ba98cafff\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gzhss" Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.103909 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0b1f388d-9583-482f-9089-714ba98cafff-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-gzhss\" (UID: \"0b1f388d-9583-482f-9089-714ba98cafff\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gzhss" Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.205745 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0b1f388d-9583-482f-9089-714ba98cafff-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-gzhss\" (UID: \"0b1f388d-9583-482f-9089-714ba98cafff\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gzhss" Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.205834 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/0b1f388d-9583-482f-9089-714ba98cafff-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-gzhss\" (UID: \"0b1f388d-9583-482f-9089-714ba98cafff\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gzhss" Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.205858 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0b1f388d-9583-482f-9089-714ba98cafff-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-gzhss\" (UID: \"0b1f388d-9583-482f-9089-714ba98cafff\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gzhss" Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.205947 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcxpj\" (UniqueName: \"kubernetes.io/projected/0b1f388d-9583-482f-9089-714ba98cafff-kube-api-access-vcxpj\") pod \"dnsmasq-dns-785d8bcb8c-gzhss\" (UID: \"0b1f388d-9583-482f-9089-714ba98cafff\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gzhss" Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.206000 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b1f388d-9583-482f-9089-714ba98cafff-config\") pod \"dnsmasq-dns-785d8bcb8c-gzhss\" (UID: \"0b1f388d-9583-482f-9089-714ba98cafff\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gzhss" Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.206099 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0b1f388d-9583-482f-9089-714ba98cafff-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-gzhss\" (UID: \"0b1f388d-9583-482f-9089-714ba98cafff\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gzhss" Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.207062 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0b1f388d-9583-482f-9089-714ba98cafff-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-gzhss\" (UID: \"0b1f388d-9583-482f-9089-714ba98cafff\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gzhss" Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.207132 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0b1f388d-9583-482f-9089-714ba98cafff-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-gzhss\" (UID: \"0b1f388d-9583-482f-9089-714ba98cafff\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gzhss" Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.207751 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0b1f388d-9583-482f-9089-714ba98cafff-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-gzhss\" (UID: \"0b1f388d-9583-482f-9089-714ba98cafff\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gzhss" Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.208114 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b1f388d-9583-482f-9089-714ba98cafff-config\") pod \"dnsmasq-dns-785d8bcb8c-gzhss\" (UID: \"0b1f388d-9583-482f-9089-714ba98cafff\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gzhss" Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.208191 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b1f388d-9583-482f-9089-714ba98cafff-dns-svc\") pod 
\"dnsmasq-dns-785d8bcb8c-gzhss\" (UID: \"0b1f388d-9583-482f-9089-714ba98cafff\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gzhss" Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.236118 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcxpj\" (UniqueName: \"kubernetes.io/projected/0b1f388d-9583-482f-9089-714ba98cafff-kube-api-access-vcxpj\") pod \"dnsmasq-dns-785d8bcb8c-gzhss\" (UID: \"0b1f388d-9583-482f-9089-714ba98cafff\") " pod="openstack/dnsmasq-dns-785d8bcb8c-gzhss" Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.256957 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4kwdc"] Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.269717 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-gzhss" Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.665359 4970 generic.go:334] "Generic (PLEG): container finished" podID="dabcded3-fd37-4349-b3dc-98b143771dca" containerID="acffd8acfa4445cb6e559c6e9d20e76bb44d712b21f4e2120500781050c0ae9b" exitCode=0 Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.666037 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-w27wz" event={"ID":"dabcded3-fd37-4349-b3dc-98b143771dca","Type":"ContainerDied","Data":"acffd8acfa4445cb6e559c6e9d20e76bb44d712b21f4e2120500781050c0ae9b"} Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.672291 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4kwdc" event={"ID":"41a40ba0-3b8c-4068-9c2c-1e07dfba18a4","Type":"ContainerStarted","Data":"20001436ea007dee95f34238e1d1cdaa96eef1b75c0520dd4a1fb3b08b17a534"} Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.672364 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4kwdc" event={"ID":"41a40ba0-3b8c-4068-9c2c-1e07dfba18a4","Type":"ContainerStarted","Data":"30446614af73be2a40d563d1e655912d4ca82e50311ac75dcfe482fadf05036c"} Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.733836 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.736004 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.737728 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.738447 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-22qbn" Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.740374 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.743074 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-4kwdc" podStartSLOduration=2.74305925 podStartE2EDuration="2.74305925s" podCreationTimestamp="2025-12-09 12:28:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:28:36.705548255 +0000 UTC m=+1329.266029316" watchObservedRunningTime="2025-12-09 12:28:36.74305925 +0000 UTC m=+1329.303540301" Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.771729 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.823852 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-gzhss"] Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.859513 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.862836 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.867191 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.885015 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.926734 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80bba95d-00c0-48b9-aac2-c6fbd912ca8c-scripts\") pod \"glance-default-external-api-0\" (UID: \"80bba95d-00c0-48b9-aac2-c6fbd912ca8c\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.926814 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80bba95d-00c0-48b9-aac2-c6fbd912ca8c-logs\") pod \"glance-default-external-api-0\" (UID: \"80bba95d-00c0-48b9-aac2-c6fbd912ca8c\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.926852 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/80bba95d-00c0-48b9-aac2-c6fbd912ca8c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"80bba95d-00c0-48b9-aac2-c6fbd912ca8c\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.926901 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klf7n\" (UniqueName: 
\"kubernetes.io/projected/80bba95d-00c0-48b9-aac2-c6fbd912ca8c-kube-api-access-klf7n\") pod \"glance-default-external-api-0\" (UID: \"80bba95d-00c0-48b9-aac2-c6fbd912ca8c\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.926922 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80bba95d-00c0-48b9-aac2-c6fbd912ca8c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"80bba95d-00c0-48b9-aac2-c6fbd912ca8c\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.926945 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"80bba95d-00c0-48b9-aac2-c6fbd912ca8c\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.926977 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80bba95d-00c0-48b9-aac2-c6fbd912ca8c-config-data\") pod \"glance-default-external-api-0\" (UID: \"80bba95d-00c0-48b9-aac2-c6fbd912ca8c\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:36 crc kubenswrapper[4970]: I1209 12:28:36.938603 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Dec 09 12:28:37 crc kubenswrapper[4970]: I1209 12:28:37.029074 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80bba95d-00c0-48b9-aac2-c6fbd912ca8c-scripts\") pod \"glance-default-external-api-0\" (UID: \"80bba95d-00c0-48b9-aac2-c6fbd912ca8c\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:37 crc kubenswrapper[4970]: I1209 12:28:37.029168 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80bba95d-00c0-48b9-aac2-c6fbd912ca8c-logs\") pod \"glance-default-external-api-0\" (UID: \"80bba95d-00c0-48b9-aac2-c6fbd912ca8c\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:37 crc kubenswrapper[4970]: I1209 12:28:37.029212 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/80bba95d-00c0-48b9-aac2-c6fbd912ca8c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"80bba95d-00c0-48b9-aac2-c6fbd912ca8c\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:37 crc kubenswrapper[4970]: I1209 12:28:37.029239 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e179577e-4bb5-4486-9c3f-536899514d12-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e179577e-4bb5-4486-9c3f-536899514d12\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:37 crc kubenswrapper[4970]: I1209 12:28:37.029297 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e179577e-4bb5-4486-9c3f-536899514d12-logs\") pod \"glance-default-internal-api-0\" (UID: \"e179577e-4bb5-4486-9c3f-536899514d12\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:37 crc kubenswrapper[4970]: 
I1209 12:28:37.029322 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klf7n\" (UniqueName: \"kubernetes.io/projected/80bba95d-00c0-48b9-aac2-c6fbd912ca8c-kube-api-access-klf7n\") pod \"glance-default-external-api-0\" (UID: \"80bba95d-00c0-48b9-aac2-c6fbd912ca8c\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:28:37 crc kubenswrapper[4970]: I1209 12:28:37.029342 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80bba95d-00c0-48b9-aac2-c6fbd912ca8c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"80bba95d-00c0-48b9-aac2-c6fbd912ca8c\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:28:37 crc kubenswrapper[4970]: I1209 12:28:37.029366 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"80bba95d-00c0-48b9-aac2-c6fbd912ca8c\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:28:37 crc kubenswrapper[4970]: I1209 12:28:37.029388 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcsmf\" (UniqueName: \"kubernetes.io/projected/e179577e-4bb5-4486-9c3f-536899514d12-kube-api-access-fcsmf\") pod \"glance-default-internal-api-0\" (UID: \"e179577e-4bb5-4486-9c3f-536899514d12\") " pod="openstack/glance-default-internal-api-0"
Dec 09 12:28:37 crc kubenswrapper[4970]: I1209 12:28:37.029411 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"e179577e-4bb5-4486-9c3f-536899514d12\") " pod="openstack/glance-default-internal-api-0"
Dec 09 12:28:37 crc kubenswrapper[4970]: I1209 12:28:37.029432 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e179577e-4bb5-4486-9c3f-536899514d12-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e179577e-4bb5-4486-9c3f-536899514d12\") " pod="openstack/glance-default-internal-api-0"
Dec 09 12:28:37 crc kubenswrapper[4970]: I1209 12:28:37.029460 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80bba95d-00c0-48b9-aac2-c6fbd912ca8c-config-data\") pod \"glance-default-external-api-0\" (UID: \"80bba95d-00c0-48b9-aac2-c6fbd912ca8c\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:28:37 crc kubenswrapper[4970]: I1209 12:28:37.029492 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e179577e-4bb5-4486-9c3f-536899514d12-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e179577e-4bb5-4486-9c3f-536899514d12\") " pod="openstack/glance-default-internal-api-0"
Dec 09 12:28:37 crc kubenswrapper[4970]: I1209 12:28:37.029549 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e179577e-4bb5-4486-9c3f-536899514d12-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e179577e-4bb5-4486-9c3f-536899514d12\") " pod="openstack/glance-default-internal-api-0"
Dec 09 12:28:37 crc kubenswrapper[4970]: I1209 12:28:37.033611 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"80bba95d-00c0-48b9-aac2-c6fbd912ca8c\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0"
Dec 09 12:28:37 crc kubenswrapper[4970]: I1209 12:28:37.034662 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80bba95d-00c0-48b9-aac2-c6fbd912ca8c-logs\") pod \"glance-default-external-api-0\" (UID: \"80bba95d-00c0-48b9-aac2-c6fbd912ca8c\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:28:37 crc kubenswrapper[4970]: I1209 12:28:37.038028 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/80bba95d-00c0-48b9-aac2-c6fbd912ca8c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"80bba95d-00c0-48b9-aac2-c6fbd912ca8c\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:28:37 crc kubenswrapper[4970]: I1209 12:28:37.042234 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80bba95d-00c0-48b9-aac2-c6fbd912ca8c-config-data\") pod \"glance-default-external-api-0\" (UID: \"80bba95d-00c0-48b9-aac2-c6fbd912ca8c\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:28:37 crc kubenswrapper[4970]: I1209 12:28:37.042844 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80bba95d-00c0-48b9-aac2-c6fbd912ca8c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"80bba95d-00c0-48b9-aac2-c6fbd912ca8c\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:28:37 crc kubenswrapper[4970]: I1209 12:28:37.043475 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80bba95d-00c0-48b9-aac2-c6fbd912ca8c-scripts\") pod \"glance-default-external-api-0\" (UID: \"80bba95d-00c0-48b9-aac2-c6fbd912ca8c\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:28:37 crc kubenswrapper[4970]: I1209 12:28:37.058686 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klf7n\" (UniqueName: \"kubernetes.io/projected/80bba95d-00c0-48b9-aac2-c6fbd912ca8c-kube-api-access-klf7n\") pod \"glance-default-external-api-0\" (UID: \"80bba95d-00c0-48b9-aac2-c6fbd912ca8c\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:28:37 crc kubenswrapper[4970]: I1209 12:28:37.081035 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"80bba95d-00c0-48b9-aac2-c6fbd912ca8c\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:28:37 crc kubenswrapper[4970]: I1209 12:28:37.136741 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e179577e-4bb5-4486-9c3f-536899514d12-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e179577e-4bb5-4486-9c3f-536899514d12\") " pod="openstack/glance-default-internal-api-0"
Dec 09 12:28:37 crc kubenswrapper[4970]: I1209 12:28:37.136945 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e179577e-4bb5-4486-9c3f-536899514d12-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e179577e-4bb5-4486-9c3f-536899514d12\") " pod="openstack/glance-default-internal-api-0"
Dec 09 12:28:37 crc kubenswrapper[4970]: I1209 12:28:37.136980 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e179577e-4bb5-4486-9c3f-536899514d12-logs\") pod \"glance-default-internal-api-0\" (UID: \"e179577e-4bb5-4486-9c3f-536899514d12\") " pod="openstack/glance-default-internal-api-0"
Dec 09 12:28:37 crc kubenswrapper[4970]: I1209 12:28:37.137038 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcsmf\" (UniqueName: \"kubernetes.io/projected/e179577e-4bb5-4486-9c3f-536899514d12-kube-api-access-fcsmf\") pod \"glance-default-internal-api-0\" (UID: \"e179577e-4bb5-4486-9c3f-536899514d12\") " pod="openstack/glance-default-internal-api-0"
Dec 09 12:28:37 crc kubenswrapper[4970]: I1209 12:28:37.137067 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"e179577e-4bb5-4486-9c3f-536899514d12\") " pod="openstack/glance-default-internal-api-0"
Dec 09 12:28:37 crc kubenswrapper[4970]: I1209 12:28:37.137089 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e179577e-4bb5-4486-9c3f-536899514d12-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e179577e-4bb5-4486-9c3f-536899514d12\") " pod="openstack/glance-default-internal-api-0"
Dec 09 12:28:37 crc kubenswrapper[4970]: I1209 12:28:37.137138 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e179577e-4bb5-4486-9c3f-536899514d12-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e179577e-4bb5-4486-9c3f-536899514d12\") " pod="openstack/glance-default-internal-api-0"
Dec 09 12:28:37 crc kubenswrapper[4970]: I1209 12:28:37.138894 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e179577e-4bb5-4486-9c3f-536899514d12-logs\") pod \"glance-default-internal-api-0\" (UID: \"e179577e-4bb5-4486-9c3f-536899514d12\") " pod="openstack/glance-default-internal-api-0"
Dec 09 12:28:37 crc kubenswrapper[4970]: I1209 12:28:37.138938 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"e179577e-4bb5-4486-9c3f-536899514d12\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0"
Dec 09 12:28:37 crc kubenswrapper[4970]: I1209 12:28:37.139344 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e179577e-4bb5-4486-9c3f-536899514d12-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e179577e-4bb5-4486-9c3f-536899514d12\") " pod="openstack/glance-default-internal-api-0"
Dec 09 12:28:37 crc kubenswrapper[4970]: I1209 12:28:37.141903 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e179577e-4bb5-4486-9c3f-536899514d12-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e179577e-4bb5-4486-9c3f-536899514d12\") " pod="openstack/glance-default-internal-api-0"
Dec 09 12:28:37 crc kubenswrapper[4970]: I1209 12:28:37.142810 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e179577e-4bb5-4486-9c3f-536899514d12-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e179577e-4bb5-4486-9c3f-536899514d12\") " pod="openstack/glance-default-internal-api-0"
Dec 09 12:28:37 crc kubenswrapper[4970]: I1209 12:28:37.144105 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e179577e-4bb5-4486-9c3f-536899514d12-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e179577e-4bb5-4486-9c3f-536899514d12\") " pod="openstack/glance-default-internal-api-0"
Dec 09 12:28:37 crc kubenswrapper[4970]: I1209 12:28:37.157220 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcsmf\" (UniqueName: \"kubernetes.io/projected/e179577e-4bb5-4486-9c3f-536899514d12-kube-api-access-fcsmf\") pod \"glance-default-internal-api-0\" (UID: \"e179577e-4bb5-4486-9c3f-536899514d12\") " pod="openstack/glance-default-internal-api-0"
Dec 09 12:28:37 crc kubenswrapper[4970]: I1209 12:28:37.181625 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"e179577e-4bb5-4486-9c3f-536899514d12\") " pod="openstack/glance-default-internal-api-0"
Dec 09 12:28:37 crc kubenswrapper[4970]: I1209 12:28:37.200074 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 09 12:28:37 crc kubenswrapper[4970]: W1209 12:28:37.271519 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b1f388d_9583_482f_9089_714ba98cafff.slice/crio-9a583ad132173070ceb4f17acb01432700559f75117b82ec07ee2b753ab12fac WatchSource:0}: Error finding container 9a583ad132173070ceb4f17acb01432700559f75117b82ec07ee2b753ab12fac: Status 404 returned error can't find the container with id 9a583ad132173070ceb4f17acb01432700559f75117b82ec07ee2b753ab12fac
Dec 09 12:28:37 crc kubenswrapper[4970]: I1209 12:28:37.363336 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Dec 09 12:28:37 crc kubenswrapper[4970]: I1209 12:28:37.694359 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eff0282f-3775-4904-8937-c9e16749e3e8","Type":"ContainerStarted","Data":"648ba075c3f1667b000af1154a5dbb76a01a623bf4fd6dbdbb88289b5395a6b3"}
Dec 09 12:28:37 crc kubenswrapper[4970]: I1209 12:28:37.697692 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-gzhss" event={"ID":"0b1f388d-9583-482f-9089-714ba98cafff","Type":"ContainerStarted","Data":"9a583ad132173070ceb4f17acb01432700559f75117b82ec07ee2b753ab12fac"}
Dec 09 12:28:37 crc kubenswrapper[4970]: I1209 12:28:37.958863 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 09 12:28:37 crc kubenswrapper[4970]: W1209 12:28:37.961182 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode179577e_4bb5_4486_9c3f_536899514d12.slice/crio-64fac7de2b140ccf6e824238109f842d756b5d6abc1af865f0030a42e1e12385 WatchSource:0}: Error finding container 64fac7de2b140ccf6e824238109f842d756b5d6abc1af865f0030a42e1e12385: Status 404 returned error can't find the container with id 64fac7de2b140ccf6e824238109f842d756b5d6abc1af865f0030a42e1e12385
Dec 09 12:28:38 crc kubenswrapper[4970]: I1209 12:28:38.152307 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 09 12:28:38 crc kubenswrapper[4970]: W1209 12:28:38.193693 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80bba95d_00c0_48b9_aac2_c6fbd912ca8c.slice/crio-10b87e9954813ba006dafc7d946167587c53c4de5a50dbcfbe6058013703f32d WatchSource:0}: Error finding container 10b87e9954813ba006dafc7d946167587c53c4de5a50dbcfbe6058013703f32d: Status 404 returned error can't find the container with id 10b87e9954813ba006dafc7d946167587c53c4de5a50dbcfbe6058013703f32d
Dec 09 12:28:38 crc kubenswrapper[4970]: I1209 12:28:38.374274 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 09 12:28:38 crc kubenswrapper[4970]: I1209 12:28:38.422465 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 09 12:28:38 crc kubenswrapper[4970]: I1209 12:28:38.667671 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-68dcc9cf6f-kz24r" podUID="47def595-51d1-4f88-9536-cf41061aae5e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.175:5353: i/o timeout"
Dec 09 12:28:38 crc kubenswrapper[4970]: I1209 12:28:38.719042 4970 generic.go:334] "Generic (PLEG): container finished" podID="027edf40-c863-46a8-8950-322f56db87d3" containerID="b71341b506c6fde3668d225099185895891e9697836992f67b55dd78d5cbf49b" exitCode=0
Dec 09 12:28:38 crc kubenswrapper[4970]: I1209 12:28:38.720228 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2bmkk" event={"ID":"027edf40-c863-46a8-8950-322f56db87d3","Type":"ContainerDied","Data":"b71341b506c6fde3668d225099185895891e9697836992f67b55dd78d5cbf49b"}
Dec 09 12:28:38 crc kubenswrapper[4970]: I1209 12:28:38.723562 4970 generic.go:334] "Generic (PLEG): container finished" podID="222db933-1bf5-4df0-aa84-362453b9ba35" containerID="9c1cf379934d31e88480a5ea3214360b781f8a8077693ee79346b6b851871483" exitCode=0
Dec 09 12:28:38 crc kubenswrapper[4970]: I1209 12:28:38.723598 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cltzd" event={"ID":"222db933-1bf5-4df0-aa84-362453b9ba35","Type":"ContainerDied","Data":"9c1cf379934d31e88480a5ea3214360b781f8a8077693ee79346b6b851871483"}
Dec 09 12:28:38 crc kubenswrapper[4970]: I1209 12:28:38.726626 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"80bba95d-00c0-48b9-aac2-c6fbd912ca8c","Type":"ContainerStarted","Data":"10b87e9954813ba006dafc7d946167587c53c4de5a50dbcfbe6058013703f32d"}
Dec 09 12:28:38 crc kubenswrapper[4970]: I1209 12:28:38.733736 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e179577e-4bb5-4486-9c3f-536899514d12","Type":"ContainerStarted","Data":"f5bd4178ec922f74cdb1fb051026c51137fcfc1c73c22f3a1b70e9dd5718b6f8"}
Dec 09 12:28:38 crc kubenswrapper[4970]: I1209 12:28:38.733795 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e179577e-4bb5-4486-9c3f-536899514d12","Type":"ContainerStarted","Data":"64fac7de2b140ccf6e824238109f842d756b5d6abc1af865f0030a42e1e12385"}
Dec 09 12:28:38 crc kubenswrapper[4970]: I1209 12:28:38.742955 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-w27wz" event={"ID":"dabcded3-fd37-4349-b3dc-98b143771dca","Type":"ContainerStarted","Data":"8b0516ed2c3dc45f9ba69561f9dc08fe8ea1e5929348a534d7ca241704a0388a"}
Dec 09 12:28:38 crc kubenswrapper[4970]: I1209 12:28:38.743205 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-w27wz" podUID="dabcded3-fd37-4349-b3dc-98b143771dca" containerName="dnsmasq-dns" containerID="cri-o://8b0516ed2c3dc45f9ba69561f9dc08fe8ea1e5929348a534d7ca241704a0388a" gracePeriod=10
Dec 09 12:28:38 crc kubenswrapper[4970]: I1209 12:28:38.743549 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-w27wz"
Dec 09 12:28:38 crc kubenswrapper[4970]: I1209 12:28:38.747191 4970 generic.go:334] "Generic (PLEG): container finished" podID="0b1f388d-9583-482f-9089-714ba98cafff" containerID="9759fa5203ef63cf898ad0a1d8defd98b53d0467c3a71c53cfaf3f99e78252ab" exitCode=0
Dec 09 12:28:38 crc kubenswrapper[4970]: I1209 12:28:38.747264 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-gzhss" event={"ID":"0b1f388d-9583-482f-9089-714ba98cafff","Type":"ContainerDied","Data":"9759fa5203ef63cf898ad0a1d8defd98b53d0467c3a71c53cfaf3f99e78252ab"}
Dec 09 12:28:38 crc kubenswrapper[4970]: I1209 12:28:38.806293 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-w27wz" podStartSLOduration=30.806222323 podStartE2EDuration="30.806222323s" podCreationTimestamp="2025-12-09 12:28:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:28:38.804984561 +0000 UTC m=+1331.365465622" watchObservedRunningTime="2025-12-09 12:28:38.806222323 +0000 UTC m=+1331.366703374"
Dec 09 12:28:39 crc kubenswrapper[4970]: I1209 12:28:39.430582 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-w27wz"
Dec 09 12:28:39 crc kubenswrapper[4970]: I1209 12:28:39.508019 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dabcded3-fd37-4349-b3dc-98b143771dca-dns-swift-storage-0\") pod \"dabcded3-fd37-4349-b3dc-98b143771dca\" (UID: \"dabcded3-fd37-4349-b3dc-98b143771dca\") "
Dec 09 12:28:39 crc kubenswrapper[4970]: I1209 12:28:39.508086 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dabcded3-fd37-4349-b3dc-98b143771dca-ovsdbserver-sb\") pod \"dabcded3-fd37-4349-b3dc-98b143771dca\" (UID: \"dabcded3-fd37-4349-b3dc-98b143771dca\") "
Dec 09 12:28:39 crc kubenswrapper[4970]: I1209 12:28:39.508229 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8krxb\" (UniqueName: \"kubernetes.io/projected/dabcded3-fd37-4349-b3dc-98b143771dca-kube-api-access-8krxb\") pod \"dabcded3-fd37-4349-b3dc-98b143771dca\" (UID: \"dabcded3-fd37-4349-b3dc-98b143771dca\") "
Dec 09 12:28:39 crc kubenswrapper[4970]: I1209 12:28:39.508350 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dabcded3-fd37-4349-b3dc-98b143771dca-dns-svc\") pod \"dabcded3-fd37-4349-b3dc-98b143771dca\" (UID: \"dabcded3-fd37-4349-b3dc-98b143771dca\") "
Dec 09 12:28:39 crc kubenswrapper[4970]: I1209 12:28:39.508436 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dabcded3-fd37-4349-b3dc-98b143771dca-config\") pod \"dabcded3-fd37-4349-b3dc-98b143771dca\" (UID: \"dabcded3-fd37-4349-b3dc-98b143771dca\") "
Dec 09 12:28:39 crc kubenswrapper[4970]: I1209 12:28:39.509552 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dabcded3-fd37-4349-b3dc-98b143771dca-ovsdbserver-nb\") pod \"dabcded3-fd37-4349-b3dc-98b143771dca\" (UID: \"dabcded3-fd37-4349-b3dc-98b143771dca\") "
Dec 09 12:28:39 crc kubenswrapper[4970]: I1209 12:28:39.519516 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dabcded3-fd37-4349-b3dc-98b143771dca-kube-api-access-8krxb" (OuterVolumeSpecName: "kube-api-access-8krxb") pod "dabcded3-fd37-4349-b3dc-98b143771dca" (UID: "dabcded3-fd37-4349-b3dc-98b143771dca"). InnerVolumeSpecName "kube-api-access-8krxb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:28:39 crc kubenswrapper[4970]: I1209 12:28:39.589974 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dabcded3-fd37-4349-b3dc-98b143771dca-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "dabcded3-fd37-4349-b3dc-98b143771dca" (UID: "dabcded3-fd37-4349-b3dc-98b143771dca"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:28:39 crc kubenswrapper[4970]: I1209 12:28:39.614398 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8krxb\" (UniqueName: \"kubernetes.io/projected/dabcded3-fd37-4349-b3dc-98b143771dca-kube-api-access-8krxb\") on node \"crc\" DevicePath \"\""
Dec 09 12:28:39 crc kubenswrapper[4970]: I1209 12:28:39.614458 4970 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dabcded3-fd37-4349-b3dc-98b143771dca-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Dec 09 12:28:39 crc kubenswrapper[4970]: I1209 12:28:39.617493 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dabcded3-fd37-4349-b3dc-98b143771dca-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dabcded3-fd37-4349-b3dc-98b143771dca" (UID: "dabcded3-fd37-4349-b3dc-98b143771dca"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:28:39 crc kubenswrapper[4970]: I1209 12:28:39.631123 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dabcded3-fd37-4349-b3dc-98b143771dca-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dabcded3-fd37-4349-b3dc-98b143771dca" (UID: "dabcded3-fd37-4349-b3dc-98b143771dca"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:28:39 crc kubenswrapper[4970]: I1209 12:28:39.639155 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dabcded3-fd37-4349-b3dc-98b143771dca-config" (OuterVolumeSpecName: "config") pod "dabcded3-fd37-4349-b3dc-98b143771dca" (UID: "dabcded3-fd37-4349-b3dc-98b143771dca"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:28:39 crc kubenswrapper[4970]: I1209 12:28:39.639918 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dabcded3-fd37-4349-b3dc-98b143771dca-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dabcded3-fd37-4349-b3dc-98b143771dca" (UID: "dabcded3-fd37-4349-b3dc-98b143771dca"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 12:28:39 crc kubenswrapper[4970]: I1209 12:28:39.716393 4970 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dabcded3-fd37-4349-b3dc-98b143771dca-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 09 12:28:39 crc kubenswrapper[4970]: I1209 12:28:39.716444 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dabcded3-fd37-4349-b3dc-98b143771dca-config\") on node \"crc\" DevicePath \"\""
Dec 09 12:28:39 crc kubenswrapper[4970]: I1209 12:28:39.716454 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dabcded3-fd37-4349-b3dc-98b143771dca-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 09 12:28:39 crc kubenswrapper[4970]: I1209 12:28:39.716463 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dabcded3-fd37-4349-b3dc-98b143771dca-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 09 12:28:40 crc kubenswrapper[4970]: I1209 12:28:40.672320 4970 generic.go:334] "Generic (PLEG): container finished" podID="dabcded3-fd37-4349-b3dc-98b143771dca" containerID="8b0516ed2c3dc45f9ba69561f9dc08fe8ea1e5929348a534d7ca241704a0388a" exitCode=0
Dec 09 12:28:40 crc kubenswrapper[4970]: I1209 12:28:40.672511 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-w27wz"
Dec 09 12:28:40 crc kubenswrapper[4970]: I1209 12:28:40.672514 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-w27wz" event={"ID":"dabcded3-fd37-4349-b3dc-98b143771dca","Type":"ContainerDied","Data":"8b0516ed2c3dc45f9ba69561f9dc08fe8ea1e5929348a534d7ca241704a0388a"}
Dec 09 12:28:40 crc kubenswrapper[4970]: I1209 12:28:40.678918 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-w27wz" event={"ID":"dabcded3-fd37-4349-b3dc-98b143771dca","Type":"ContainerDied","Data":"40b39b9ab605363c78ef00a56e235b86be6e551a6802b4ca4e4fe8f9a3a48988"}
Dec 09 12:28:40 crc kubenswrapper[4970]: I1209 12:28:40.678968 4970 scope.go:117] "RemoveContainer" containerID="8b0516ed2c3dc45f9ba69561f9dc08fe8ea1e5929348a534d7ca241704a0388a"
Dec 09 12:28:40 crc kubenswrapper[4970]: I1209 12:28:40.687027 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-gzhss" event={"ID":"0b1f388d-9583-482f-9089-714ba98cafff","Type":"ContainerStarted","Data":"18e8126cec41af8171cf75aecdf41db0f438ad6dcc4c315a2ebe64e03d5bc1bc"}
Dec 09 12:28:40 crc kubenswrapper[4970]: I1209 12:28:40.687622 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-gzhss"
Dec 09 12:28:40 crc kubenswrapper[4970]: I1209 12:28:40.702867 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"80bba95d-00c0-48b9-aac2-c6fbd912ca8c","Type":"ContainerStarted","Data":"63ed8e2f0edc7129a31e9a9b85b52c1a251093db83b1661c0d0dd7a11fce25f6"}
Dec 09 12:28:40 crc kubenswrapper[4970]: I1209 12:28:40.708258 4970 scope.go:117] "RemoveContainer" containerID="acffd8acfa4445cb6e559c6e9d20e76bb44d712b21f4e2120500781050c0ae9b"
Dec 09 12:28:40 crc kubenswrapper[4970]: I1209 12:28:40.720508 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-w27wz"]
Dec 09 12:28:40 crc kubenswrapper[4970]: I1209 12:28:40.735454 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-w27wz"]
Dec 09 12:28:40 crc kubenswrapper[4970]: I1209 12:28:40.739901 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-gzhss" podStartSLOduration=5.739875051 podStartE2EDuration="5.739875051s" podCreationTimestamp="2025-12-09 12:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:28:40.722475999 +0000 UTC m=+1333.282957050" watchObservedRunningTime="2025-12-09 12:28:40.739875051 +0000 UTC m=+1333.300356102"
Dec 09 12:28:40 crc kubenswrapper[4970]: I1209 12:28:40.783536 4970 scope.go:117] "RemoveContainer" containerID="8b0516ed2c3dc45f9ba69561f9dc08fe8ea1e5929348a534d7ca241704a0388a"
Dec 09 12:28:40 crc kubenswrapper[4970]: E1209 12:28:40.784310 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b0516ed2c3dc45f9ba69561f9dc08fe8ea1e5929348a534d7ca241704a0388a\": container with ID starting with 8b0516ed2c3dc45f9ba69561f9dc08fe8ea1e5929348a534d7ca241704a0388a not found: ID does not exist" containerID="8b0516ed2c3dc45f9ba69561f9dc08fe8ea1e5929348a534d7ca241704a0388a"
Dec 09 12:28:40 crc kubenswrapper[4970]: I1209 12:28:40.784353 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b0516ed2c3dc45f9ba69561f9dc08fe8ea1e5929348a534d7ca241704a0388a"} err="failed to get container status \"8b0516ed2c3dc45f9ba69561f9dc08fe8ea1e5929348a534d7ca241704a0388a\": rpc error: code = NotFound desc = could not find container \"8b0516ed2c3dc45f9ba69561f9dc08fe8ea1e5929348a534d7ca241704a0388a\": container with ID starting with 8b0516ed2c3dc45f9ba69561f9dc08fe8ea1e5929348a534d7ca241704a0388a not found: ID does not exist"
Dec 09 12:28:40 crc kubenswrapper[4970]: I1209 12:28:40.784378 4970 scope.go:117] "RemoveContainer" containerID="acffd8acfa4445cb6e559c6e9d20e76bb44d712b21f4e2120500781050c0ae9b"
Dec 09 12:28:40 crc kubenswrapper[4970]: E1209 12:28:40.784947 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acffd8acfa4445cb6e559c6e9d20e76bb44d712b21f4e2120500781050c0ae9b\": container with ID starting with acffd8acfa4445cb6e559c6e9d20e76bb44d712b21f4e2120500781050c0ae9b not found: ID does not exist" containerID="acffd8acfa4445cb6e559c6e9d20e76bb44d712b21f4e2120500781050c0ae9b"
Dec 09 12:28:40 crc kubenswrapper[4970]: I1209 12:28:40.784997 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acffd8acfa4445cb6e559c6e9d20e76bb44d712b21f4e2120500781050c0ae9b"} err="failed to get container status \"acffd8acfa4445cb6e559c6e9d20e76bb44d712b21f4e2120500781050c0ae9b\": rpc error: code = NotFound desc = could not find container \"acffd8acfa4445cb6e559c6e9d20e76bb44d712b21f4e2120500781050c0ae9b\": container with ID starting with acffd8acfa4445cb6e559c6e9d20e76bb44d712b21f4e2120500781050c0ae9b not found: ID does not exist"
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.208174 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2bmkk"
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.254225 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/027edf40-c863-46a8-8950-322f56db87d3-scripts\") pod \"027edf40-c863-46a8-8950-322f56db87d3\" (UID: \"027edf40-c863-46a8-8950-322f56db87d3\") "
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.254582 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/027edf40-c863-46a8-8950-322f56db87d3-combined-ca-bundle\") pod \"027edf40-c863-46a8-8950-322f56db87d3\" (UID: \"027edf40-c863-46a8-8950-322f56db87d3\") "
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.254697 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/027edf40-c863-46a8-8950-322f56db87d3-logs\") pod \"027edf40-c863-46a8-8950-322f56db87d3\" (UID: \"027edf40-c863-46a8-8950-322f56db87d3\") "
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.254796 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cn5fm\" (UniqueName: \"kubernetes.io/projected/027edf40-c863-46a8-8950-322f56db87d3-kube-api-access-cn5fm\") pod \"027edf40-c863-46a8-8950-322f56db87d3\" (UID: \"027edf40-c863-46a8-8950-322f56db87d3\") "
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.254960 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/027edf40-c863-46a8-8950-322f56db87d3-config-data\") pod \"027edf40-c863-46a8-8950-322f56db87d3\" (UID: \"027edf40-c863-46a8-8950-322f56db87d3\") "
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.255629 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/027edf40-c863-46a8-8950-322f56db87d3-logs" (OuterVolumeSpecName: "logs") pod "027edf40-c863-46a8-8950-322f56db87d3" (UID: "027edf40-c863-46a8-8950-322f56db87d3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.271492 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/027edf40-c863-46a8-8950-322f56db87d3-kube-api-access-cn5fm" (OuterVolumeSpecName: "kube-api-access-cn5fm") pod "027edf40-c863-46a8-8950-322f56db87d3" (UID: "027edf40-c863-46a8-8950-322f56db87d3"). InnerVolumeSpecName "kube-api-access-cn5fm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.295147 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/027edf40-c863-46a8-8950-322f56db87d3-config-data" (OuterVolumeSpecName: "config-data") pod "027edf40-c863-46a8-8950-322f56db87d3" (UID: "027edf40-c863-46a8-8950-322f56db87d3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.296777 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/027edf40-c863-46a8-8950-322f56db87d3-scripts" (OuterVolumeSpecName: "scripts") pod "027edf40-c863-46a8-8950-322f56db87d3" (UID: "027edf40-c863-46a8-8950-322f56db87d3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.298549 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-cltzd"
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.299535 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/027edf40-c863-46a8-8950-322f56db87d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "027edf40-c863-46a8-8950-322f56db87d3" (UID: "027edf40-c863-46a8-8950-322f56db87d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.356779 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/222db933-1bf5-4df0-aa84-362453b9ba35-combined-ca-bundle\") pod \"222db933-1bf5-4df0-aa84-362453b9ba35\" (UID: \"222db933-1bf5-4df0-aa84-362453b9ba35\") "
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.357218 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/222db933-1bf5-4df0-aa84-362453b9ba35-config\") pod \"222db933-1bf5-4df0-aa84-362453b9ba35\" (UID: \"222db933-1bf5-4df0-aa84-362453b9ba35\") "
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.357382 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg2bl\" (UniqueName: \"kubernetes.io/projected/222db933-1bf5-4df0-aa84-362453b9ba35-kube-api-access-xg2bl\") pod \"222db933-1bf5-4df0-aa84-362453b9ba35\" (UID: \"222db933-1bf5-4df0-aa84-362453b9ba35\") "
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.358034 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/027edf40-c863-46a8-8950-322f56db87d3-config-data\") on node \"crc\" DevicePath \"\""
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.358061 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/027edf40-c863-46a8-8950-322f56db87d3-scripts\") on node \"crc\" DevicePath \"\""
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.358073 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/027edf40-c863-46a8-8950-322f56db87d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.358089 4970 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/027edf40-c863-46a8-8950-322f56db87d3-logs\") on node \"crc\" DevicePath \"\""
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.358102 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cn5fm\" (UniqueName: \"kubernetes.io/projected/027edf40-c863-46a8-8950-322f56db87d3-kube-api-access-cn5fm\") on node \"crc\" DevicePath \"\""
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.360856 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/222db933-1bf5-4df0-aa84-362453b9ba35-kube-api-access-xg2bl" (OuterVolumeSpecName: "kube-api-access-xg2bl") pod "222db933-1bf5-4df0-aa84-362453b9ba35" (UID: "222db933-1bf5-4df0-aa84-362453b9ba35"). InnerVolumeSpecName "kube-api-access-xg2bl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.390798 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/222db933-1bf5-4df0-aa84-362453b9ba35-config" (OuterVolumeSpecName: "config") pod "222db933-1bf5-4df0-aa84-362453b9ba35" (UID: "222db933-1bf5-4df0-aa84-362453b9ba35"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.391289 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/222db933-1bf5-4df0-aa84-362453b9ba35-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "222db933-1bf5-4df0-aa84-362453b9ba35" (UID: "222db933-1bf5-4df0-aa84-362453b9ba35"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.460791 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/222db933-1bf5-4df0-aa84-362453b9ba35-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.460826 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/222db933-1bf5-4df0-aa84-362453b9ba35-config\") on node \"crc\" DevicePath \"\""
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.460839 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg2bl\" (UniqueName: \"kubernetes.io/projected/222db933-1bf5-4df0-aa84-362453b9ba35-kube-api-access-xg2bl\") on node \"crc\" DevicePath \"\""
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.725986 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2bmkk"
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.725993 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2bmkk" event={"ID":"027edf40-c863-46a8-8950-322f56db87d3","Type":"ContainerDied","Data":"14bedf93f7d398c6608b9f6cf8fee4fa34ac5ed4733b2a9efae9bf754dc57911"}
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.726048 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14bedf93f7d398c6608b9f6cf8fee4fa34ac5ed4733b2a9efae9bf754dc57911"
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.728003 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cltzd" event={"ID":"222db933-1bf5-4df0-aa84-362453b9ba35","Type":"ContainerDied","Data":"e0489df3f52b0fa5846c43649f2603cfdf37b5e94e03e278d18bd23f2fddff46"}
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.728042 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0489df3f52b0fa5846c43649f2603cfdf37b5e94e03e278d18bd23f2fddff46"
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.728068 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-cltzd"
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.732132 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"80bba95d-00c0-48b9-aac2-c6fbd912ca8c","Type":"ContainerStarted","Data":"e3639f45ef12a8d4489db0b05b7bd2d0284a87ca57a949e8a2c9e0c91aa1a192"}
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.732319 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="80bba95d-00c0-48b9-aac2-c6fbd912ca8c" containerName="glance-log" containerID="cri-o://63ed8e2f0edc7129a31e9a9b85b52c1a251093db83b1661c0d0dd7a11fce25f6" gracePeriod=30
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.732920 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="80bba95d-00c0-48b9-aac2-c6fbd912ca8c" containerName="glance-httpd" containerID="cri-o://e3639f45ef12a8d4489db0b05b7bd2d0284a87ca57a949e8a2c9e0c91aa1a192" gracePeriod=30
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.748145 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e179577e-4bb5-4486-9c3f-536899514d12" containerName="glance-log" containerID="cri-o://f5bd4178ec922f74cdb1fb051026c51137fcfc1c73c22f3a1b70e9dd5718b6f8" gracePeriod=30
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.748535 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e179577e-4bb5-4486-9c3f-536899514d12","Type":"ContainerStarted","Data":"ebb6dc5a44a6da8c2c6992c50847e11fd7ca2db337a055a9e9df71b4a634c80b"}
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.748623 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e179577e-4bb5-4486-9c3f-536899514d12" containerName="glance-httpd" containerID="cri-o://ebb6dc5a44a6da8c2c6992c50847e11fd7ca2db337a055a9e9df71b4a634c80b" gracePeriod=30
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.793535 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.793509007 podStartE2EDuration="6.793509007s" podCreationTimestamp="2025-12-09 12:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:28:41.782039813 +0000 UTC m=+1334.342520864" watchObservedRunningTime="2025-12-09 12:28:41.793509007 +0000 UTC m=+1334.353990058"
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.863362 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dabcded3-fd37-4349-b3dc-98b143771dca" path="/var/lib/kubelet/pods/dabcded3-fd37-4349-b3dc-98b143771dca/volumes"
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.882192 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.882170219 podStartE2EDuration="6.882170219s" podCreationTimestamp="2025-12-09 12:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:28:41.856754765 +0000 UTC m=+1334.417235816" watchObservedRunningTime="2025-12-09 12:28:41.882170219 +0000 UTC m=+1334.442651270"
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.943805 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Dec 09 12:28:41 crc kubenswrapper[4970]: I1209 12:28:41.970738 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Dec 09 12:28:42 crc kubenswrapper[4970]: E1209 12:28:42.105911 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod222db933_1bf5_4df0_aa84_362453b9ba35.slice/crio-e0489df3f52b0fa5846c43649f2603cfdf37b5e94e03e278d18bd23f2fddff46\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod222db933_1bf5_4df0_aa84_362453b9ba35.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode179577e_4bb5_4486_9c3f_536899514d12.slice/crio-f5bd4178ec922f74cdb1fb051026c51137fcfc1c73c22f3a1b70e9dd5718b6f8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode179577e_4bb5_4486_9c3f_536899514d12.slice/crio-conmon-f5bd4178ec922f74cdb1fb051026c51137fcfc1c73c22f3a1b70e9dd5718b6f8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80bba95d_00c0_48b9_aac2_c6fbd912ca8c.slice/crio-conmon-63ed8e2f0edc7129a31e9a9b85b52c1a251093db83b1661c0d0dd7a11fce25f6.scope\": RecentStats: unable to find data in memory cache]"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.335919 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-78687fd956-j7ckc"]
Dec 09 12:28:42 crc kubenswrapper[4970]: E1209 12:28:42.336476 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dabcded3-fd37-4349-b3dc-98b143771dca" containerName="dnsmasq-dns"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.336500 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="dabcded3-fd37-4349-b3dc-98b143771dca" containerName="dnsmasq-dns"
Dec 09 12:28:42 crc kubenswrapper[4970]: E1209 12:28:42.336519 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dabcded3-fd37-4349-b3dc-98b143771dca" containerName="init"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.336528 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="dabcded3-fd37-4349-b3dc-98b143771dca" containerName="init"
Dec 09 12:28:42 crc kubenswrapper[4970]: E1209 12:28:42.336538 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="222db933-1bf5-4df0-aa84-362453b9ba35" containerName="neutron-db-sync"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.336546 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="222db933-1bf5-4df0-aa84-362453b9ba35" containerName="neutron-db-sync"
Dec 09 12:28:42 crc kubenswrapper[4970]: E1209 12:28:42.336572 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="027edf40-c863-46a8-8950-322f56db87d3" containerName="placement-db-sync"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.336581 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="027edf40-c863-46a8-8950-322f56db87d3" containerName="placement-db-sync"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.336841 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="dabcded3-fd37-4349-b3dc-98b143771dca" containerName="dnsmasq-dns"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.336868 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="222db933-1bf5-4df0-aa84-362453b9ba35" containerName="neutron-db-sync"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.336887 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="027edf40-c863-46a8-8950-322f56db87d3" containerName="placement-db-sync"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.338474 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-78687fd956-j7ckc"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.343541 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.343902 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.344119 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.344171 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.348349 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-hqs7r"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.362089 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-78687fd956-j7ckc"]
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.498631 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa1854f6-8032-4a50-8808-cbd83782deb5-config-data\") pod \"placement-78687fd956-j7ckc\" (UID: \"fa1854f6-8032-4a50-8808-cbd83782deb5\") " pod="openstack/placement-78687fd956-j7ckc"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.498742 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa1854f6-8032-4a50-8808-cbd83782deb5-internal-tls-certs\") pod \"placement-78687fd956-j7ckc\" (UID: \"fa1854f6-8032-4a50-8808-cbd83782deb5\") " pod="openstack/placement-78687fd956-j7ckc"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.498784 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa1854f6-8032-4a50-8808-cbd83782deb5-scripts\") pod \"placement-78687fd956-j7ckc\" (UID: \"fa1854f6-8032-4a50-8808-cbd83782deb5\") " pod="openstack/placement-78687fd956-j7ckc"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.498815 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjrnd\" (UniqueName: \"kubernetes.io/projected/fa1854f6-8032-4a50-8808-cbd83782deb5-kube-api-access-pjrnd\") pod \"placement-78687fd956-j7ckc\" (UID: \"fa1854f6-8032-4a50-8808-cbd83782deb5\") " pod="openstack/placement-78687fd956-j7ckc"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.498842 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa1854f6-8032-4a50-8808-cbd83782deb5-public-tls-certs\") pod \"placement-78687fd956-j7ckc\" (UID: \"fa1854f6-8032-4a50-8808-cbd83782deb5\") " pod="openstack/placement-78687fd956-j7ckc"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.498898 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa1854f6-8032-4a50-8808-cbd83782deb5-logs\") pod \"placement-78687fd956-j7ckc\" (UID: \"fa1854f6-8032-4a50-8808-cbd83782deb5\") " pod="openstack/placement-78687fd956-j7ckc"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.498938 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa1854f6-8032-4a50-8808-cbd83782deb5-combined-ca-bundle\") pod \"placement-78687fd956-j7ckc\" (UID: \"fa1854f6-8032-4a50-8808-cbd83782deb5\") " pod="openstack/placement-78687fd956-j7ckc"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.523960 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-gzhss"]
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.541745 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-hh2d8"]
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.544122 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-hh2d8"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.612814 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-hh2d8"]
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.631049 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa1854f6-8032-4a50-8808-cbd83782deb5-config-data\") pod \"placement-78687fd956-j7ckc\" (UID: \"fa1854f6-8032-4a50-8808-cbd83782deb5\") " pod="openstack/placement-78687fd956-j7ckc"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.631111 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e7bb9b20-9449-48b4-b3ba-dc547cf39558-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-hh2d8\" (UID: \"e7bb9b20-9449-48b4-b3ba-dc547cf39558\") " pod="openstack/dnsmasq-dns-55f844cf75-hh2d8"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.631148 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7bb9b20-9449-48b4-b3ba-dc547cf39558-config\") pod \"dnsmasq-dns-55f844cf75-hh2d8\" (UID: \"e7bb9b20-9449-48b4-b3ba-dc547cf39558\") " pod="openstack/dnsmasq-dns-55f844cf75-hh2d8"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.631167 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7bb9b20-9449-48b4-b3ba-dc547cf39558-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-hh2d8\" (UID: \"e7bb9b20-9449-48b4-b3ba-dc547cf39558\") " pod="openstack/dnsmasq-dns-55f844cf75-hh2d8"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.631220 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa1854f6-8032-4a50-8808-cbd83782deb5-internal-tls-certs\") pod \"placement-78687fd956-j7ckc\" (UID: \"fa1854f6-8032-4a50-8808-cbd83782deb5\") " pod="openstack/placement-78687fd956-j7ckc"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.642569 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa1854f6-8032-4a50-8808-cbd83782deb5-scripts\") pod \"placement-78687fd956-j7ckc\" (UID: \"fa1854f6-8032-4a50-8808-cbd83782deb5\") " pod="openstack/placement-78687fd956-j7ckc"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.642630 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjrnd\" (UniqueName: \"kubernetes.io/projected/fa1854f6-8032-4a50-8808-cbd83782deb5-kube-api-access-pjrnd\") pod \"placement-78687fd956-j7ckc\" (UID: \"fa1854f6-8032-4a50-8808-cbd83782deb5\") " pod="openstack/placement-78687fd956-j7ckc"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.642659 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa1854f6-8032-4a50-8808-cbd83782deb5-public-tls-certs\") pod \"placement-78687fd956-j7ckc\" (UID: \"fa1854f6-8032-4a50-8808-cbd83782deb5\") " pod="openstack/placement-78687fd956-j7ckc"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.642729 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa1854f6-8032-4a50-8808-cbd83782deb5-logs\") pod \"placement-78687fd956-j7ckc\" (UID: \"fa1854f6-8032-4a50-8808-cbd83782deb5\") " pod="openstack/placement-78687fd956-j7ckc"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.642757 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7bb9b20-9449-48b4-b3ba-dc547cf39558-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-hh2d8\" (UID: \"e7bb9b20-9449-48b4-b3ba-dc547cf39558\") " pod="openstack/dnsmasq-dns-55f844cf75-hh2d8"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.642799 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa1854f6-8032-4a50-8808-cbd83782deb5-combined-ca-bundle\") pod \"placement-78687fd956-j7ckc\" (UID: \"fa1854f6-8032-4a50-8808-cbd83782deb5\") " pod="openstack/placement-78687fd956-j7ckc"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.642869 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbj2h\" (UniqueName: \"kubernetes.io/projected/e7bb9b20-9449-48b4-b3ba-dc547cf39558-kube-api-access-xbj2h\") pod \"dnsmasq-dns-55f844cf75-hh2d8\" (UID: \"e7bb9b20-9449-48b4-b3ba-dc547cf39558\") " pod="openstack/dnsmasq-dns-55f844cf75-hh2d8"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.645024 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7bb9b20-9449-48b4-b3ba-dc547cf39558-dns-svc\") pod \"dnsmasq-dns-55f844cf75-hh2d8\" (UID: \"e7bb9b20-9449-48b4-b3ba-dc547cf39558\") " pod="openstack/dnsmasq-dns-55f844cf75-hh2d8"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.648687 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa1854f6-8032-4a50-8808-cbd83782deb5-logs\") pod \"placement-78687fd956-j7ckc\" (UID: \"fa1854f6-8032-4a50-8808-cbd83782deb5\") " pod="openstack/placement-78687fd956-j7ckc"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.663479 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa1854f6-8032-4a50-8808-cbd83782deb5-scripts\") pod \"placement-78687fd956-j7ckc\" (UID: \"fa1854f6-8032-4a50-8808-cbd83782deb5\") " pod="openstack/placement-78687fd956-j7ckc"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.663495 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6d7d95f968-k2gh2"]
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.666980 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa1854f6-8032-4a50-8808-cbd83782deb5-internal-tls-certs\") pod \"placement-78687fd956-j7ckc\" (UID: \"fa1854f6-8032-4a50-8808-cbd83782deb5\") " pod="openstack/placement-78687fd956-j7ckc"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.667531 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa1854f6-8032-4a50-8808-cbd83782deb5-combined-ca-bundle\") pod \"placement-78687fd956-j7ckc\" (UID: \"fa1854f6-8032-4a50-8808-cbd83782deb5\") " pod="openstack/placement-78687fd956-j7ckc"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.668154 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6d7d95f968-k2gh2"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.676722 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.677046 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.677357 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.677594 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-xrdl8"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.679275 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa1854f6-8032-4a50-8808-cbd83782deb5-config-data\") pod \"placement-78687fd956-j7ckc\" (UID: \"fa1854f6-8032-4a50-8808-cbd83782deb5\") " pod="openstack/placement-78687fd956-j7ckc"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.695153 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa1854f6-8032-4a50-8808-cbd83782deb5-public-tls-certs\") pod \"placement-78687fd956-j7ckc\" (UID: \"fa1854f6-8032-4a50-8808-cbd83782deb5\") " pod="openstack/placement-78687fd956-j7ckc"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.697500 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6d7d95f968-k2gh2"]
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.699132 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjrnd\" (UniqueName: \"kubernetes.io/projected/fa1854f6-8032-4a50-8808-cbd83782deb5-kube-api-access-pjrnd\") pod \"placement-78687fd956-j7ckc\" (UID: \"fa1854f6-8032-4a50-8808-cbd83782deb5\") " pod="openstack/placement-78687fd956-j7ckc"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.746969 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbj2h\" (UniqueName: \"kubernetes.io/projected/e7bb9b20-9449-48b4-b3ba-dc547cf39558-kube-api-access-xbj2h\") pod \"dnsmasq-dns-55f844cf75-hh2d8\" (UID: \"e7bb9b20-9449-48b4-b3ba-dc547cf39558\") " pod="openstack/dnsmasq-dns-55f844cf75-hh2d8"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.747056 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d34e443-c923-42f0-83be-3d060424380b-combined-ca-bundle\") pod \"neutron-6d7d95f968-k2gh2\" (UID: \"7d34e443-c923-42f0-83be-3d060424380b\") " pod="openstack/neutron-6d7d95f968-k2gh2"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.747100 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7bb9b20-9449-48b4-b3ba-dc547cf39558-dns-svc\") pod \"dnsmasq-dns-55f844cf75-hh2d8\" (UID: \"e7bb9b20-9449-48b4-b3ba-dc547cf39558\") " pod="openstack/dnsmasq-dns-55f844cf75-hh2d8"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.747121 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d34e443-c923-42f0-83be-3d060424380b-ovndb-tls-certs\") pod \"neutron-6d7d95f968-k2gh2\" (UID: \"7d34e443-c923-42f0-83be-3d060424380b\") " pod="openstack/neutron-6d7d95f968-k2gh2"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.747197 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4v5q\" (UniqueName: \"kubernetes.io/projected/7d34e443-c923-42f0-83be-3d060424380b-kube-api-access-r4v5q\") pod \"neutron-6d7d95f968-k2gh2\" (UID: \"7d34e443-c923-42f0-83be-3d060424380b\") " pod="openstack/neutron-6d7d95f968-k2gh2"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.747235 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d34e443-c923-42f0-83be-3d060424380b-config\") pod \"neutron-6d7d95f968-k2gh2\" (UID: \"7d34e443-c923-42f0-83be-3d060424380b\") " pod="openstack/neutron-6d7d95f968-k2gh2"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.747281 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7d34e443-c923-42f0-83be-3d060424380b-httpd-config\") pod \"neutron-6d7d95f968-k2gh2\" (UID: \"7d34e443-c923-42f0-83be-3d060424380b\") " pod="openstack/neutron-6d7d95f968-k2gh2"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.747317 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e7bb9b20-9449-48b4-b3ba-dc547cf39558-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-hh2d8\" (UID: \"e7bb9b20-9449-48b4-b3ba-dc547cf39558\") " pod="openstack/dnsmasq-dns-55f844cf75-hh2d8"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.747342 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7bb9b20-9449-48b4-b3ba-dc547cf39558-config\") pod \"dnsmasq-dns-55f844cf75-hh2d8\" (UID: \"e7bb9b20-9449-48b4-b3ba-dc547cf39558\") " pod="openstack/dnsmasq-dns-55f844cf75-hh2d8"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.747362 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7bb9b20-9449-48b4-b3ba-dc547cf39558-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-hh2d8\" (UID: \"e7bb9b20-9449-48b4-b3ba-dc547cf39558\") " pod="openstack/dnsmasq-dns-55f844cf75-hh2d8"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.747440 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7bb9b20-9449-48b4-b3ba-dc547cf39558-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-hh2d8\" (UID: \"e7bb9b20-9449-48b4-b3ba-dc547cf39558\") " pod="openstack/dnsmasq-dns-55f844cf75-hh2d8"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.749572 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7bb9b20-9449-48b4-b3ba-dc547cf39558-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-hh2d8\" (UID: \"e7bb9b20-9449-48b4-b3ba-dc547cf39558\") " pod="openstack/dnsmasq-dns-55f844cf75-hh2d8"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.750514 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7bb9b20-9449-48b4-b3ba-dc547cf39558-config\") pod \"dnsmasq-dns-55f844cf75-hh2d8\" (UID: \"e7bb9b20-9449-48b4-b3ba-dc547cf39558\") " pod="openstack/dnsmasq-dns-55f844cf75-hh2d8"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.751064 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e7bb9b20-9449-48b4-b3ba-dc547cf39558-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-hh2d8\" (UID: \"e7bb9b20-9449-48b4-b3ba-dc547cf39558\") " pod="openstack/dnsmasq-dns-55f844cf75-hh2d8"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.751334 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7bb9b20-9449-48b4-b3ba-dc547cf39558-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-hh2d8\" (UID: \"e7bb9b20-9449-48b4-b3ba-dc547cf39558\") " pod="openstack/dnsmasq-dns-55f844cf75-hh2d8"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.752173 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7bb9b20-9449-48b4-b3ba-dc547cf39558-dns-svc\") pod \"dnsmasq-dns-55f844cf75-hh2d8\" (UID: \"e7bb9b20-9449-48b4-b3ba-dc547cf39558\") " pod="openstack/dnsmasq-dns-55f844cf75-hh2d8"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.774420 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbj2h\" (UniqueName: \"kubernetes.io/projected/e7bb9b20-9449-48b4-b3ba-dc547cf39558-kube-api-access-xbj2h\") pod \"dnsmasq-dns-55f844cf75-hh2d8\" (UID: \"e7bb9b20-9449-48b4-b3ba-dc547cf39558\") " pod="openstack/dnsmasq-dns-55f844cf75-hh2d8"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.822015 4970 generic.go:334] "Generic (PLEG): container finished" podID="80bba95d-00c0-48b9-aac2-c6fbd912ca8c" containerID="e3639f45ef12a8d4489db0b05b7bd2d0284a87ca57a949e8a2c9e0c91aa1a192" exitCode=0
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.822045 4970 generic.go:334] "Generic (PLEG): container finished" podID="80bba95d-00c0-48b9-aac2-c6fbd912ca8c" containerID="63ed8e2f0edc7129a31e9a9b85b52c1a251093db83b1661c0d0dd7a11fce25f6" exitCode=143
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.822110 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"80bba95d-00c0-48b9-aac2-c6fbd912ca8c","Type":"ContainerDied","Data":"e3639f45ef12a8d4489db0b05b7bd2d0284a87ca57a949e8a2c9e0c91aa1a192"}
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.822161 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"80bba95d-00c0-48b9-aac2-c6fbd912ca8c","Type":"ContainerDied","Data":"63ed8e2f0edc7129a31e9a9b85b52c1a251093db83b1661c0d0dd7a11fce25f6"}
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.848030 4970 generic.go:334] "Generic (PLEG): container finished" podID="e179577e-4bb5-4486-9c3f-536899514d12" containerID="ebb6dc5a44a6da8c2c6992c50847e11fd7ca2db337a055a9e9df71b4a634c80b" exitCode=0
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.848061 4970 generic.go:334] "Generic (PLEG): container finished" podID="e179577e-4bb5-4486-9c3f-536899514d12" containerID="f5bd4178ec922f74cdb1fb051026c51137fcfc1c73c22f3a1b70e9dd5718b6f8" exitCode=143
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.848119 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e179577e-4bb5-4486-9c3f-536899514d12","Type":"ContainerDied","Data":"ebb6dc5a44a6da8c2c6992c50847e11fd7ca2db337a055a9e9df71b4a634c80b"}
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.848165 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e179577e-4bb5-4486-9c3f-536899514d12","Type":"ContainerDied","Data":"f5bd4178ec922f74cdb1fb051026c51137fcfc1c73c22f3a1b70e9dd5718b6f8"}
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.848283 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-gzhss" podUID="0b1f388d-9583-482f-9089-714ba98cafff" containerName="dnsmasq-dns" containerID="cri-o://18e8126cec41af8171cf75aecdf41db0f438ad6dcc4c315a2ebe64e03d5bc1bc" gracePeriod=10
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.849198 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d34e443-c923-42f0-83be-3d060424380b-config\") pod \"neutron-6d7d95f968-k2gh2\" (UID: \"7d34e443-c923-42f0-83be-3d060424380b\") " pod="openstack/neutron-6d7d95f968-k2gh2"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.849350 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7d34e443-c923-42f0-83be-3d060424380b-httpd-config\") pod \"neutron-6d7d95f968-k2gh2\" (UID: \"7d34e443-c923-42f0-83be-3d060424380b\") " pod="openstack/neutron-6d7d95f968-k2gh2"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.849599 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d34e443-c923-42f0-83be-3d060424380b-combined-ca-bundle\") pod \"neutron-6d7d95f968-k2gh2\" (UID: \"7d34e443-c923-42f0-83be-3d060424380b\") " pod="openstack/neutron-6d7d95f968-k2gh2"
Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.849725 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d34e443-c923-42f0-83be-3d060424380b-ovndb-tls-certs\") pod \"neutron-6d7d95f968-k2gh2\" (UID: \"7d34e443-c923-42f0-83be-3d060424380b\") " pod="openstack/neutron-6d7d95f968-k2gh2"
Dec 09 12:28:42 crc kubenswrapper[4970]:
I1209 12:28:42.850164 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4v5q\" (UniqueName: \"kubernetes.io/projected/7d34e443-c923-42f0-83be-3d060424380b-kube-api-access-r4v5q\") pod \"neutron-6d7d95f968-k2gh2\" (UID: \"7d34e443-c923-42f0-83be-3d060424380b\") " pod="openstack/neutron-6d7d95f968-k2gh2" Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.854470 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.857770 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d34e443-c923-42f0-83be-3d060424380b-ovndb-tls-certs\") pod \"neutron-6d7d95f968-k2gh2\" (UID: \"7d34e443-c923-42f0-83be-3d060424380b\") " pod="openstack/neutron-6d7d95f968-k2gh2" Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.859614 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d34e443-c923-42f0-83be-3d060424380b-config\") pod \"neutron-6d7d95f968-k2gh2\" (UID: \"7d34e443-c923-42f0-83be-3d060424380b\") " pod="openstack/neutron-6d7d95f968-k2gh2" Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.864687 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d34e443-c923-42f0-83be-3d060424380b-combined-ca-bundle\") pod \"neutron-6d7d95f968-k2gh2\" (UID: \"7d34e443-c923-42f0-83be-3d060424380b\") " pod="openstack/neutron-6d7d95f968-k2gh2" Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.868095 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7d34e443-c923-42f0-83be-3d060424380b-httpd-config\") pod \"neutron-6d7d95f968-k2gh2\" (UID: \"7d34e443-c923-42f0-83be-3d060424380b\") " pod="openstack/neutron-6d7d95f968-k2gh2" Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.869178 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4v5q\" (UniqueName: \"kubernetes.io/projected/7d34e443-c923-42f0-83be-3d060424380b-kube-api-access-r4v5q\") pod \"neutron-6d7d95f968-k2gh2\" (UID: \"7d34e443-c923-42f0-83be-3d060424380b\") " pod="openstack/neutron-6d7d95f968-k2gh2" Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.915546 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-hh2d8" Dec 09 12:28:42 crc kubenswrapper[4970]: I1209 12:28:42.980615 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-78687fd956-j7ckc" Dec 09 12:28:43 crc kubenswrapper[4970]: I1209 12:28:43.133501 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6d7d95f968-k2gh2" Dec 09 12:28:43 crc kubenswrapper[4970]: I1209 12:28:43.874278 4970 generic.go:334] "Generic (PLEG): container finished" podID="41a40ba0-3b8c-4068-9c2c-1e07dfba18a4" containerID="20001436ea007dee95f34238e1d1cdaa96eef1b75c0520dd4a1fb3b08b17a534" exitCode=0 Dec 09 12:28:43 crc kubenswrapper[4970]: I1209 12:28:43.874359 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4kwdc" event={"ID":"41a40ba0-3b8c-4068-9c2c-1e07dfba18a4","Type":"ContainerDied","Data":"20001436ea007dee95f34238e1d1cdaa96eef1b75c0520dd4a1fb3b08b17a534"} Dec 09 12:28:43 crc kubenswrapper[4970]: I1209 12:28:43.884082 4970 generic.go:334] "Generic (PLEG): container finished" podID="0b1f388d-9583-482f-9089-714ba98cafff" containerID="18e8126cec41af8171cf75aecdf41db0f438ad6dcc4c315a2ebe64e03d5bc1bc" exitCode=0 Dec 09 12:28:43 crc kubenswrapper[4970]: I1209 12:28:43.884326 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-gzhss" event={"ID":"0b1f388d-9583-482f-9089-714ba98cafff","Type":"ContainerDied","Data":"18e8126cec41af8171cf75aecdf41db0f438ad6dcc4c315a2ebe64e03d5bc1bc"} Dec 09 12:28:44 crc kubenswrapper[4970]: I1209 12:28:44.853953 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dd4c8c86f-7zktb"] Dec 09 12:28:44 crc kubenswrapper[4970]: I1209 12:28:44.856537 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dd4c8c86f-7zktb" Dec 09 12:28:44 crc kubenswrapper[4970]: I1209 12:28:44.860014 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 09 12:28:44 crc kubenswrapper[4970]: I1209 12:28:44.867990 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 09 12:28:44 crc kubenswrapper[4970]: I1209 12:28:44.868780 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dd4c8c86f-7zktb"] Dec 09 12:28:45 crc kubenswrapper[4970]: I1209 12:28:45.011011 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/03bc9e94-3198-4707-b3b4-19ee20b49d4d-public-tls-certs\") pod \"neutron-dd4c8c86f-7zktb\" (UID: \"03bc9e94-3198-4707-b3b4-19ee20b49d4d\") " pod="openstack/neutron-dd4c8c86f-7zktb" Dec 09 12:28:45 crc kubenswrapper[4970]: I1209 12:28:45.011217 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/03bc9e94-3198-4707-b3b4-19ee20b49d4d-httpd-config\") pod \"neutron-dd4c8c86f-7zktb\" (UID: \"03bc9e94-3198-4707-b3b4-19ee20b49d4d\") " pod="openstack/neutron-dd4c8c86f-7zktb" Dec 09 12:28:45 crc kubenswrapper[4970]: I1209 12:28:45.011277 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w28c\" (UniqueName: \"kubernetes.io/projected/03bc9e94-3198-4707-b3b4-19ee20b49d4d-kube-api-access-2w28c\") pod \"neutron-dd4c8c86f-7zktb\" (UID: \"03bc9e94-3198-4707-b3b4-19ee20b49d4d\") " pod="openstack/neutron-dd4c8c86f-7zktb" Dec 09 12:28:45 crc kubenswrapper[4970]: I1209 12:28:45.011354 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03bc9e94-3198-4707-b3b4-19ee20b49d4d-combined-ca-bundle\") pod \"neutron-dd4c8c86f-7zktb\" (UID: 
\"03bc9e94-3198-4707-b3b4-19ee20b49d4d\") " pod="openstack/neutron-dd4c8c86f-7zktb" Dec 09 12:28:45 crc kubenswrapper[4970]: I1209 12:28:45.011406 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/03bc9e94-3198-4707-b3b4-19ee20b49d4d-ovndb-tls-certs\") pod \"neutron-dd4c8c86f-7zktb\" (UID: \"03bc9e94-3198-4707-b3b4-19ee20b49d4d\") " pod="openstack/neutron-dd4c8c86f-7zktb" Dec 09 12:28:45 crc kubenswrapper[4970]: I1209 12:28:45.011493 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/03bc9e94-3198-4707-b3b4-19ee20b49d4d-config\") pod \"neutron-dd4c8c86f-7zktb\" (UID: \"03bc9e94-3198-4707-b3b4-19ee20b49d4d\") " pod="openstack/neutron-dd4c8c86f-7zktb" Dec 09 12:28:45 crc kubenswrapper[4970]: I1209 12:28:45.011536 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/03bc9e94-3198-4707-b3b4-19ee20b49d4d-internal-tls-certs\") pod \"neutron-dd4c8c86f-7zktb\" (UID: \"03bc9e94-3198-4707-b3b4-19ee20b49d4d\") " pod="openstack/neutron-dd4c8c86f-7zktb" Dec 09 12:28:45 crc kubenswrapper[4970]: I1209 12:28:45.113781 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/03bc9e94-3198-4707-b3b4-19ee20b49d4d-httpd-config\") pod \"neutron-dd4c8c86f-7zktb\" (UID: \"03bc9e94-3198-4707-b3b4-19ee20b49d4d\") " pod="openstack/neutron-dd4c8c86f-7zktb" Dec 09 12:28:45 crc kubenswrapper[4970]: I1209 12:28:45.115231 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w28c\" (UniqueName: \"kubernetes.io/projected/03bc9e94-3198-4707-b3b4-19ee20b49d4d-kube-api-access-2w28c\") pod \"neutron-dd4c8c86f-7zktb\" (UID: \"03bc9e94-3198-4707-b3b4-19ee20b49d4d\") " pod="openstack/neutron-dd4c8c86f-7zktb" Dec 09 12:28:45 crc kubenswrapper[4970]: I1209 12:28:45.115318 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03bc9e94-3198-4707-b3b4-19ee20b49d4d-combined-ca-bundle\") pod \"neutron-dd4c8c86f-7zktb\" (UID: \"03bc9e94-3198-4707-b3b4-19ee20b49d4d\") " pod="openstack/neutron-dd4c8c86f-7zktb" Dec 09 12:28:45 crc kubenswrapper[4970]: I1209 12:28:45.115902 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/03bc9e94-3198-4707-b3b4-19ee20b49d4d-ovndb-tls-certs\") pod \"neutron-dd4c8c86f-7zktb\" (UID: \"03bc9e94-3198-4707-b3b4-19ee20b49d4d\") " pod="openstack/neutron-dd4c8c86f-7zktb" Dec 09 12:28:45 crc kubenswrapper[4970]: I1209 12:28:45.116197 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/03bc9e94-3198-4707-b3b4-19ee20b49d4d-config\") pod \"neutron-dd4c8c86f-7zktb\" (UID: \"03bc9e94-3198-4707-b3b4-19ee20b49d4d\") " pod="openstack/neutron-dd4c8c86f-7zktb" Dec 09 12:28:45 crc kubenswrapper[4970]: I1209 12:28:45.116782 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/03bc9e94-3198-4707-b3b4-19ee20b49d4d-internal-tls-certs\") pod \"neutron-dd4c8c86f-7zktb\" (UID: \"03bc9e94-3198-4707-b3b4-19ee20b49d4d\") " pod="openstack/neutron-dd4c8c86f-7zktb" Dec 09 12:28:45 crc 
kubenswrapper[4970]: I1209 12:28:45.116951 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/03bc9e94-3198-4707-b3b4-19ee20b49d4d-public-tls-certs\") pod \"neutron-dd4c8c86f-7zktb\" (UID: \"03bc9e94-3198-4707-b3b4-19ee20b49d4d\") " pod="openstack/neutron-dd4c8c86f-7zktb" Dec 09 12:28:45 crc kubenswrapper[4970]: I1209 12:28:45.125309 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/03bc9e94-3198-4707-b3b4-19ee20b49d4d-httpd-config\") pod \"neutron-dd4c8c86f-7zktb\" (UID: \"03bc9e94-3198-4707-b3b4-19ee20b49d4d\") " pod="openstack/neutron-dd4c8c86f-7zktb" Dec 09 12:28:45 crc kubenswrapper[4970]: I1209 12:28:45.128067 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/03bc9e94-3198-4707-b3b4-19ee20b49d4d-public-tls-certs\") pod \"neutron-dd4c8c86f-7zktb\" (UID: \"03bc9e94-3198-4707-b3b4-19ee20b49d4d\") " pod="openstack/neutron-dd4c8c86f-7zktb" Dec 09 12:28:45 crc kubenswrapper[4970]: I1209 12:28:45.132390 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/03bc9e94-3198-4707-b3b4-19ee20b49d4d-config\") pod \"neutron-dd4c8c86f-7zktb\" (UID: \"03bc9e94-3198-4707-b3b4-19ee20b49d4d\") " pod="openstack/neutron-dd4c8c86f-7zktb" Dec 09 12:28:45 crc kubenswrapper[4970]: I1209 12:28:45.134359 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03bc9e94-3198-4707-b3b4-19ee20b49d4d-combined-ca-bundle\") pod \"neutron-dd4c8c86f-7zktb\" (UID: \"03bc9e94-3198-4707-b3b4-19ee20b49d4d\") " pod="openstack/neutron-dd4c8c86f-7zktb" Dec 09 12:28:45 crc kubenswrapper[4970]: I1209 12:28:45.134408 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w28c\" (UniqueName: \"kubernetes.io/projected/03bc9e94-3198-4707-b3b4-19ee20b49d4d-kube-api-access-2w28c\") pod \"neutron-dd4c8c86f-7zktb\" (UID: \"03bc9e94-3198-4707-b3b4-19ee20b49d4d\") " pod="openstack/neutron-dd4c8c86f-7zktb" Dec 09 12:28:45 crc kubenswrapper[4970]: I1209 12:28:45.149035 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/03bc9e94-3198-4707-b3b4-19ee20b49d4d-internal-tls-certs\") pod \"neutron-dd4c8c86f-7zktb\" (UID: \"03bc9e94-3198-4707-b3b4-19ee20b49d4d\") " pod="openstack/neutron-dd4c8c86f-7zktb" Dec 09 12:28:45 crc kubenswrapper[4970]: I1209 12:28:45.149733 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/03bc9e94-3198-4707-b3b4-19ee20b49d4d-ovndb-tls-certs\") pod \"neutron-dd4c8c86f-7zktb\" (UID: \"03bc9e94-3198-4707-b3b4-19ee20b49d4d\") " pod="openstack/neutron-dd4c8c86f-7zktb" Dec 09 12:28:45 crc kubenswrapper[4970]: I1209 12:28:45.186760 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dd4c8c86f-7zktb" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.011407 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.012076 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.012131 4970 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.013781 4970 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cb0f9b4763d3228bb2f722a577d4a09c1556ae0fa1f243c7931b87527311654a"} pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.013842 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" containerID="cri-o://cb0f9b4763d3228bb2f722a577d4a09c1556ae0fa1f243c7931b87527311654a" gracePeriod=600 Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.586159 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4kwdc" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.623797 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-gzhss" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.760025 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0b1f388d-9583-482f-9089-714ba98cafff-dns-swift-storage-0\") pod \"0b1f388d-9583-482f-9089-714ba98cafff\" (UID: \"0b1f388d-9583-482f-9089-714ba98cafff\") " Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.760332 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b1f388d-9583-482f-9089-714ba98cafff-config\") pod \"0b1f388d-9583-482f-9089-714ba98cafff\" (UID: \"0b1f388d-9583-482f-9089-714ba98cafff\") " Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.760397 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41a40ba0-3b8c-4068-9c2c-1e07dfba18a4-combined-ca-bundle\") pod \"41a40ba0-3b8c-4068-9c2c-1e07dfba18a4\" (UID: \"41a40ba0-3b8c-4068-9c2c-1e07dfba18a4\") " Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.760424 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41a40ba0-3b8c-4068-9c2c-1e07dfba18a4-scripts\") pod \"41a40ba0-3b8c-4068-9c2c-1e07dfba18a4\" (UID: \"41a40ba0-3b8c-4068-9c2c-1e07dfba18a4\") " Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.760482 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcxpj\" (UniqueName: \"kubernetes.io/projected/0b1f388d-9583-482f-9089-714ba98cafff-kube-api-access-vcxpj\") pod \"0b1f388d-9583-482f-9089-714ba98cafff\" (UID: \"0b1f388d-9583-482f-9089-714ba98cafff\") " Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.760562 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/41a40ba0-3b8c-4068-9c2c-1e07dfba18a4-fernet-keys\") pod \"41a40ba0-3b8c-4068-9c2c-1e07dfba18a4\" (UID: \"41a40ba0-3b8c-4068-9c2c-1e07dfba18a4\") " Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.760632 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfmmz\" (UniqueName: \"kubernetes.io/projected/41a40ba0-3b8c-4068-9c2c-1e07dfba18a4-kube-api-access-rfmmz\") pod \"41a40ba0-3b8c-4068-9c2c-1e07dfba18a4\" (UID: \"41a40ba0-3b8c-4068-9c2c-1e07dfba18a4\") " Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.760699 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b1f388d-9583-482f-9089-714ba98cafff-dns-svc\") pod \"0b1f388d-9583-482f-9089-714ba98cafff\" (UID: \"0b1f388d-9583-482f-9089-714ba98cafff\") " Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.760728 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0b1f388d-9583-482f-9089-714ba98cafff-ovsdbserver-nb\") pod \"0b1f388d-9583-482f-9089-714ba98cafff\" (UID: \"0b1f388d-9583-482f-9089-714ba98cafff\") " Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.760785 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/41a40ba0-3b8c-4068-9c2c-1e07dfba18a4-credential-keys\") pod \"41a40ba0-3b8c-4068-9c2c-1e07dfba18a4\" (UID: 
\"41a40ba0-3b8c-4068-9c2c-1e07dfba18a4\") " Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.760881 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41a40ba0-3b8c-4068-9c2c-1e07dfba18a4-config-data\") pod \"41a40ba0-3b8c-4068-9c2c-1e07dfba18a4\" (UID: \"41a40ba0-3b8c-4068-9c2c-1e07dfba18a4\") " Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.760914 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0b1f388d-9583-482f-9089-714ba98cafff-ovsdbserver-sb\") pod \"0b1f388d-9583-482f-9089-714ba98cafff\" (UID: \"0b1f388d-9583-482f-9089-714ba98cafff\") " Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.767552 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41a40ba0-3b8c-4068-9c2c-1e07dfba18a4-kube-api-access-rfmmz" (OuterVolumeSpecName: "kube-api-access-rfmmz") pod "41a40ba0-3b8c-4068-9c2c-1e07dfba18a4" (UID: "41a40ba0-3b8c-4068-9c2c-1e07dfba18a4"). InnerVolumeSpecName "kube-api-access-rfmmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.769430 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41a40ba0-3b8c-4068-9c2c-1e07dfba18a4-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "41a40ba0-3b8c-4068-9c2c-1e07dfba18a4" (UID: "41a40ba0-3b8c-4068-9c2c-1e07dfba18a4"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.798389 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41a40ba0-3b8c-4068-9c2c-1e07dfba18a4-scripts" (OuterVolumeSpecName: "scripts") pod "41a40ba0-3b8c-4068-9c2c-1e07dfba18a4" (UID: "41a40ba0-3b8c-4068-9c2c-1e07dfba18a4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.800229 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b1f388d-9583-482f-9089-714ba98cafff-kube-api-access-vcxpj" (OuterVolumeSpecName: "kube-api-access-vcxpj") pod "0b1f388d-9583-482f-9089-714ba98cafff" (UID: "0b1f388d-9583-482f-9089-714ba98cafff"). InnerVolumeSpecName "kube-api-access-vcxpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.802840 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41a40ba0-3b8c-4068-9c2c-1e07dfba18a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41a40ba0-3b8c-4068-9c2c-1e07dfba18a4" (UID: "41a40ba0-3b8c-4068-9c2c-1e07dfba18a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.828210 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41a40ba0-3b8c-4068-9c2c-1e07dfba18a4-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "41a40ba0-3b8c-4068-9c2c-1e07dfba18a4" (UID: "41a40ba0-3b8c-4068-9c2c-1e07dfba18a4"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.830393 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41a40ba0-3b8c-4068-9c2c-1e07dfba18a4-config-data" (OuterVolumeSpecName: "config-data") pod "41a40ba0-3b8c-4068-9c2c-1e07dfba18a4" (UID: "41a40ba0-3b8c-4068-9c2c-1e07dfba18a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.837063 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b1f388d-9583-482f-9089-714ba98cafff-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0b1f388d-9583-482f-9089-714ba98cafff" (UID: "0b1f388d-9583-482f-9089-714ba98cafff"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.841969 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b1f388d-9583-482f-9089-714ba98cafff-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0b1f388d-9583-482f-9089-714ba98cafff" (UID: "0b1f388d-9583-482f-9089-714ba98cafff"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.843817 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b1f388d-9583-482f-9089-714ba98cafff-config" (OuterVolumeSpecName: "config") pod "0b1f388d-9583-482f-9089-714ba98cafff" (UID: "0b1f388d-9583-482f-9089-714ba98cafff"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.849962 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b1f388d-9583-482f-9089-714ba98cafff-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0b1f388d-9583-482f-9089-714ba98cafff" (UID: "0b1f388d-9583-482f-9089-714ba98cafff"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.851779 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b1f388d-9583-482f-9089-714ba98cafff-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0b1f388d-9583-482f-9089-714ba98cafff" (UID: "0b1f388d-9583-482f-9089-714ba98cafff"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.865137 4970 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/41a40ba0-3b8c-4068-9c2c-1e07dfba18a4-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.865160 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41a40ba0-3b8c-4068-9c2c-1e07dfba18a4-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.865172 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0b1f388d-9583-482f-9089-714ba98cafff-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.865186 4970 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0b1f388d-9583-482f-9089-714ba98cafff-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.865198 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b1f388d-9583-482f-9089-714ba98cafff-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.865208 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41a40ba0-3b8c-4068-9c2c-1e07dfba18a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.865217 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41a40ba0-3b8c-4068-9c2c-1e07dfba18a4-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.865233 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcxpj\" (UniqueName: \"kubernetes.io/projected/0b1f388d-9583-482f-9089-714ba98cafff-kube-api-access-vcxpj\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.865381 4970 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/41a40ba0-3b8c-4068-9c2c-1e07dfba18a4-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.865395 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfmmz\" (UniqueName: \"kubernetes.io/projected/41a40ba0-3b8c-4068-9c2c-1e07dfba18a4-kube-api-access-rfmmz\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.865408 4970 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0b1f388d-9583-482f-9089-714ba98cafff-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.865421 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0b1f388d-9583-482f-9089-714ba98cafff-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.877607 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.966436 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e179577e-4bb5-4486-9c3f-536899514d12-combined-ca-bundle\") pod \"e179577e-4bb5-4486-9c3f-536899514d12\" (UID: \"e179577e-4bb5-4486-9c3f-536899514d12\") " Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.966483 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e179577e-4bb5-4486-9c3f-536899514d12-httpd-run\") pod \"e179577e-4bb5-4486-9c3f-536899514d12\" (UID: \"e179577e-4bb5-4486-9c3f-536899514d12\") " Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.966521 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e179577e-4bb5-4486-9c3f-536899514d12-logs\") pod \"e179577e-4bb5-4486-9c3f-536899514d12\" (UID: \"e179577e-4bb5-4486-9c3f-536899514d12\") " Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.966557 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e179577e-4bb5-4486-9c3f-536899514d12-scripts\") pod \"e179577e-4bb5-4486-9c3f-536899514d12\" (UID: \"e179577e-4bb5-4486-9c3f-536899514d12\") " Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.966677 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"e179577e-4bb5-4486-9c3f-536899514d12\" (UID: \"e179577e-4bb5-4486-9c3f-536899514d12\") " Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.966706 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcsmf\" (UniqueName: \"kubernetes.io/projected/e179577e-4bb5-4486-9c3f-536899514d12-kube-api-access-fcsmf\") pod \"e179577e-4bb5-4486-9c3f-536899514d12\" (UID: \"e179577e-4bb5-4486-9c3f-536899514d12\") " Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.966792 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e179577e-4bb5-4486-9c3f-536899514d12-config-data\") pod \"e179577e-4bb5-4486-9c3f-536899514d12\" (UID: \"e179577e-4bb5-4486-9c3f-536899514d12\") " Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.969096 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e179577e-4bb5-4486-9c3f-536899514d12-logs" (OuterVolumeSpecName: "logs") pod "e179577e-4bb5-4486-9c3f-536899514d12" (UID: "e179577e-4bb5-4486-9c3f-536899514d12"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.969568 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e179577e-4bb5-4486-9c3f-536899514d12-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e179577e-4bb5-4486-9c3f-536899514d12" (UID: "e179577e-4bb5-4486-9c3f-536899514d12"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.974784 4970 generic.go:334] "Generic (PLEG): container finished" podID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerID="cb0f9b4763d3228bb2f722a577d4a09c1556ae0fa1f243c7931b87527311654a" exitCode=0 Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.974851 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerDied","Data":"cb0f9b4763d3228bb2f722a577d4a09c1556ae0fa1f243c7931b87527311654a"} Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.975016 4970 scope.go:117] "RemoveContainer" containerID="956b314977002e8d06761bbcdccd0bb4775a0aa2c665b4316e98475f27106ef3" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.975816 4970 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e179577e-4bb5-4486-9c3f-536899514d12-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.975846 4970 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e179577e-4bb5-4486-9c3f-536899514d12-logs\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.981686 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e179577e-4bb5-4486-9c3f-536899514d12-kube-api-access-fcsmf" (OuterVolumeSpecName: "kube-api-access-fcsmf") pod "e179577e-4bb5-4486-9c3f-536899514d12" (UID: "e179577e-4bb5-4486-9c3f-536899514d12"). InnerVolumeSpecName "kube-api-access-fcsmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.981703 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e179577e-4bb5-4486-9c3f-536899514d12-scripts" (OuterVolumeSpecName: "scripts") pod "e179577e-4bb5-4486-9c3f-536899514d12" (UID: "e179577e-4bb5-4486-9c3f-536899514d12"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.988607 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "e179577e-4bb5-4486-9c3f-536899514d12" (UID: "e179577e-4bb5-4486-9c3f-536899514d12"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.990006 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.990344 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e179577e-4bb5-4486-9c3f-536899514d12","Type":"ContainerDied","Data":"64fac7de2b140ccf6e824238109f842d756b5d6abc1af865f0030a42e1e12385"} Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.993772 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4kwdc" event={"ID":"41a40ba0-3b8c-4068-9c2c-1e07dfba18a4","Type":"ContainerDied","Data":"30446614af73be2a40d563d1e655912d4ca82e50311ac75dcfe482fadf05036c"} Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.993813 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30446614af73be2a40d563d1e655912d4ca82e50311ac75dcfe482fadf05036c" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.993869 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4kwdc" Dec 09 12:28:46 crc kubenswrapper[4970]: I1209 12:28:46.994412 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-78687fd956-j7ckc"] Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.004657 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-gzhss" event={"ID":"0b1f388d-9583-482f-9089-714ba98cafff","Type":"ContainerDied","Data":"9a583ad132173070ceb4f17acb01432700559f75117b82ec07ee2b753ab12fac"} Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.004920 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-gzhss" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.020605 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e179577e-4bb5-4486-9c3f-536899514d12-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e179577e-4bb5-4486-9c3f-536899514d12" (UID: "e179577e-4bb5-4486-9c3f-536899514d12"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.054021 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e179577e-4bb5-4486-9c3f-536899514d12-config-data" (OuterVolumeSpecName: "config-data") pod "e179577e-4bb5-4486-9c3f-536899514d12" (UID: "e179577e-4bb5-4486-9c3f-536899514d12"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.057387 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-gzhss"] Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.073953 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-gzhss"] Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.077737 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e179577e-4bb5-4486-9c3f-536899514d12-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.077769 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e179577e-4bb5-4486-9c3f-536899514d12-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.077801 4970 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.077811 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcsmf\" (UniqueName: \"kubernetes.io/projected/e179577e-4bb5-4486-9c3f-536899514d12-kube-api-access-fcsmf\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.077820 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e179577e-4bb5-4486-9c3f-536899514d12-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.109307 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6d7d95f968-k2gh2"] Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.137864 4970 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Dec 09 12:28:47 crc kubenswrapper[4970]: W1209 12:28:47.146440 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d34e443_c923_42f0_83be_3d060424380b.slice/crio-3d059000e0f7ef3d148cf33997353fd5710a9f1591a14b4dd6429c0ea0511ef4 WatchSource:0}: Error finding container 3d059000e0f7ef3d148cf33997353fd5710a9f1591a14b4dd6429c0ea0511ef4: Status 404 returned error can't find the container with id 3d059000e0f7ef3d148cf33997353fd5710a9f1591a14b4dd6429c0ea0511ef4 Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.189887 4970 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.211300 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-hh2d8"] Dec 09 12:28:47 crc kubenswrapper[4970]: W1209 12:28:47.219517 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7bb9b20_9449_48b4_b3ba_dc547cf39558.slice/crio-481cd5910ecd9f792951d354066695b858d1c521dbb9525ce057c24440e68a9b WatchSource:0}: Error finding container 481cd5910ecd9f792951d354066695b858d1c521dbb9525ce057c24440e68a9b: Status 404 returned error can't find the container with id 
481cd5910ecd9f792951d354066695b858d1c521dbb9525ce057c24440e68a9b Dec 09 12:28:47 crc kubenswrapper[4970]: W1209 12:28:47.292559 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03bc9e94_3198_4707_b3b4_19ee20b49d4d.slice/crio-763f49c25b04556b38d20a1452559cdb44ba9fc329b0edee41935b1dae88b612 WatchSource:0}: Error finding container 763f49c25b04556b38d20a1452559cdb44ba9fc329b0edee41935b1dae88b612: Status 404 returned error can't find the container with id 763f49c25b04556b38d20a1452559cdb44ba9fc329b0edee41935b1dae88b612 Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.295727 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dd4c8c86f-7zktb"] Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.695751 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-565ff75dc9-922w2"] Dec 09 12:28:47 crc kubenswrapper[4970]: E1209 12:28:47.696520 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41a40ba0-3b8c-4068-9c2c-1e07dfba18a4" containerName="keystone-bootstrap" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.696544 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="41a40ba0-3b8c-4068-9c2c-1e07dfba18a4" containerName="keystone-bootstrap" Dec 09 12:28:47 crc kubenswrapper[4970]: E1209 12:28:47.696570 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e179577e-4bb5-4486-9c3f-536899514d12" containerName="glance-httpd" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.696579 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="e179577e-4bb5-4486-9c3f-536899514d12" containerName="glance-httpd" Dec 09 12:28:47 crc kubenswrapper[4970]: E1209 12:28:47.696611 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e179577e-4bb5-4486-9c3f-536899514d12" containerName="glance-log" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.696619 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="e179577e-4bb5-4486-9c3f-536899514d12" containerName="glance-log" Dec 09 12:28:47 crc kubenswrapper[4970]: E1209 12:28:47.696633 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b1f388d-9583-482f-9089-714ba98cafff" containerName="dnsmasq-dns" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.696640 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b1f388d-9583-482f-9089-714ba98cafff" containerName="dnsmasq-dns" Dec 09 12:28:47 crc kubenswrapper[4970]: E1209 12:28:47.696670 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b1f388d-9583-482f-9089-714ba98cafff" containerName="init" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.696678 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b1f388d-9583-482f-9089-714ba98cafff" containerName="init" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.696932 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b1f388d-9583-482f-9089-714ba98cafff" containerName="dnsmasq-dns" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.696959 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="41a40ba0-3b8c-4068-9c2c-1e07dfba18a4" containerName="keystone-bootstrap" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.696979 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="e179577e-4bb5-4486-9c3f-536899514d12" containerName="glance-log" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.697010 4970 
memory_manager.go:354] "RemoveStaleState removing state" podUID="e179577e-4bb5-4486-9c3f-536899514d12" containerName="glance-httpd" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.697949 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.697961 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-565ff75dc9-922w2" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.699759 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.699797 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.700052 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.700065 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.700192 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-m6jjm" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.700194 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.745333 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-565ff75dc9-922w2"] Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.800506 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.801892 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80bba95d-00c0-48b9-aac2-c6fbd912ca8c-combined-ca-bundle\") pod \"80bba95d-00c0-48b9-aac2-c6fbd912ca8c\" (UID: \"80bba95d-00c0-48b9-aac2-c6fbd912ca8c\") " Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.801975 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klf7n\" (UniqueName: \"kubernetes.io/projected/80bba95d-00c0-48b9-aac2-c6fbd912ca8c-kube-api-access-klf7n\") pod \"80bba95d-00c0-48b9-aac2-c6fbd912ca8c\" (UID: \"80bba95d-00c0-48b9-aac2-c6fbd912ca8c\") " Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.802055 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"80bba95d-00c0-48b9-aac2-c6fbd912ca8c\" (UID: \"80bba95d-00c0-48b9-aac2-c6fbd912ca8c\") " Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.802263 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80bba95d-00c0-48b9-aac2-c6fbd912ca8c-scripts\") pod \"80bba95d-00c0-48b9-aac2-c6fbd912ca8c\" (UID: \"80bba95d-00c0-48b9-aac2-c6fbd912ca8c\") " Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.802360 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80bba95d-00c0-48b9-aac2-c6fbd912ca8c-logs\") pod \"80bba95d-00c0-48b9-aac2-c6fbd912ca8c\" (UID: \"80bba95d-00c0-48b9-aac2-c6fbd912ca8c\") " Dec 09 
12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.802396 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/80bba95d-00c0-48b9-aac2-c6fbd912ca8c-httpd-run\") pod \"80bba95d-00c0-48b9-aac2-c6fbd912ca8c\" (UID: \"80bba95d-00c0-48b9-aac2-c6fbd912ca8c\") " Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.802459 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80bba95d-00c0-48b9-aac2-c6fbd912ca8c-config-data\") pod \"80bba95d-00c0-48b9-aac2-c6fbd912ca8c\" (UID: \"80bba95d-00c0-48b9-aac2-c6fbd912ca8c\") " Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.802978 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80bba95d-00c0-48b9-aac2-c6fbd912ca8c-logs" (OuterVolumeSpecName: "logs") pod "80bba95d-00c0-48b9-aac2-c6fbd912ca8c" (UID: "80bba95d-00c0-48b9-aac2-c6fbd912ca8c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.803177 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80bba95d-00c0-48b9-aac2-c6fbd912ca8c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "80bba95d-00c0-48b9-aac2-c6fbd912ca8c" (UID: "80bba95d-00c0-48b9-aac2-c6fbd912ca8c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.804648 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c2aa972b-8e19-46ea-b7e5-8b302c81dc0a-credential-keys\") pod \"keystone-565ff75dc9-922w2\" (UID: \"c2aa972b-8e19-46ea-b7e5-8b302c81dc0a\") " pod="openstack/keystone-565ff75dc9-922w2" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.804970 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2aa972b-8e19-46ea-b7e5-8b302c81dc0a-config-data\") pod \"keystone-565ff75dc9-922w2\" (UID: \"c2aa972b-8e19-46ea-b7e5-8b302c81dc0a\") " pod="openstack/keystone-565ff75dc9-922w2" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.805092 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c2aa972b-8e19-46ea-b7e5-8b302c81dc0a-fernet-keys\") pod \"keystone-565ff75dc9-922w2\" (UID: \"c2aa972b-8e19-46ea-b7e5-8b302c81dc0a\") " pod="openstack/keystone-565ff75dc9-922w2" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.805178 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2aa972b-8e19-46ea-b7e5-8b302c81dc0a-scripts\") pod \"keystone-565ff75dc9-922w2\" (UID: \"c2aa972b-8e19-46ea-b7e5-8b302c81dc0a\") " pod="openstack/keystone-565ff75dc9-922w2" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.805230 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2aa972b-8e19-46ea-b7e5-8b302c81dc0a-combined-ca-bundle\") pod \"keystone-565ff75dc9-922w2\" (UID: \"c2aa972b-8e19-46ea-b7e5-8b302c81dc0a\") " pod="openstack/keystone-565ff75dc9-922w2" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 
12:28:47.805308 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrzmj\" (UniqueName: \"kubernetes.io/projected/c2aa972b-8e19-46ea-b7e5-8b302c81dc0a-kube-api-access-jrzmj\") pod \"keystone-565ff75dc9-922w2\" (UID: \"c2aa972b-8e19-46ea-b7e5-8b302c81dc0a\") " pod="openstack/keystone-565ff75dc9-922w2" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.805415 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2aa972b-8e19-46ea-b7e5-8b302c81dc0a-public-tls-certs\") pod \"keystone-565ff75dc9-922w2\" (UID: \"c2aa972b-8e19-46ea-b7e5-8b302c81dc0a\") " pod="openstack/keystone-565ff75dc9-922w2" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.805473 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2aa972b-8e19-46ea-b7e5-8b302c81dc0a-internal-tls-certs\") pod \"keystone-565ff75dc9-922w2\" (UID: \"c2aa972b-8e19-46ea-b7e5-8b302c81dc0a\") " pod="openstack/keystone-565ff75dc9-922w2" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.805631 4970 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80bba95d-00c0-48b9-aac2-c6fbd912ca8c-logs\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.805649 4970 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/80bba95d-00c0-48b9-aac2-c6fbd912ca8c-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.825077 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "80bba95d-00c0-48b9-aac2-c6fbd912ca8c" (UID: "80bba95d-00c0-48b9-aac2-c6fbd912ca8c"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.825475 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80bba95d-00c0-48b9-aac2-c6fbd912ca8c-scripts" (OuterVolumeSpecName: "scripts") pod "80bba95d-00c0-48b9-aac2-c6fbd912ca8c" (UID: "80bba95d-00c0-48b9-aac2-c6fbd912ca8c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.827513 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80bba95d-00c0-48b9-aac2-c6fbd912ca8c-kube-api-access-klf7n" (OuterVolumeSpecName: "kube-api-access-klf7n") pod "80bba95d-00c0-48b9-aac2-c6fbd912ca8c" (UID: "80bba95d-00c0-48b9-aac2-c6fbd912ca8c"). InnerVolumeSpecName "kube-api-access-klf7n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.852858 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b1f388d-9583-482f-9089-714ba98cafff" path="/var/lib/kubelet/pods/0b1f388d-9583-482f-9089-714ba98cafff/volumes" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.907022 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2aa972b-8e19-46ea-b7e5-8b302c81dc0a-public-tls-certs\") pod \"keystone-565ff75dc9-922w2\" (UID: \"c2aa972b-8e19-46ea-b7e5-8b302c81dc0a\") " pod="openstack/keystone-565ff75dc9-922w2" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.907082 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2aa972b-8e19-46ea-b7e5-8b302c81dc0a-internal-tls-certs\") pod \"keystone-565ff75dc9-922w2\" (UID: \"c2aa972b-8e19-46ea-b7e5-8b302c81dc0a\") " pod="openstack/keystone-565ff75dc9-922w2" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.907171 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c2aa972b-8e19-46ea-b7e5-8b302c81dc0a-credential-keys\") pod \"keystone-565ff75dc9-922w2\" (UID: \"c2aa972b-8e19-46ea-b7e5-8b302c81dc0a\") " pod="openstack/keystone-565ff75dc9-922w2" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.907274 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2aa972b-8e19-46ea-b7e5-8b302c81dc0a-config-data\") pod \"keystone-565ff75dc9-922w2\" (UID: \"c2aa972b-8e19-46ea-b7e5-8b302c81dc0a\") " pod="openstack/keystone-565ff75dc9-922w2" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.907332 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c2aa972b-8e19-46ea-b7e5-8b302c81dc0a-fernet-keys\") pod \"keystone-565ff75dc9-922w2\" (UID: \"c2aa972b-8e19-46ea-b7e5-8b302c81dc0a\") " pod="openstack/keystone-565ff75dc9-922w2" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.907383 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2aa972b-8e19-46ea-b7e5-8b302c81dc0a-scripts\") pod \"keystone-565ff75dc9-922w2\" (UID: \"c2aa972b-8e19-46ea-b7e5-8b302c81dc0a\") " pod="openstack/keystone-565ff75dc9-922w2" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.907413 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2aa972b-8e19-46ea-b7e5-8b302c81dc0a-combined-ca-bundle\") pod \"keystone-565ff75dc9-922w2\" (UID: \"c2aa972b-8e19-46ea-b7e5-8b302c81dc0a\") " pod="openstack/keystone-565ff75dc9-922w2" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.907447 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrzmj\" (UniqueName: \"kubernetes.io/projected/c2aa972b-8e19-46ea-b7e5-8b302c81dc0a-kube-api-access-jrzmj\") pod \"keystone-565ff75dc9-922w2\" (UID: \"c2aa972b-8e19-46ea-b7e5-8b302c81dc0a\") " pod="openstack/keystone-565ff75dc9-922w2" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.907536 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klf7n\" (UniqueName: 
\"kubernetes.io/projected/80bba95d-00c0-48b9-aac2-c6fbd912ca8c-kube-api-access-klf7n\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.907569 4970 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.907582 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/80bba95d-00c0-48b9-aac2-c6fbd912ca8c-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.918021 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2aa972b-8e19-46ea-b7e5-8b302c81dc0a-public-tls-certs\") pod \"keystone-565ff75dc9-922w2\" (UID: \"c2aa972b-8e19-46ea-b7e5-8b302c81dc0a\") " pod="openstack/keystone-565ff75dc9-922w2" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.920903 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2aa972b-8e19-46ea-b7e5-8b302c81dc0a-config-data\") pod \"keystone-565ff75dc9-922w2\" (UID: \"c2aa972b-8e19-46ea-b7e5-8b302c81dc0a\") " pod="openstack/keystone-565ff75dc9-922w2" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.923223 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2aa972b-8e19-46ea-b7e5-8b302c81dc0a-internal-tls-certs\") pod \"keystone-565ff75dc9-922w2\" (UID: \"c2aa972b-8e19-46ea-b7e5-8b302c81dc0a\") " pod="openstack/keystone-565ff75dc9-922w2" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.923629 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2aa972b-8e19-46ea-b7e5-8b302c81dc0a-scripts\") pod \"keystone-565ff75dc9-922w2\" (UID: \"c2aa972b-8e19-46ea-b7e5-8b302c81dc0a\") " pod="openstack/keystone-565ff75dc9-922w2" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.925055 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2aa972b-8e19-46ea-b7e5-8b302c81dc0a-combined-ca-bundle\") pod \"keystone-565ff75dc9-922w2\" (UID: \"c2aa972b-8e19-46ea-b7e5-8b302c81dc0a\") " pod="openstack/keystone-565ff75dc9-922w2" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.927196 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c2aa972b-8e19-46ea-b7e5-8b302c81dc0a-fernet-keys\") pod \"keystone-565ff75dc9-922w2\" (UID: \"c2aa972b-8e19-46ea-b7e5-8b302c81dc0a\") " pod="openstack/keystone-565ff75dc9-922w2" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.929560 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c2aa972b-8e19-46ea-b7e5-8b302c81dc0a-credential-keys\") pod \"keystone-565ff75dc9-922w2\" (UID: \"c2aa972b-8e19-46ea-b7e5-8b302c81dc0a\") " pod="openstack/keystone-565ff75dc9-922w2" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.940820 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrzmj\" (UniqueName: \"kubernetes.io/projected/c2aa972b-8e19-46ea-b7e5-8b302c81dc0a-kube-api-access-jrzmj\") pod \"keystone-565ff75dc9-922w2\" (UID: 
\"c2aa972b-8e19-46ea-b7e5-8b302c81dc0a\") " pod="openstack/keystone-565ff75dc9-922w2" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.981668 4970 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.999683 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 12:28:47 crc kubenswrapper[4970]: I1209 12:28:47.999726 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 12:28:48 crc kubenswrapper[4970]: E1209 12:28:48.000178 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80bba95d-00c0-48b9-aac2-c6fbd912ca8c" containerName="glance-log" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.000197 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="80bba95d-00c0-48b9-aac2-c6fbd912ca8c" containerName="glance-log" Dec 09 12:28:48 crc kubenswrapper[4970]: E1209 12:28:48.000220 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80bba95d-00c0-48b9-aac2-c6fbd912ca8c" containerName="glance-httpd" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.000228 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="80bba95d-00c0-48b9-aac2-c6fbd912ca8c" containerName="glance-httpd" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.000680 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="80bba95d-00c0-48b9-aac2-c6fbd912ca8c" containerName="glance-log" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.000720 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="80bba95d-00c0-48b9-aac2-c6fbd912ca8c" containerName="glance-httpd" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.001749 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.001839 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.007778 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.008478 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.010689 4970 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.060284 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80bba95d-00c0-48b9-aac2-c6fbd912ca8c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "80bba95d-00c0-48b9-aac2-c6fbd912ca8c" (UID: "80bba95d-00c0-48b9-aac2-c6fbd912ca8c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.067627 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-hh2d8" event={"ID":"e7bb9b20-9449-48b4-b3ba-dc547cf39558","Type":"ContainerStarted","Data":"481cd5910ecd9f792951d354066695b858d1c521dbb9525ce057c24440e68a9b"} Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.078404 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d7d95f968-k2gh2" event={"ID":"7d34e443-c923-42f0-83be-3d060424380b","Type":"ContainerStarted","Data":"3d059000e0f7ef3d148cf33997353fd5710a9f1591a14b4dd6429c0ea0511ef4"} Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.079594 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-565ff75dc9-922w2" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.092545 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"80bba95d-00c0-48b9-aac2-c6fbd912ca8c","Type":"ContainerDied","Data":"10b87e9954813ba006dafc7d946167587c53c4de5a50dbcfbe6058013703f32d"} Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.092670 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.120991 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78687fd956-j7ckc" event={"ID":"fa1854f6-8032-4a50-8808-cbd83782deb5","Type":"ContainerStarted","Data":"baf5532dcc46ac9dc49097d880639092e481d2393c63e36af5748eb2f2303c24"} Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.127837 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35f7dc61-35aa-42a4-adc2-e96f07905cd0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"35f7dc61-35aa-42a4-adc2-e96f07905cd0\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.127883 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35f7dc61-35aa-42a4-adc2-e96f07905cd0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"35f7dc61-35aa-42a4-adc2-e96f07905cd0\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.127937 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35f7dc61-35aa-42a4-adc2-e96f07905cd0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"35f7dc61-35aa-42a4-adc2-e96f07905cd0\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.128095 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"35f7dc61-35aa-42a4-adc2-e96f07905cd0\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.128122 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n8pj\" (UniqueName: 
\"kubernetes.io/projected/35f7dc61-35aa-42a4-adc2-e96f07905cd0-kube-api-access-5n8pj\") pod \"glance-default-internal-api-0\" (UID: \"35f7dc61-35aa-42a4-adc2-e96f07905cd0\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.128155 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35f7dc61-35aa-42a4-adc2-e96f07905cd0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"35f7dc61-35aa-42a4-adc2-e96f07905cd0\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.128176 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35f7dc61-35aa-42a4-adc2-e96f07905cd0-logs\") pod \"glance-default-internal-api-0\" (UID: \"35f7dc61-35aa-42a4-adc2-e96f07905cd0\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.128210 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35f7dc61-35aa-42a4-adc2-e96f07905cd0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"35f7dc61-35aa-42a4-adc2-e96f07905cd0\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.147393 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dd4c8c86f-7zktb" event={"ID":"03bc9e94-3198-4707-b3b4-19ee20b49d4d","Type":"ContainerStarted","Data":"763f49c25b04556b38d20a1452559cdb44ba9fc329b0edee41935b1dae88b612"} Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.150881 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80bba95d-00c0-48b9-aac2-c6fbd912ca8c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.177386 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80bba95d-00c0-48b9-aac2-c6fbd912ca8c-config-data" (OuterVolumeSpecName: "config-data") pod "80bba95d-00c0-48b9-aac2-c6fbd912ca8c" (UID: "80bba95d-00c0-48b9-aac2-c6fbd912ca8c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.252130 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"35f7dc61-35aa-42a4-adc2-e96f07905cd0\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.252181 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5n8pj\" (UniqueName: \"kubernetes.io/projected/35f7dc61-35aa-42a4-adc2-e96f07905cd0-kube-api-access-5n8pj\") pod \"glance-default-internal-api-0\" (UID: \"35f7dc61-35aa-42a4-adc2-e96f07905cd0\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.252210 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35f7dc61-35aa-42a4-adc2-e96f07905cd0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"35f7dc61-35aa-42a4-adc2-e96f07905cd0\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.252228 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35f7dc61-35aa-42a4-adc2-e96f07905cd0-logs\") pod \"glance-default-internal-api-0\" (UID: \"35f7dc61-35aa-42a4-adc2-e96f07905cd0\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.252269 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35f7dc61-35aa-42a4-adc2-e96f07905cd0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"35f7dc61-35aa-42a4-adc2-e96f07905cd0\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.252324 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35f7dc61-35aa-42a4-adc2-e96f07905cd0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"35f7dc61-35aa-42a4-adc2-e96f07905cd0\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.252340 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35f7dc61-35aa-42a4-adc2-e96f07905cd0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"35f7dc61-35aa-42a4-adc2-e96f07905cd0\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.252580 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35f7dc61-35aa-42a4-adc2-e96f07905cd0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"35f7dc61-35aa-42a4-adc2-e96f07905cd0\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.252708 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80bba95d-00c0-48b9-aac2-c6fbd912ca8c-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.253743 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/35f7dc61-35aa-42a4-adc2-e96f07905cd0-logs\") pod \"glance-default-internal-api-0\" (UID: \"35f7dc61-35aa-42a4-adc2-e96f07905cd0\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.254260 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"35f7dc61-35aa-42a4-adc2-e96f07905cd0\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.255228 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35f7dc61-35aa-42a4-adc2-e96f07905cd0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"35f7dc61-35aa-42a4-adc2-e96f07905cd0\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.259478 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35f7dc61-35aa-42a4-adc2-e96f07905cd0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"35f7dc61-35aa-42a4-adc2-e96f07905cd0\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.264157 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35f7dc61-35aa-42a4-adc2-e96f07905cd0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"35f7dc61-35aa-42a4-adc2-e96f07905cd0\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.266634 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35f7dc61-35aa-42a4-adc2-e96f07905cd0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"35f7dc61-35aa-42a4-adc2-e96f07905cd0\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.276855 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35f7dc61-35aa-42a4-adc2-e96f07905cd0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"35f7dc61-35aa-42a4-adc2-e96f07905cd0\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.277159 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n8pj\" (UniqueName: \"kubernetes.io/projected/35f7dc61-35aa-42a4-adc2-e96f07905cd0-kube-api-access-5n8pj\") pod \"glance-default-internal-api-0\" (UID: \"35f7dc61-35aa-42a4-adc2-e96f07905cd0\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.313414 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"35f7dc61-35aa-42a4-adc2-e96f07905cd0\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.357619 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.519979 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.544509 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.567372 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.586144 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.589355 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.589745 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.629823 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.659905 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0b28cb7f-3918-4f2d-ba79-21503540a126-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0b28cb7f-3918-4f2d-ba79-21503540a126\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.660002 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b28cb7f-3918-4f2d-ba79-21503540a126-config-data\") pod \"glance-default-external-api-0\" (UID: \"0b28cb7f-3918-4f2d-ba79-21503540a126\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.660027 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b28cb7f-3918-4f2d-ba79-21503540a126-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0b28cb7f-3918-4f2d-ba79-21503540a126\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.660136 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl24g\" (UniqueName: \"kubernetes.io/projected/0b28cb7f-3918-4f2d-ba79-21503540a126-kube-api-access-pl24g\") pod \"glance-default-external-api-0\" (UID: \"0b28cb7f-3918-4f2d-ba79-21503540a126\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.660166 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b28cb7f-3918-4f2d-ba79-21503540a126-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0b28cb7f-3918-4f2d-ba79-21503540a126\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.660275 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/0b28cb7f-3918-4f2d-ba79-21503540a126-scripts\") pod \"glance-default-external-api-0\" (UID: \"0b28cb7f-3918-4f2d-ba79-21503540a126\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.660310 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b28cb7f-3918-4f2d-ba79-21503540a126-logs\") pod \"glance-default-external-api-0\" (UID: \"0b28cb7f-3918-4f2d-ba79-21503540a126\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.660339 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"0b28cb7f-3918-4f2d-ba79-21503540a126\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: W1209 12:28:48.740648 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2aa972b_8e19_46ea_b7e5_8b302c81dc0a.slice/crio-33f02e708fad49e21e78d48f9f72a27d5eaf3a0ccc076bdfcfcf224b6bc49983 WatchSource:0}: Error finding container 33f02e708fad49e21e78d48f9f72a27d5eaf3a0ccc076bdfcfcf224b6bc49983: Status 404 returned error can't find the container with id 33f02e708fad49e21e78d48f9f72a27d5eaf3a0ccc076bdfcfcf224b6bc49983 Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.741052 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-565ff75dc9-922w2"] Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.763976 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl24g\" (UniqueName: \"kubernetes.io/projected/0b28cb7f-3918-4f2d-ba79-21503540a126-kube-api-access-pl24g\") pod \"glance-default-external-api-0\" (UID: \"0b28cb7f-3918-4f2d-ba79-21503540a126\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.764041 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b28cb7f-3918-4f2d-ba79-21503540a126-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0b28cb7f-3918-4f2d-ba79-21503540a126\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.764140 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b28cb7f-3918-4f2d-ba79-21503540a126-scripts\") pod \"glance-default-external-api-0\" (UID: \"0b28cb7f-3918-4f2d-ba79-21503540a126\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.764185 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b28cb7f-3918-4f2d-ba79-21503540a126-logs\") pod \"glance-default-external-api-0\" (UID: \"0b28cb7f-3918-4f2d-ba79-21503540a126\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.764225 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"0b28cb7f-3918-4f2d-ba79-21503540a126\") " 
pod="openstack/glance-default-external-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.764315 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0b28cb7f-3918-4f2d-ba79-21503540a126-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0b28cb7f-3918-4f2d-ba79-21503540a126\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.764415 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b28cb7f-3918-4f2d-ba79-21503540a126-config-data\") pod \"glance-default-external-api-0\" (UID: \"0b28cb7f-3918-4f2d-ba79-21503540a126\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.764445 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b28cb7f-3918-4f2d-ba79-21503540a126-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0b28cb7f-3918-4f2d-ba79-21503540a126\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.764988 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b28cb7f-3918-4f2d-ba79-21503540a126-logs\") pod \"glance-default-external-api-0\" (UID: \"0b28cb7f-3918-4f2d-ba79-21503540a126\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.765317 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0b28cb7f-3918-4f2d-ba79-21503540a126-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0b28cb7f-3918-4f2d-ba79-21503540a126\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.765743 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"0b28cb7f-3918-4f2d-ba79-21503540a126\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.780682 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b28cb7f-3918-4f2d-ba79-21503540a126-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"0b28cb7f-3918-4f2d-ba79-21503540a126\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.781009 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b28cb7f-3918-4f2d-ba79-21503540a126-config-data\") pod \"glance-default-external-api-0\" (UID: \"0b28cb7f-3918-4f2d-ba79-21503540a126\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.787543 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b28cb7f-3918-4f2d-ba79-21503540a126-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0b28cb7f-3918-4f2d-ba79-21503540a126\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 
12:28:48.794132 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b28cb7f-3918-4f2d-ba79-21503540a126-scripts\") pod \"glance-default-external-api-0\" (UID: \"0b28cb7f-3918-4f2d-ba79-21503540a126\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.798033 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl24g\" (UniqueName: \"kubernetes.io/projected/0b28cb7f-3918-4f2d-ba79-21503540a126-kube-api-access-pl24g\") pod \"glance-default-external-api-0\" (UID: \"0b28cb7f-3918-4f2d-ba79-21503540a126\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.852907 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"0b28cb7f-3918-4f2d-ba79-21503540a126\") " pod="openstack/glance-default-external-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.858170 4970 scope.go:117] "RemoveContainer" containerID="ebb6dc5a44a6da8c2c6992c50847e11fd7ca2db337a055a9e9df71b4a634c80b" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.928955 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 12:28:48 crc kubenswrapper[4970]: I1209 12:28:48.954125 4970 scope.go:117] "RemoveContainer" containerID="f5bd4178ec922f74cdb1fb051026c51137fcfc1c73c22f3a1b70e9dd5718b6f8" Dec 09 12:28:49 crc kubenswrapper[4970]: I1209 12:28:49.033589 4970 scope.go:117] "RemoveContainer" containerID="18e8126cec41af8171cf75aecdf41db0f438ad6dcc4c315a2ebe64e03d5bc1bc" Dec 09 12:28:49 crc kubenswrapper[4970]: I1209 12:28:49.042765 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 12:28:49 crc kubenswrapper[4970]: I1209 12:28:49.127409 4970 scope.go:117] "RemoveContainer" containerID="9759fa5203ef63cf898ad0a1d8defd98b53d0467c3a71c53cfaf3f99e78252ab" Dec 09 12:28:49 crc kubenswrapper[4970]: I1209 12:28:49.215396 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eff0282f-3775-4904-8937-c9e16749e3e8","Type":"ContainerStarted","Data":"33712a05c52efb3923a7013db45c86dc0ff29b8573aec03ad49da2f4d190b612"} Dec 09 12:28:49 crc kubenswrapper[4970]: I1209 12:28:49.287356 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerStarted","Data":"7fd6f109e392e855fb1d78959e8fc5739de01f58f1c3cbb8ffead2654c7f16b5"} Dec 09 12:28:49 crc kubenswrapper[4970]: I1209 12:28:49.309057 4970 generic.go:334] "Generic (PLEG): container finished" podID="e7bb9b20-9449-48b4-b3ba-dc547cf39558" containerID="96968271128e8e14081dc2f1b7d817a9ec99a922f6a0acd7da5de5db02cba135" exitCode=0 Dec 09 12:28:49 crc kubenswrapper[4970]: I1209 12:28:49.309199 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-hh2d8" event={"ID":"e7bb9b20-9449-48b4-b3ba-dc547cf39558","Type":"ContainerDied","Data":"96968271128e8e14081dc2f1b7d817a9ec99a922f6a0acd7da5de5db02cba135"} Dec 09 12:28:49 crc kubenswrapper[4970]: I1209 12:28:49.318125 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"35f7dc61-35aa-42a4-adc2-e96f07905cd0","Type":"ContainerStarted","Data":"9bbe05a3e2c97f157eb0f70367e20eed30cd807d622ddddbf99bf93c99503e72"} Dec 09 12:28:49 crc kubenswrapper[4970]: I1209 12:28:49.373481 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d7d95f968-k2gh2" event={"ID":"7d34e443-c923-42f0-83be-3d060424380b","Type":"ContainerStarted","Data":"aba0aaf5bec05a42ffca748467a7096415b9012cde958ccc33c9498e506b800f"} Dec 09 12:28:49 crc kubenswrapper[4970]: I1209 12:28:49.389793 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-kg6zl" event={"ID":"3015f85e-5d86-4906-9d9a-8389330bcb82","Type":"ContainerStarted","Data":"a795b099d9de0c3443e1177afd2019e48a789a4b2d55d71ad9012c81cef0081c"} Dec 09 12:28:49 crc kubenswrapper[4970]: I1209 12:28:49.407480 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78687fd956-j7ckc" event={"ID":"fa1854f6-8032-4a50-8808-cbd83782deb5","Type":"ContainerStarted","Data":"8e28b786054c3e0199738f97ac1eeeecca251ad03d6dcad046ff6a67d369c4b6"} Dec 09 12:28:49 crc kubenswrapper[4970]: I1209 12:28:49.408575 4970 scope.go:117] "RemoveContainer" containerID="e3639f45ef12a8d4489db0b05b7bd2d0284a87ca57a949e8a2c9e0c91aa1a192" Dec 09 12:28:49 crc kubenswrapper[4970]: I1209 12:28:49.421070 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dd4c8c86f-7zktb" event={"ID":"03bc9e94-3198-4707-b3b4-19ee20b49d4d","Type":"ContainerStarted","Data":"b7888944e69068483384342c6739cbeff7fd41653b305959a476ac01285da664"} Dec 09 12:28:49 crc kubenswrapper[4970]: I1209 12:28:49.439892 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-kg6zl" podStartSLOduration=5.316288443 podStartE2EDuration="47.439865658s" podCreationTimestamp="2025-12-09 12:28:02 +0000 UTC" firstStartedPulling="2025-12-09 12:28:04.312430969 +0000 UTC m=+1296.872912010" lastFinishedPulling="2025-12-09 12:28:46.436008174 +0000 UTC m=+1338.996489225" observedRunningTime="2025-12-09 12:28:49.417825943 +0000 UTC m=+1341.978306994" watchObservedRunningTime="2025-12-09 12:28:49.439865658 +0000 UTC m=+1342.000346709" Dec 09 12:28:49 crc kubenswrapper[4970]: I1209 12:28:49.442469 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-565ff75dc9-922w2" event={"ID":"c2aa972b-8e19-46ea-b7e5-8b302c81dc0a","Type":"ContainerStarted","Data":"33f02e708fad49e21e78d48f9f72a27d5eaf3a0ccc076bdfcfcf224b6bc49983"} Dec 09 12:28:49 crc kubenswrapper[4970]: I1209 12:28:49.443660 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-565ff75dc9-922w2" Dec 09 12:28:49 crc kubenswrapper[4970]: I1209 12:28:49.668644 4970 scope.go:117] "RemoveContainer" containerID="63ed8e2f0edc7129a31e9a9b85b52c1a251093db83b1661c0d0dd7a11fce25f6" Dec 09 12:28:49 crc kubenswrapper[4970]: I1209 12:28:49.717463 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-565ff75dc9-922w2" podStartSLOduration=2.71743827 podStartE2EDuration="2.71743827s" podCreationTimestamp="2025-12-09 12:28:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:28:49.466754481 +0000 UTC m=+1342.027235532" watchObservedRunningTime="2025-12-09 12:28:49.71743827 +0000 UTC m=+1342.277919321" Dec 09 12:28:49 crc kubenswrapper[4970]: I1209 12:28:49.723794 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-default-external-api-0"] Dec 09 12:28:49 crc kubenswrapper[4970]: W1209 12:28:49.745125 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b28cb7f_3918_4f2d_ba79_21503540a126.slice/crio-24f18f67cfa6b0514d264bfd61d930272a48f8a3bf25558250f6ec0ecbfa5843 WatchSource:0}: Error finding container 24f18f67cfa6b0514d264bfd61d930272a48f8a3bf25558250f6ec0ecbfa5843: Status 404 returned error can't find the container with id 24f18f67cfa6b0514d264bfd61d930272a48f8a3bf25558250f6ec0ecbfa5843 Dec 09 12:28:49 crc kubenswrapper[4970]: I1209 12:28:49.830424 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80bba95d-00c0-48b9-aac2-c6fbd912ca8c" path="/var/lib/kubelet/pods/80bba95d-00c0-48b9-aac2-c6fbd912ca8c/volumes" Dec 09 12:28:49 crc kubenswrapper[4970]: I1209 12:28:49.831573 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e179577e-4bb5-4486-9c3f-536899514d12" path="/var/lib/kubelet/pods/e179577e-4bb5-4486-9c3f-536899514d12/volumes" Dec 09 12:28:50 crc kubenswrapper[4970]: I1209 12:28:50.500350 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d7d95f968-k2gh2" event={"ID":"7d34e443-c923-42f0-83be-3d060424380b","Type":"ContainerStarted","Data":"4685b9c897e25961f58f4586a2048dd22f900e813863a1a95b7dffbbc9273650"} Dec 09 12:28:50 crc kubenswrapper[4970]: I1209 12:28:50.501327 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6d7d95f968-k2gh2" Dec 09 12:28:50 crc kubenswrapper[4970]: I1209 12:28:50.510956 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0b28cb7f-3918-4f2d-ba79-21503540a126","Type":"ContainerStarted","Data":"24f18f67cfa6b0514d264bfd61d930272a48f8a3bf25558250f6ec0ecbfa5843"} Dec 09 12:28:50 crc kubenswrapper[4970]: I1209 12:28:50.515856 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dd4c8c86f-7zktb" event={"ID":"03bc9e94-3198-4707-b3b4-19ee20b49d4d","Type":"ContainerStarted","Data":"edc87e9bb22b1040d34b193576ffb30ee66f914596b644c756a1fd951ea4c0ce"} Dec 09 12:28:50 crc kubenswrapper[4970]: I1209 12:28:50.516000 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-dd4c8c86f-7zktb" Dec 09 12:28:50 crc kubenswrapper[4970]: I1209 12:28:50.558630 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6d7d95f968-k2gh2" podStartSLOduration=8.558609011 podStartE2EDuration="8.558609011s" podCreationTimestamp="2025-12-09 12:28:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:28:50.520555272 +0000 UTC m=+1343.081036333" watchObservedRunningTime="2025-12-09 12:28:50.558609011 +0000 UTC m=+1343.119090062" Dec 09 12:28:50 crc kubenswrapper[4970]: I1209 12:28:50.574706 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-dd4c8c86f-7zktb" podStartSLOduration=6.574685788 podStartE2EDuration="6.574685788s" podCreationTimestamp="2025-12-09 12:28:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:28:50.542191136 +0000 UTC m=+1343.102672187" watchObservedRunningTime="2025-12-09 12:28:50.574685788 +0000 UTC m=+1343.135166839" Dec 09 12:28:50 crc kubenswrapper[4970]: I1209 
12:28:50.586727 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78687fd956-j7ckc" event={"ID":"fa1854f6-8032-4a50-8808-cbd83782deb5","Type":"ContainerStarted","Data":"03501ed88954d1dbe73855d73b16694db03343561749a0687af69c9bba624586"} Dec 09 12:28:50 crc kubenswrapper[4970]: I1209 12:28:50.587222 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-78687fd956-j7ckc" Dec 09 12:28:50 crc kubenswrapper[4970]: I1209 12:28:50.587554 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-78687fd956-j7ckc" Dec 09 12:28:50 crc kubenswrapper[4970]: I1209 12:28:50.596098 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-565ff75dc9-922w2" event={"ID":"c2aa972b-8e19-46ea-b7e5-8b302c81dc0a","Type":"ContainerStarted","Data":"36d9b445b428a15b2a81ef66620b605e26bb31579f45ca3d33f5dcf36de453a7"} Dec 09 12:28:50 crc kubenswrapper[4970]: I1209 12:28:50.624454 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-hh2d8" event={"ID":"e7bb9b20-9449-48b4-b3ba-dc547cf39558","Type":"ContainerStarted","Data":"00ec22a037c7090aacc78ca1476cb1a5e32349adce131b74a0d6351f8898568a"} Dec 09 12:28:50 crc kubenswrapper[4970]: I1209 12:28:50.628794 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-hh2d8" Dec 09 12:28:50 crc kubenswrapper[4970]: I1209 12:28:50.637125 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-78687fd956-j7ckc" podStartSLOduration=8.637068042 podStartE2EDuration="8.637068042s" podCreationTimestamp="2025-12-09 12:28:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:28:50.627229261 +0000 UTC m=+1343.187710322" watchObservedRunningTime="2025-12-09 12:28:50.637068042 +0000 UTC m=+1343.197549113" Dec 09 12:28:50 crc kubenswrapper[4970]: I1209 12:28:50.715700 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-hh2d8" podStartSLOduration=8.715678636 podStartE2EDuration="8.715678636s" podCreationTimestamp="2025-12-09 12:28:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:28:50.64491991 +0000 UTC m=+1343.205400961" watchObservedRunningTime="2025-12-09 12:28:50.715678636 +0000 UTC m=+1343.276159687" Dec 09 12:28:51 crc kubenswrapper[4970]: I1209 12:28:51.272171 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-gzhss" podUID="0b1f388d-9583-482f-9089-714ba98cafff" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.182:5353: i/o timeout" Dec 09 12:28:51 crc kubenswrapper[4970]: I1209 12:28:51.641677 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"35f7dc61-35aa-42a4-adc2-e96f07905cd0","Type":"ContainerStarted","Data":"371a1155897e81e46a278b04b0948efbb20ddde1b03332c7f16b608269f9b260"} Dec 09 12:28:51 crc kubenswrapper[4970]: I1209 12:28:51.642027 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"35f7dc61-35aa-42a4-adc2-e96f07905cd0","Type":"ContainerStarted","Data":"654f91d33fe33b1111abc302eac5953705bc2bf0cfdd87a9f0d7cf6a468c17e6"} Dec 09 12:28:51 crc kubenswrapper[4970]: I1209 
12:28:51.647482 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rxs6p" event={"ID":"973c0567-09c0-4313-8c9f-ee74a3188226","Type":"ContainerStarted","Data":"b71736d4c254616e4f33e65c60cb89829cf4499185a1a0679c62892db57458fe"} Dec 09 12:28:51 crc kubenswrapper[4970]: I1209 12:28:51.650303 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-849mw" event={"ID":"43ba192b-df87-4028-a33e-4ff96d287644","Type":"ContainerStarted","Data":"d8beb4d4908bb9c21bcaddb9d73797d27da82ff3c5b23cf352fd6c416056603c"} Dec 09 12:28:51 crc kubenswrapper[4970]: I1209 12:28:51.657710 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0b28cb7f-3918-4f2d-ba79-21503540a126","Type":"ContainerStarted","Data":"4219a62cfad34d1ae3a86959f52cac5bb37112c707c4a6863f78678a997f452f"} Dec 09 12:28:51 crc kubenswrapper[4970]: I1209 12:28:51.703140 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.703091937 podStartE2EDuration="4.703091937s" podCreationTimestamp="2025-12-09 12:28:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:28:51.672171187 +0000 UTC m=+1344.232652248" watchObservedRunningTime="2025-12-09 12:28:51.703091937 +0000 UTC m=+1344.263572988" Dec 09 12:28:51 crc kubenswrapper[4970]: I1209 12:28:51.704620 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-rxs6p" podStartSLOduration=3.187211527 podStartE2EDuration="48.704607857s" podCreationTimestamp="2025-12-09 12:28:03 +0000 UTC" firstStartedPulling="2025-12-09 12:28:05.045272079 +0000 UTC m=+1297.605753130" lastFinishedPulling="2025-12-09 12:28:50.562668409 +0000 UTC m=+1343.123149460" observedRunningTime="2025-12-09 12:28:51.687268387 +0000 UTC m=+1344.247749438" watchObservedRunningTime="2025-12-09 12:28:51.704607857 +0000 UTC m=+1344.265088918" Dec 09 12:28:51 crc kubenswrapper[4970]: I1209 12:28:51.717332 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-849mw" podStartSLOduration=3.818379508 podStartE2EDuration="48.717313034s" podCreationTimestamp="2025-12-09 12:28:03 +0000 UTC" firstStartedPulling="2025-12-09 12:28:04.77235155 +0000 UTC m=+1297.332832601" lastFinishedPulling="2025-12-09 12:28:49.671285076 +0000 UTC m=+1342.231766127" observedRunningTime="2025-12-09 12:28:51.706120787 +0000 UTC m=+1344.266601838" watchObservedRunningTime="2025-12-09 12:28:51.717313034 +0000 UTC m=+1344.277794085" Dec 09 12:28:53 crc kubenswrapper[4970]: I1209 12:28:53.683517 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0b28cb7f-3918-4f2d-ba79-21503540a126","Type":"ContainerStarted","Data":"7f76c34720835c9cf6966f4890b4c1b12dbdf9fa01d95785d94d81788022bbbc"} Dec 09 12:28:53 crc kubenswrapper[4970]: I1209 12:28:53.723511 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.7234885559999995 podStartE2EDuration="5.723488556s" podCreationTimestamp="2025-12-09 12:28:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:28:53.701072881 +0000 UTC m=+1346.261553932" watchObservedRunningTime="2025-12-09 
12:28:53.723488556 +0000 UTC m=+1346.283969607" Dec 09 12:28:57 crc kubenswrapper[4970]: I1209 12:28:57.918383 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-hh2d8" Dec 09 12:28:57 crc kubenswrapper[4970]: I1209 12:28:57.972202 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-g9rp9"] Dec 09 12:28:57 crc kubenswrapper[4970]: I1209 12:28:57.983493 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-g9rp9" podUID="b37b8e53-92e0-47e4-a1c4-88e38ee775ff" containerName="dnsmasq-dns" containerID="cri-o://237bed5f066c3969e56e804cbd43a5f00e6246c7dd941447688791485c5a2af8" gracePeriod=10 Dec 09 12:28:58 crc kubenswrapper[4970]: I1209 12:28:58.359817 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 09 12:28:58 crc kubenswrapper[4970]: I1209 12:28:58.359865 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 09 12:28:58 crc kubenswrapper[4970]: I1209 12:28:58.402571 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 09 12:28:58 crc kubenswrapper[4970]: I1209 12:28:58.403037 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 09 12:28:58 crc kubenswrapper[4970]: I1209 12:28:58.434468 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-g9rp9" podUID="b37b8e53-92e0-47e4-a1c4-88e38ee775ff" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.148:5353: connect: connection refused" Dec 09 12:28:58 crc kubenswrapper[4970]: I1209 12:28:58.743597 4970 generic.go:334] "Generic (PLEG): container finished" podID="b37b8e53-92e0-47e4-a1c4-88e38ee775ff" containerID="237bed5f066c3969e56e804cbd43a5f00e6246c7dd941447688791485c5a2af8" exitCode=0 Dec 09 12:28:58 crc kubenswrapper[4970]: I1209 12:28:58.743748 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-g9rp9" event={"ID":"b37b8e53-92e0-47e4-a1c4-88e38ee775ff","Type":"ContainerDied","Data":"237bed5f066c3969e56e804cbd43a5f00e6246c7dd941447688791485c5a2af8"} Dec 09 12:28:58 crc kubenswrapper[4970]: I1209 12:28:58.743953 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 09 12:28:58 crc kubenswrapper[4970]: I1209 12:28:58.743995 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 09 12:28:58 crc kubenswrapper[4970]: I1209 12:28:58.930805 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 09 12:28:58 crc kubenswrapper[4970]: I1209 12:28:58.930869 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 09 12:28:58 crc kubenswrapper[4970]: I1209 12:28:58.978853 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 09 12:28:58 crc kubenswrapper[4970]: I1209 12:28:58.982683 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 09 12:28:59 crc kubenswrapper[4970]: I1209 12:28:59.753975 4970 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 09 12:28:59 crc kubenswrapper[4970]: I1209 12:28:59.754446 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 09 12:29:00 crc kubenswrapper[4970]: I1209 12:29:00.656706 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-g9rp9" Dec 09 12:29:00 crc kubenswrapper[4970]: I1209 12:29:00.714825 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b37b8e53-92e0-47e4-a1c4-88e38ee775ff-config\") pod \"b37b8e53-92e0-47e4-a1c4-88e38ee775ff\" (UID: \"b37b8e53-92e0-47e4-a1c4-88e38ee775ff\") " Dec 09 12:29:00 crc kubenswrapper[4970]: I1209 12:29:00.714891 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b37b8e53-92e0-47e4-a1c4-88e38ee775ff-ovsdbserver-sb\") pod \"b37b8e53-92e0-47e4-a1c4-88e38ee775ff\" (UID: \"b37b8e53-92e0-47e4-a1c4-88e38ee775ff\") " Dec 09 12:29:00 crc kubenswrapper[4970]: I1209 12:29:00.716081 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b37b8e53-92e0-47e4-a1c4-88e38ee775ff-ovsdbserver-nb\") pod \"b37b8e53-92e0-47e4-a1c4-88e38ee775ff\" (UID: \"b37b8e53-92e0-47e4-a1c4-88e38ee775ff\") " Dec 09 12:29:00 crc kubenswrapper[4970]: I1209 12:29:00.716160 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b37b8e53-92e0-47e4-a1c4-88e38ee775ff-dns-svc\") pod \"b37b8e53-92e0-47e4-a1c4-88e38ee775ff\" (UID: \"b37b8e53-92e0-47e4-a1c4-88e38ee775ff\") " Dec 09 12:29:00 crc kubenswrapper[4970]: I1209 12:29:00.716768 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7hxc\" (UniqueName: \"kubernetes.io/projected/b37b8e53-92e0-47e4-a1c4-88e38ee775ff-kube-api-access-z7hxc\") pod \"b37b8e53-92e0-47e4-a1c4-88e38ee775ff\" (UID: \"b37b8e53-92e0-47e4-a1c4-88e38ee775ff\") " Dec 09 12:29:00 crc kubenswrapper[4970]: I1209 12:29:00.725291 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b37b8e53-92e0-47e4-a1c4-88e38ee775ff-kube-api-access-z7hxc" (OuterVolumeSpecName: "kube-api-access-z7hxc") pod "b37b8e53-92e0-47e4-a1c4-88e38ee775ff" (UID: "b37b8e53-92e0-47e4-a1c4-88e38ee775ff"). InnerVolumeSpecName "kube-api-access-z7hxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:29:00 crc kubenswrapper[4970]: I1209 12:29:00.774411 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-g9rp9" event={"ID":"b37b8e53-92e0-47e4-a1c4-88e38ee775ff","Type":"ContainerDied","Data":"469228b93b3b2921c334a5ccc80e03d92ba3131a1eeb172046d87c8850f54dd6"} Dec 09 12:29:00 crc kubenswrapper[4970]: I1209 12:29:00.774450 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-g9rp9" Dec 09 12:29:00 crc kubenswrapper[4970]: I1209 12:29:00.774886 4970 scope.go:117] "RemoveContainer" containerID="237bed5f066c3969e56e804cbd43a5f00e6246c7dd941447688791485c5a2af8" Dec 09 12:29:00 crc kubenswrapper[4970]: I1209 12:29:00.787779 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b37b8e53-92e0-47e4-a1c4-88e38ee775ff-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b37b8e53-92e0-47e4-a1c4-88e38ee775ff" (UID: "b37b8e53-92e0-47e4-a1c4-88e38ee775ff"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:29:00 crc kubenswrapper[4970]: I1209 12:29:00.797566 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b37b8e53-92e0-47e4-a1c4-88e38ee775ff-config" (OuterVolumeSpecName: "config") pod "b37b8e53-92e0-47e4-a1c4-88e38ee775ff" (UID: "b37b8e53-92e0-47e4-a1c4-88e38ee775ff"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:29:00 crc kubenswrapper[4970]: I1209 12:29:00.799055 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b37b8e53-92e0-47e4-a1c4-88e38ee775ff-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b37b8e53-92e0-47e4-a1c4-88e38ee775ff" (UID: "b37b8e53-92e0-47e4-a1c4-88e38ee775ff"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:29:00 crc kubenswrapper[4970]: I1209 12:29:00.800147 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b37b8e53-92e0-47e4-a1c4-88e38ee775ff-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b37b8e53-92e0-47e4-a1c4-88e38ee775ff" (UID: "b37b8e53-92e0-47e4-a1c4-88e38ee775ff"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:29:00 crc kubenswrapper[4970]: I1209 12:29:00.820467 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b37b8e53-92e0-47e4-a1c4-88e38ee775ff-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:00 crc kubenswrapper[4970]: I1209 12:29:00.820506 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b37b8e53-92e0-47e4-a1c4-88e38ee775ff-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:00 crc kubenswrapper[4970]: I1209 12:29:00.820519 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b37b8e53-92e0-47e4-a1c4-88e38ee775ff-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:00 crc kubenswrapper[4970]: I1209 12:29:00.820527 4970 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b37b8e53-92e0-47e4-a1c4-88e38ee775ff-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:00 crc kubenswrapper[4970]: I1209 12:29:00.820536 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7hxc\" (UniqueName: \"kubernetes.io/projected/b37b8e53-92e0-47e4-a1c4-88e38ee775ff-kube-api-access-z7hxc\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:00 crc kubenswrapper[4970]: I1209 12:29:00.830570 4970 scope.go:117] "RemoveContainer" containerID="b5ca57ac5adf3e9a0ce62ad2145c7bc788728874bbb8dcab5322caab6839c85c" Dec 09 12:29:01 crc kubenswrapper[4970]: I1209 12:29:01.109168 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-g9rp9"] Dec 09 12:29:01 crc kubenswrapper[4970]: I1209 12:29:01.119995 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-g9rp9"] Dec 09 12:29:01 crc kubenswrapper[4970]: I1209 12:29:01.791346 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eff0282f-3775-4904-8937-c9e16749e3e8","Type":"ContainerStarted","Data":"042ab0c3c4cf998355153a4795e4a8678761a9af8cc371f90e30033e9d07bb30"} Dec 09 12:29:01 crc kubenswrapper[4970]: I1209 12:29:01.791683 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 12:29:01 crc kubenswrapper[4970]: I1209 12:29:01.791493 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eff0282f-3775-4904-8937-c9e16749e3e8" containerName="ceilometer-central-agent" containerID="cri-o://154d1d716ce53b88f710b17a406709b049c15599df7a692dd117a408b3606212" gracePeriod=30 Dec 09 12:29:01 crc kubenswrapper[4970]: I1209 12:29:01.791493 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eff0282f-3775-4904-8937-c9e16749e3e8" containerName="proxy-httpd" containerID="cri-o://042ab0c3c4cf998355153a4795e4a8678761a9af8cc371f90e30033e9d07bb30" gracePeriod=30 Dec 09 12:29:01 crc kubenswrapper[4970]: I1209 12:29:01.791512 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="eff0282f-3775-4904-8937-c9e16749e3e8" containerName="sg-core" containerID="cri-o://33712a05c52efb3923a7013db45c86dc0ff29b8573aec03ad49da2f4d190b612" gracePeriod=30 Dec 09 12:29:01 crc kubenswrapper[4970]: I1209 12:29:01.791537 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="eff0282f-3775-4904-8937-c9e16749e3e8" containerName="ceilometer-notification-agent" containerID="cri-o://648ba075c3f1667b000af1154a5dbb76a01a623bf4fd6dbdbb88289b5395a6b3" gracePeriod=30 Dec 09 12:29:01 crc kubenswrapper[4970]: I1209 12:29:01.797898 4970 generic.go:334] "Generic (PLEG): container finished" podID="973c0567-09c0-4313-8c9f-ee74a3188226" containerID="b71736d4c254616e4f33e65c60cb89829cf4499185a1a0679c62892db57458fe" exitCode=0 Dec 09 12:29:01 crc kubenswrapper[4970]: I1209 12:29:01.797953 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rxs6p" event={"ID":"973c0567-09c0-4313-8c9f-ee74a3188226","Type":"ContainerDied","Data":"b71736d4c254616e4f33e65c60cb89829cf4499185a1a0679c62892db57458fe"} Dec 09 12:29:01 crc kubenswrapper[4970]: I1209 12:29:01.818606 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.902654299 podStartE2EDuration="58.818581457s" podCreationTimestamp="2025-12-09 12:28:03 +0000 UTC" firstStartedPulling="2025-12-09 12:28:05.749471413 +0000 UTC m=+1298.309952464" lastFinishedPulling="2025-12-09 12:29:00.665398571 +0000 UTC m=+1353.225879622" observedRunningTime="2025-12-09 12:29:01.816528463 +0000 UTC m=+1354.377009514" watchObservedRunningTime="2025-12-09 12:29:01.818581457 +0000 UTC m=+1354.379062508" Dec 09 12:29:01 crc kubenswrapper[4970]: I1209 12:29:01.830765 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b37b8e53-92e0-47e4-a1c4-88e38ee775ff" path="/var/lib/kubelet/pods/b37b8e53-92e0-47e4-a1c4-88e38ee775ff/volumes" Dec 09 12:29:02 crc kubenswrapper[4970]: I1209 12:29:02.099033 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 09 12:29:02 crc kubenswrapper[4970]: I1209 12:29:02.099323 4970 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 12:29:02 crc kubenswrapper[4970]: I1209 12:29:02.105240 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 09 12:29:02 crc kubenswrapper[4970]: I1209 12:29:02.105395 4970 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 12:29:02 crc kubenswrapper[4970]: I1209 12:29:02.113411 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 09 12:29:02 crc kubenswrapper[4970]: I1209 12:29:02.120978 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 09 12:29:02 crc kubenswrapper[4970]: I1209 12:29:02.829767 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eff0282f-3775-4904-8937-c9e16749e3e8","Type":"ContainerDied","Data":"042ab0c3c4cf998355153a4795e4a8678761a9af8cc371f90e30033e9d07bb30"} Dec 09 12:29:02 crc kubenswrapper[4970]: I1209 12:29:02.829880 4970 generic.go:334] "Generic (PLEG): container finished" podID="eff0282f-3775-4904-8937-c9e16749e3e8" containerID="042ab0c3c4cf998355153a4795e4a8678761a9af8cc371f90e30033e9d07bb30" exitCode=0 Dec 09 12:29:02 crc kubenswrapper[4970]: I1209 12:29:02.830208 4970 generic.go:334] "Generic (PLEG): container finished" podID="eff0282f-3775-4904-8937-c9e16749e3e8" containerID="33712a05c52efb3923a7013db45c86dc0ff29b8573aec03ad49da2f4d190b612" exitCode=2 Dec 09 12:29:02 crc kubenswrapper[4970]: I1209 12:29:02.830227 4970 generic.go:334] "Generic (PLEG): container 
finished" podID="eff0282f-3775-4904-8937-c9e16749e3e8" containerID="154d1d716ce53b88f710b17a406709b049c15599df7a692dd117a408b3606212" exitCode=0 Dec 09 12:29:02 crc kubenswrapper[4970]: I1209 12:29:02.830339 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eff0282f-3775-4904-8937-c9e16749e3e8","Type":"ContainerDied","Data":"33712a05c52efb3923a7013db45c86dc0ff29b8573aec03ad49da2f4d190b612"} Dec 09 12:29:02 crc kubenswrapper[4970]: I1209 12:29:02.830362 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eff0282f-3775-4904-8937-c9e16749e3e8","Type":"ContainerDied","Data":"154d1d716ce53b88f710b17a406709b049c15599df7a692dd117a408b3606212"} Dec 09 12:29:02 crc kubenswrapper[4970]: I1209 12:29:02.833000 4970 generic.go:334] "Generic (PLEG): container finished" podID="43ba192b-df87-4028-a33e-4ff96d287644" containerID="d8beb4d4908bb9c21bcaddb9d73797d27da82ff3c5b23cf352fd6c416056603c" exitCode=0 Dec 09 12:29:02 crc kubenswrapper[4970]: I1209 12:29:02.833078 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-849mw" event={"ID":"43ba192b-df87-4028-a33e-4ff96d287644","Type":"ContainerDied","Data":"d8beb4d4908bb9c21bcaddb9d73797d27da82ff3c5b23cf352fd6c416056603c"} Dec 09 12:29:02 crc kubenswrapper[4970]: I1209 12:29:02.836816 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-kg6zl" event={"ID":"3015f85e-5d86-4906-9d9a-8389330bcb82","Type":"ContainerDied","Data":"a795b099d9de0c3443e1177afd2019e48a789a4b2d55d71ad9012c81cef0081c"} Dec 09 12:29:02 crc kubenswrapper[4970]: I1209 12:29:02.836785 4970 generic.go:334] "Generic (PLEG): container finished" podID="3015f85e-5d86-4906-9d9a-8389330bcb82" containerID="a795b099d9de0c3443e1177afd2019e48a789a4b2d55d71ad9012c81cef0081c" exitCode=0 Dec 09 12:29:03 crc kubenswrapper[4970]: I1209 12:29:03.380014 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-rxs6p" Dec 09 12:29:03 crc kubenswrapper[4970]: I1209 12:29:03.492965 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsh4m\" (UniqueName: \"kubernetes.io/projected/973c0567-09c0-4313-8c9f-ee74a3188226-kube-api-access-bsh4m\") pod \"973c0567-09c0-4313-8c9f-ee74a3188226\" (UID: \"973c0567-09c0-4313-8c9f-ee74a3188226\") " Dec 09 12:29:03 crc kubenswrapper[4970]: I1209 12:29:03.496677 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/973c0567-09c0-4313-8c9f-ee74a3188226-db-sync-config-data\") pod \"973c0567-09c0-4313-8c9f-ee74a3188226\" (UID: \"973c0567-09c0-4313-8c9f-ee74a3188226\") " Dec 09 12:29:03 crc kubenswrapper[4970]: I1209 12:29:03.496900 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/973c0567-09c0-4313-8c9f-ee74a3188226-combined-ca-bundle\") pod \"973c0567-09c0-4313-8c9f-ee74a3188226\" (UID: \"973c0567-09c0-4313-8c9f-ee74a3188226\") " Dec 09 12:29:03 crc kubenswrapper[4970]: I1209 12:29:03.514479 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/973c0567-09c0-4313-8c9f-ee74a3188226-kube-api-access-bsh4m" (OuterVolumeSpecName: "kube-api-access-bsh4m") pod "973c0567-09c0-4313-8c9f-ee74a3188226" (UID: "973c0567-09c0-4313-8c9f-ee74a3188226"). InnerVolumeSpecName "kube-api-access-bsh4m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:29:03 crc kubenswrapper[4970]: I1209 12:29:03.515023 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/973c0567-09c0-4313-8c9f-ee74a3188226-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "973c0567-09c0-4313-8c9f-ee74a3188226" (UID: "973c0567-09c0-4313-8c9f-ee74a3188226"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:03 crc kubenswrapper[4970]: I1209 12:29:03.564111 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/973c0567-09c0-4313-8c9f-ee74a3188226-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "973c0567-09c0-4313-8c9f-ee74a3188226" (UID: "973c0567-09c0-4313-8c9f-ee74a3188226"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:03 crc kubenswrapper[4970]: I1209 12:29:03.601456 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/973c0567-09c0-4313-8c9f-ee74a3188226-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:03 crc kubenswrapper[4970]: I1209 12:29:03.601491 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsh4m\" (UniqueName: \"kubernetes.io/projected/973c0567-09c0-4313-8c9f-ee74a3188226-kube-api-access-bsh4m\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:03 crc kubenswrapper[4970]: I1209 12:29:03.601502 4970 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/973c0567-09c0-4313-8c9f-ee74a3188226-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:03 crc kubenswrapper[4970]: I1209 12:29:03.853774 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-rxs6p" Dec 09 12:29:03 crc kubenswrapper[4970]: I1209 12:29:03.859653 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-rxs6p" event={"ID":"973c0567-09c0-4313-8c9f-ee74a3188226","Type":"ContainerDied","Data":"76cfc6b6de451104fa4881543e05f4e18f2a7d116bb09a4f2f53dfa30a4ca80d"} Dec 09 12:29:03 crc kubenswrapper[4970]: I1209 12:29:03.859708 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76cfc6b6de451104fa4881543e05f4e18f2a7d116bb09a4f2f53dfa30a4ca80d" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.055486 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7b7b55ddff-q4sdr"] Dec 09 12:29:04 crc kubenswrapper[4970]: E1209 12:29:04.056081 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b37b8e53-92e0-47e4-a1c4-88e38ee775ff" containerName="init" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.056098 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="b37b8e53-92e0-47e4-a1c4-88e38ee775ff" containerName="init" Dec 09 12:29:04 crc kubenswrapper[4970]: E1209 12:29:04.056153 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="973c0567-09c0-4313-8c9f-ee74a3188226" containerName="barbican-db-sync" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.056165 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="973c0567-09c0-4313-8c9f-ee74a3188226" containerName="barbican-db-sync" Dec 09 12:29:04 crc kubenswrapper[4970]: E1209 12:29:04.056188 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b37b8e53-92e0-47e4-a1c4-88e38ee775ff" containerName="dnsmasq-dns" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.056197 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="b37b8e53-92e0-47e4-a1c4-88e38ee775ff" containerName="dnsmasq-dns" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.056496 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="b37b8e53-92e0-47e4-a1c4-88e38ee775ff" containerName="dnsmasq-dns" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.056539 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="973c0567-09c0-4313-8c9f-ee74a3188226" containerName="barbican-db-sync" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.064390 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7b7b55ddff-q4sdr" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.075229 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-ns52p" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.075484 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.075716 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.091807 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7b7b55ddff-q4sdr"] Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.107700 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-b7745d654-pr4dm"] Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.110617 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-b7745d654-pr4dm" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.124499 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21d879fa-8dd8-4177-88c4-63e6a5689826-combined-ca-bundle\") pod \"barbican-worker-7b7b55ddff-q4sdr\" (UID: \"21d879fa-8dd8-4177-88c4-63e6a5689826\") " pod="openstack/barbican-worker-7b7b55ddff-q4sdr" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.124833 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21d879fa-8dd8-4177-88c4-63e6a5689826-config-data\") pod \"barbican-worker-7b7b55ddff-q4sdr\" (UID: \"21d879fa-8dd8-4177-88c4-63e6a5689826\") " pod="openstack/barbican-worker-7b7b55ddff-q4sdr" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.125018 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21d879fa-8dd8-4177-88c4-63e6a5689826-logs\") pod \"barbican-worker-7b7b55ddff-q4sdr\" (UID: \"21d879fa-8dd8-4177-88c4-63e6a5689826\") " pod="openstack/barbican-worker-7b7b55ddff-q4sdr" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.125064 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfjvq\" (UniqueName: \"kubernetes.io/projected/21d879fa-8dd8-4177-88c4-63e6a5689826-kube-api-access-hfjvq\") pod \"barbican-worker-7b7b55ddff-q4sdr\" (UID: \"21d879fa-8dd8-4177-88c4-63e6a5689826\") " pod="openstack/barbican-worker-7b7b55ddff-q4sdr" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.125136 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/21d879fa-8dd8-4177-88c4-63e6a5689826-config-data-custom\") pod \"barbican-worker-7b7b55ddff-q4sdr\" (UID: \"21d879fa-8dd8-4177-88c4-63e6a5689826\") " pod="openstack/barbican-worker-7b7b55ddff-q4sdr" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.132489 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.137902 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-b7745d654-pr4dm"] Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.198040 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-9nhtn"] Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.200162 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-9nhtn" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.238361 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21d879fa-8dd8-4177-88c4-63e6a5689826-logs\") pod \"barbican-worker-7b7b55ddff-q4sdr\" (UID: \"21d879fa-8dd8-4177-88c4-63e6a5689826\") " pod="openstack/barbican-worker-7b7b55ddff-q4sdr" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.238414 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ccb0f39-43d9-4de1-b0ee-cc108e90ce1a-config-data-custom\") pod \"barbican-keystone-listener-b7745d654-pr4dm\" (UID: \"4ccb0f39-43d9-4de1-b0ee-cc108e90ce1a\") " pod="openstack/barbican-keystone-listener-b7745d654-pr4dm" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.238447 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfjvq\" (UniqueName: \"kubernetes.io/projected/21d879fa-8dd8-4177-88c4-63e6a5689826-kube-api-access-hfjvq\") pod \"barbican-worker-7b7b55ddff-q4sdr\" (UID: \"21d879fa-8dd8-4177-88c4-63e6a5689826\") " pod="openstack/barbican-worker-7b7b55ddff-q4sdr" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.238493 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/21d879fa-8dd8-4177-88c4-63e6a5689826-config-data-custom\") pod \"barbican-worker-7b7b55ddff-q4sdr\" (UID: \"21d879fa-8dd8-4177-88c4-63e6a5689826\") " pod="openstack/barbican-worker-7b7b55ddff-q4sdr" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.238560 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21d879fa-8dd8-4177-88c4-63e6a5689826-combined-ca-bundle\") pod \"barbican-worker-7b7b55ddff-q4sdr\" (UID: \"21d879fa-8dd8-4177-88c4-63e6a5689826\") " pod="openstack/barbican-worker-7b7b55ddff-q4sdr" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.238595 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ccb0f39-43d9-4de1-b0ee-cc108e90ce1a-combined-ca-bundle\") pod \"barbican-keystone-listener-b7745d654-pr4dm\" (UID: \"4ccb0f39-43d9-4de1-b0ee-cc108e90ce1a\") " pod="openstack/barbican-keystone-listener-b7745d654-pr4dm" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.238649 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqxdt\" (UniqueName: \"kubernetes.io/projected/4ccb0f39-43d9-4de1-b0ee-cc108e90ce1a-kube-api-access-zqxdt\") pod \"barbican-keystone-listener-b7745d654-pr4dm\" (UID: \"4ccb0f39-43d9-4de1-b0ee-cc108e90ce1a\") " pod="openstack/barbican-keystone-listener-b7745d654-pr4dm" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.238721 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ccb0f39-43d9-4de1-b0ee-cc108e90ce1a-logs\") pod \"barbican-keystone-listener-b7745d654-pr4dm\" (UID: \"4ccb0f39-43d9-4de1-b0ee-cc108e90ce1a\") " pod="openstack/barbican-keystone-listener-b7745d654-pr4dm" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.238769 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/21d879fa-8dd8-4177-88c4-63e6a5689826-config-data\") pod \"barbican-worker-7b7b55ddff-q4sdr\" (UID: \"21d879fa-8dd8-4177-88c4-63e6a5689826\") " pod="openstack/barbican-worker-7b7b55ddff-q4sdr" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.238793 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ccb0f39-43d9-4de1-b0ee-cc108e90ce1a-config-data\") pod \"barbican-keystone-listener-b7745d654-pr4dm\" (UID: \"4ccb0f39-43d9-4de1-b0ee-cc108e90ce1a\") " pod="openstack/barbican-keystone-listener-b7745d654-pr4dm" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.239393 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21d879fa-8dd8-4177-88c4-63e6a5689826-logs\") pod \"barbican-worker-7b7b55ddff-q4sdr\" (UID: \"21d879fa-8dd8-4177-88c4-63e6a5689826\") " pod="openstack/barbican-worker-7b7b55ddff-q4sdr" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.257986 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21d879fa-8dd8-4177-88c4-63e6a5689826-config-data\") pod \"barbican-worker-7b7b55ddff-q4sdr\" (UID: \"21d879fa-8dd8-4177-88c4-63e6a5689826\") " pod="openstack/barbican-worker-7b7b55ddff-q4sdr" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.267758 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-9nhtn"] Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.271323 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/21d879fa-8dd8-4177-88c4-63e6a5689826-config-data-custom\") pod \"barbican-worker-7b7b55ddff-q4sdr\" (UID: \"21d879fa-8dd8-4177-88c4-63e6a5689826\") " pod="openstack/barbican-worker-7b7b55ddff-q4sdr" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.295600 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21d879fa-8dd8-4177-88c4-63e6a5689826-combined-ca-bundle\") pod \"barbican-worker-7b7b55ddff-q4sdr\" (UID: \"21d879fa-8dd8-4177-88c4-63e6a5689826\") " pod="openstack/barbican-worker-7b7b55ddff-q4sdr" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.305651 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfjvq\" (UniqueName: \"kubernetes.io/projected/21d879fa-8dd8-4177-88c4-63e6a5689826-kube-api-access-hfjvq\") pod \"barbican-worker-7b7b55ddff-q4sdr\" (UID: \"21d879fa-8dd8-4177-88c4-63e6a5689826\") " pod="openstack/barbican-worker-7b7b55ddff-q4sdr" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.341663 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqxdt\" (UniqueName: \"kubernetes.io/projected/4ccb0f39-43d9-4de1-b0ee-cc108e90ce1a-kube-api-access-zqxdt\") pod \"barbican-keystone-listener-b7745d654-pr4dm\" (UID: \"4ccb0f39-43d9-4de1-b0ee-cc108e90ce1a\") " pod="openstack/barbican-keystone-listener-b7745d654-pr4dm" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.341734 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5205a8c4-db65-475f-bc74-a8bece0fe6cc-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-9nhtn\" (UID: 
\"5205a8c4-db65-475f-bc74-a8bece0fe6cc\") " pod="openstack/dnsmasq-dns-85ff748b95-9nhtn" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.341771 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ccb0f39-43d9-4de1-b0ee-cc108e90ce1a-logs\") pod \"barbican-keystone-listener-b7745d654-pr4dm\" (UID: \"4ccb0f39-43d9-4de1-b0ee-cc108e90ce1a\") " pod="openstack/barbican-keystone-listener-b7745d654-pr4dm" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.341827 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ccb0f39-43d9-4de1-b0ee-cc108e90ce1a-config-data\") pod \"barbican-keystone-listener-b7745d654-pr4dm\" (UID: \"4ccb0f39-43d9-4de1-b0ee-cc108e90ce1a\") " pod="openstack/barbican-keystone-listener-b7745d654-pr4dm" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.341847 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9cxj\" (UniqueName: \"kubernetes.io/projected/5205a8c4-db65-475f-bc74-a8bece0fe6cc-kube-api-access-s9cxj\") pod \"dnsmasq-dns-85ff748b95-9nhtn\" (UID: \"5205a8c4-db65-475f-bc74-a8bece0fe6cc\") " pod="openstack/dnsmasq-dns-85ff748b95-9nhtn" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.341874 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5205a8c4-db65-475f-bc74-a8bece0fe6cc-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-9nhtn\" (UID: \"5205a8c4-db65-475f-bc74-a8bece0fe6cc\") " pod="openstack/dnsmasq-dns-85ff748b95-9nhtn" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.341944 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ccb0f39-43d9-4de1-b0ee-cc108e90ce1a-config-data-custom\") pod \"barbican-keystone-listener-b7745d654-pr4dm\" (UID: \"4ccb0f39-43d9-4de1-b0ee-cc108e90ce1a\") " pod="openstack/barbican-keystone-listener-b7745d654-pr4dm" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.341975 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5205a8c4-db65-475f-bc74-a8bece0fe6cc-dns-svc\") pod \"dnsmasq-dns-85ff748b95-9nhtn\" (UID: \"5205a8c4-db65-475f-bc74-a8bece0fe6cc\") " pod="openstack/dnsmasq-dns-85ff748b95-9nhtn" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.342002 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5205a8c4-db65-475f-bc74-a8bece0fe6cc-config\") pod \"dnsmasq-dns-85ff748b95-9nhtn\" (UID: \"5205a8c4-db65-475f-bc74-a8bece0fe6cc\") " pod="openstack/dnsmasq-dns-85ff748b95-9nhtn" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.342024 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5205a8c4-db65-475f-bc74-a8bece0fe6cc-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-9nhtn\" (UID: \"5205a8c4-db65-475f-bc74-a8bece0fe6cc\") " pod="openstack/dnsmasq-dns-85ff748b95-9nhtn" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.342078 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4ccb0f39-43d9-4de1-b0ee-cc108e90ce1a-combined-ca-bundle\") pod \"barbican-keystone-listener-b7745d654-pr4dm\" (UID: \"4ccb0f39-43d9-4de1-b0ee-cc108e90ce1a\") " pod="openstack/barbican-keystone-listener-b7745d654-pr4dm" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.343543 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ccb0f39-43d9-4de1-b0ee-cc108e90ce1a-logs\") pod \"barbican-keystone-listener-b7745d654-pr4dm\" (UID: \"4ccb0f39-43d9-4de1-b0ee-cc108e90ce1a\") " pod="openstack/barbican-keystone-listener-b7745d654-pr4dm" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.357527 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ccb0f39-43d9-4de1-b0ee-cc108e90ce1a-combined-ca-bundle\") pod \"barbican-keystone-listener-b7745d654-pr4dm\" (UID: \"4ccb0f39-43d9-4de1-b0ee-cc108e90ce1a\") " pod="openstack/barbican-keystone-listener-b7745d654-pr4dm" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.360587 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ccb0f39-43d9-4de1-b0ee-cc108e90ce1a-config-data\") pod \"barbican-keystone-listener-b7745d654-pr4dm\" (UID: \"4ccb0f39-43d9-4de1-b0ee-cc108e90ce1a\") " pod="openstack/barbican-keystone-listener-b7745d654-pr4dm" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.366089 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ccb0f39-43d9-4de1-b0ee-cc108e90ce1a-config-data-custom\") pod \"barbican-keystone-listener-b7745d654-pr4dm\" (UID: \"4ccb0f39-43d9-4de1-b0ee-cc108e90ce1a\") " pod="openstack/barbican-keystone-listener-b7745d654-pr4dm" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.378154 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-659c886f58-rf5rp"] Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.380499 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-659c886f58-rf5rp" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.383631 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.397156 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqxdt\" (UniqueName: \"kubernetes.io/projected/4ccb0f39-43d9-4de1-b0ee-cc108e90ce1a-kube-api-access-zqxdt\") pod \"barbican-keystone-listener-b7745d654-pr4dm\" (UID: \"4ccb0f39-43d9-4de1-b0ee-cc108e90ce1a\") " pod="openstack/barbican-keystone-listener-b7745d654-pr4dm" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.414535 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7b7b55ddff-q4sdr" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.444750 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85130f1-2801-40b1-b1d2-3cfe0b6aee57-combined-ca-bundle\") pod \"barbican-api-659c886f58-rf5rp\" (UID: \"f85130f1-2801-40b1-b1d2-3cfe0b6aee57\") " pod="openstack/barbican-api-659c886f58-rf5rp" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.445082 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5205a8c4-db65-475f-bc74-a8bece0fe6cc-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-9nhtn\" (UID: \"5205a8c4-db65-475f-bc74-a8bece0fe6cc\") " pod="openstack/dnsmasq-dns-85ff748b95-9nhtn" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.445128 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f85130f1-2801-40b1-b1d2-3cfe0b6aee57-logs\") pod \"barbican-api-659c886f58-rf5rp\" (UID: \"f85130f1-2801-40b1-b1d2-3cfe0b6aee57\") " pod="openstack/barbican-api-659c886f58-rf5rp" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.445197 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjgsj\" (UniqueName: \"kubernetes.io/projected/f85130f1-2801-40b1-b1d2-3cfe0b6aee57-kube-api-access-vjgsj\") pod \"barbican-api-659c886f58-rf5rp\" (UID: \"f85130f1-2801-40b1-b1d2-3cfe0b6aee57\") " pod="openstack/barbican-api-659c886f58-rf5rp" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.445292 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9cxj\" (UniqueName: \"kubernetes.io/projected/5205a8c4-db65-475f-bc74-a8bece0fe6cc-kube-api-access-s9cxj\") pod \"dnsmasq-dns-85ff748b95-9nhtn\" (UID: \"5205a8c4-db65-475f-bc74-a8bece0fe6cc\") " pod="openstack/dnsmasq-dns-85ff748b95-9nhtn" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.445338 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5205a8c4-db65-475f-bc74-a8bece0fe6cc-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-9nhtn\" (UID: \"5205a8c4-db65-475f-bc74-a8bece0fe6cc\") " pod="openstack/dnsmasq-dns-85ff748b95-9nhtn" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.445474 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f85130f1-2801-40b1-b1d2-3cfe0b6aee57-config-data-custom\") pod \"barbican-api-659c886f58-rf5rp\" (UID: \"f85130f1-2801-40b1-b1d2-3cfe0b6aee57\") " pod="openstack/barbican-api-659c886f58-rf5rp" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.445661 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5205a8c4-db65-475f-bc74-a8bece0fe6cc-dns-svc\") pod \"dnsmasq-dns-85ff748b95-9nhtn\" (UID: \"5205a8c4-db65-475f-bc74-a8bece0fe6cc\") " pod="openstack/dnsmasq-dns-85ff748b95-9nhtn" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.445715 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5205a8c4-db65-475f-bc74-a8bece0fe6cc-config\") pod 
\"dnsmasq-dns-85ff748b95-9nhtn\" (UID: \"5205a8c4-db65-475f-bc74-a8bece0fe6cc\") " pod="openstack/dnsmasq-dns-85ff748b95-9nhtn" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.446491 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5205a8c4-db65-475f-bc74-a8bece0fe6cc-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-9nhtn\" (UID: \"5205a8c4-db65-475f-bc74-a8bece0fe6cc\") " pod="openstack/dnsmasq-dns-85ff748b95-9nhtn" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.449234 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5205a8c4-db65-475f-bc74-a8bece0fe6cc-config\") pod \"dnsmasq-dns-85ff748b95-9nhtn\" (UID: \"5205a8c4-db65-475f-bc74-a8bece0fe6cc\") " pod="openstack/dnsmasq-dns-85ff748b95-9nhtn" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.449301 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-659c886f58-rf5rp"] Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.449354 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5205a8c4-db65-475f-bc74-a8bece0fe6cc-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-9nhtn\" (UID: \"5205a8c4-db65-475f-bc74-a8bece0fe6cc\") " pod="openstack/dnsmasq-dns-85ff748b95-9nhtn" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.449729 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5205a8c4-db65-475f-bc74-a8bece0fe6cc-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-9nhtn\" (UID: \"5205a8c4-db65-475f-bc74-a8bece0fe6cc\") " pod="openstack/dnsmasq-dns-85ff748b95-9nhtn" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.450017 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5205a8c4-db65-475f-bc74-a8bece0fe6cc-dns-svc\") pod \"dnsmasq-dns-85ff748b95-9nhtn\" (UID: \"5205a8c4-db65-475f-bc74-a8bece0fe6cc\") " pod="openstack/dnsmasq-dns-85ff748b95-9nhtn" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.450183 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5205a8c4-db65-475f-bc74-a8bece0fe6cc-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-9nhtn\" (UID: \"5205a8c4-db65-475f-bc74-a8bece0fe6cc\") " pod="openstack/dnsmasq-dns-85ff748b95-9nhtn" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.452732 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85130f1-2801-40b1-b1d2-3cfe0b6aee57-config-data\") pod \"barbican-api-659c886f58-rf5rp\" (UID: \"f85130f1-2801-40b1-b1d2-3cfe0b6aee57\") " pod="openstack/barbican-api-659c886f58-rf5rp" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.471477 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-b7745d654-pr4dm" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.489472 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9cxj\" (UniqueName: \"kubernetes.io/projected/5205a8c4-db65-475f-bc74-a8bece0fe6cc-kube-api-access-s9cxj\") pod \"dnsmasq-dns-85ff748b95-9nhtn\" (UID: \"5205a8c4-db65-475f-bc74-a8bece0fe6cc\") " pod="openstack/dnsmasq-dns-85ff748b95-9nhtn" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.559446 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85130f1-2801-40b1-b1d2-3cfe0b6aee57-config-data\") pod \"barbican-api-659c886f58-rf5rp\" (UID: \"f85130f1-2801-40b1-b1d2-3cfe0b6aee57\") " pod="openstack/barbican-api-659c886f58-rf5rp" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.559767 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85130f1-2801-40b1-b1d2-3cfe0b6aee57-combined-ca-bundle\") pod \"barbican-api-659c886f58-rf5rp\" (UID: \"f85130f1-2801-40b1-b1d2-3cfe0b6aee57\") " pod="openstack/barbican-api-659c886f58-rf5rp" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.559805 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f85130f1-2801-40b1-b1d2-3cfe0b6aee57-logs\") pod \"barbican-api-659c886f58-rf5rp\" (UID: \"f85130f1-2801-40b1-b1d2-3cfe0b6aee57\") " pod="openstack/barbican-api-659c886f58-rf5rp" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.559841 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjgsj\" (UniqueName: \"kubernetes.io/projected/f85130f1-2801-40b1-b1d2-3cfe0b6aee57-kube-api-access-vjgsj\") pod \"barbican-api-659c886f58-rf5rp\" (UID: \"f85130f1-2801-40b1-b1d2-3cfe0b6aee57\") " pod="openstack/barbican-api-659c886f58-rf5rp" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.559911 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f85130f1-2801-40b1-b1d2-3cfe0b6aee57-config-data-custom\") pod \"barbican-api-659c886f58-rf5rp\" (UID: \"f85130f1-2801-40b1-b1d2-3cfe0b6aee57\") " pod="openstack/barbican-api-659c886f58-rf5rp" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.563877 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f85130f1-2801-40b1-b1d2-3cfe0b6aee57-logs\") pod \"barbican-api-659c886f58-rf5rp\" (UID: \"f85130f1-2801-40b1-b1d2-3cfe0b6aee57\") " pod="openstack/barbican-api-659c886f58-rf5rp" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.565941 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-9nhtn" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.566893 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85130f1-2801-40b1-b1d2-3cfe0b6aee57-combined-ca-bundle\") pod \"barbican-api-659c886f58-rf5rp\" (UID: \"f85130f1-2801-40b1-b1d2-3cfe0b6aee57\") " pod="openstack/barbican-api-659c886f58-rf5rp" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.567402 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f85130f1-2801-40b1-b1d2-3cfe0b6aee57-config-data-custom\") pod \"barbican-api-659c886f58-rf5rp\" (UID: \"f85130f1-2801-40b1-b1d2-3cfe0b6aee57\") " pod="openstack/barbican-api-659c886f58-rf5rp" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.567696 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85130f1-2801-40b1-b1d2-3cfe0b6aee57-config-data\") pod \"barbican-api-659c886f58-rf5rp\" (UID: \"f85130f1-2801-40b1-b1d2-3cfe0b6aee57\") " pod="openstack/barbican-api-659c886f58-rf5rp" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.598896 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjgsj\" (UniqueName: \"kubernetes.io/projected/f85130f1-2801-40b1-b1d2-3cfe0b6aee57-kube-api-access-vjgsj\") pod \"barbican-api-659c886f58-rf5rp\" (UID: \"f85130f1-2801-40b1-b1d2-3cfe0b6aee57\") " pod="openstack/barbican-api-659c886f58-rf5rp" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.602167 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-659c886f58-rf5rp" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.621137 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-kg6zl" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.765039 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xk7r4\" (UniqueName: \"kubernetes.io/projected/3015f85e-5d86-4906-9d9a-8389330bcb82-kube-api-access-xk7r4\") pod \"3015f85e-5d86-4906-9d9a-8389330bcb82\" (UID: \"3015f85e-5d86-4906-9d9a-8389330bcb82\") " Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.765400 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3015f85e-5d86-4906-9d9a-8389330bcb82-config-data\") pod \"3015f85e-5d86-4906-9d9a-8389330bcb82\" (UID: \"3015f85e-5d86-4906-9d9a-8389330bcb82\") " Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.765476 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3015f85e-5d86-4906-9d9a-8389330bcb82-combined-ca-bundle\") pod \"3015f85e-5d86-4906-9d9a-8389330bcb82\" (UID: \"3015f85e-5d86-4906-9d9a-8389330bcb82\") " Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.785736 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3015f85e-5d86-4906-9d9a-8389330bcb82-kube-api-access-xk7r4" (OuterVolumeSpecName: "kube-api-access-xk7r4") pod "3015f85e-5d86-4906-9d9a-8389330bcb82" (UID: "3015f85e-5d86-4906-9d9a-8389330bcb82"). InnerVolumeSpecName "kube-api-access-xk7r4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.873271 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xk7r4\" (UniqueName: \"kubernetes.io/projected/3015f85e-5d86-4906-9d9a-8389330bcb82-kube-api-access-xk7r4\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.875424 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-849mw" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.900385 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3015f85e-5d86-4906-9d9a-8389330bcb82-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3015f85e-5d86-4906-9d9a-8389330bcb82" (UID: "3015f85e-5d86-4906-9d9a-8389330bcb82"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.943101 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-kg6zl" event={"ID":"3015f85e-5d86-4906-9d9a-8389330bcb82","Type":"ContainerDied","Data":"527cf2d18f1dc29c2b421dd8e6bb41a0b4464ee71cd0243fab919e7c39452812"} Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.945329 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="527cf2d18f1dc29c2b421dd8e6bb41a0b4464ee71cd0243fab919e7c39452812" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.945653 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.943560 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-kg6zl" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.947229 4970 generic.go:334] "Generic (PLEG): container finished" podID="eff0282f-3775-4904-8937-c9e16749e3e8" containerID="648ba075c3f1667b000af1154a5dbb76a01a623bf4fd6dbdbb88289b5395a6b3" exitCode=0 Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.947297 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eff0282f-3775-4904-8937-c9e16749e3e8","Type":"ContainerDied","Data":"648ba075c3f1667b000af1154a5dbb76a01a623bf4fd6dbdbb88289b5395a6b3"} Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.947326 4970 scope.go:117] "RemoveContainer" containerID="042ab0c3c4cf998355153a4795e4a8678761a9af8cc371f90e30033e9d07bb30" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.955207 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-849mw" event={"ID":"43ba192b-df87-4028-a33e-4ff96d287644","Type":"ContainerDied","Data":"293d5b11c323d46e73ace4bcb093adbbadd2e987bb6e601b84bdc4cc0abc9cdb"} Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.955239 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="293d5b11c323d46e73ace4bcb093adbbadd2e987bb6e601b84bdc4cc0abc9cdb" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.955308 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-849mw" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.974502 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/43ba192b-df87-4028-a33e-4ff96d287644-etc-machine-id\") pod \"43ba192b-df87-4028-a33e-4ff96d287644\" (UID: \"43ba192b-df87-4028-a33e-4ff96d287644\") " Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.974565 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43ba192b-df87-4028-a33e-4ff96d287644-scripts\") pod \"43ba192b-df87-4028-a33e-4ff96d287644\" (UID: \"43ba192b-df87-4028-a33e-4ff96d287644\") " Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.974748 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/43ba192b-df87-4028-a33e-4ff96d287644-db-sync-config-data\") pod \"43ba192b-df87-4028-a33e-4ff96d287644\" (UID: \"43ba192b-df87-4028-a33e-4ff96d287644\") " Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.974786 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwtdz\" (UniqueName: \"kubernetes.io/projected/43ba192b-df87-4028-a33e-4ff96d287644-kube-api-access-bwtdz\") pod \"43ba192b-df87-4028-a33e-4ff96d287644\" (UID: \"43ba192b-df87-4028-a33e-4ff96d287644\") " Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.974848 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ba192b-df87-4028-a33e-4ff96d287644-combined-ca-bundle\") pod \"43ba192b-df87-4028-a33e-4ff96d287644\" (UID: \"43ba192b-df87-4028-a33e-4ff96d287644\") " Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.974952 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43ba192b-df87-4028-a33e-4ff96d287644-config-data\") pod \"43ba192b-df87-4028-a33e-4ff96d287644\" (UID: \"43ba192b-df87-4028-a33e-4ff96d287644\") " Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.979357 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43ba192b-df87-4028-a33e-4ff96d287644-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "43ba192b-df87-4028-a33e-4ff96d287644" (UID: "43ba192b-df87-4028-a33e-4ff96d287644"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:29:04 crc kubenswrapper[4970]: I1209 12:29:04.994551 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43ba192b-df87-4028-a33e-4ff96d287644-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "43ba192b-df87-4028-a33e-4ff96d287644" (UID: "43ba192b-df87-4028-a33e-4ff96d287644"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:05 crc kubenswrapper[4970]: I1209 12:29:05.001706 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3015f85e-5d86-4906-9d9a-8389330bcb82-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:05 crc kubenswrapper[4970]: I1209 12:29:05.001738 4970 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/43ba192b-df87-4028-a33e-4ff96d287644-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:05 crc kubenswrapper[4970]: I1209 12:29:05.001750 4970 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/43ba192b-df87-4028-a33e-4ff96d287644-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:05 crc kubenswrapper[4970]: I1209 12:29:05.012469 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43ba192b-df87-4028-a33e-4ff96d287644-scripts" (OuterVolumeSpecName: "scripts") pod "43ba192b-df87-4028-a33e-4ff96d287644" (UID: "43ba192b-df87-4028-a33e-4ff96d287644"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:05 crc kubenswrapper[4970]: I1209 12:29:05.031819 4970 scope.go:117] "RemoveContainer" containerID="33712a05c52efb3923a7013db45c86dc0ff29b8573aec03ad49da2f4d190b612" Dec 09 12:29:05 crc kubenswrapper[4970]: I1209 12:29:05.082507 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43ba192b-df87-4028-a33e-4ff96d287644-kube-api-access-bwtdz" (OuterVolumeSpecName: "kube-api-access-bwtdz") pod "43ba192b-df87-4028-a33e-4ff96d287644" (UID: "43ba192b-df87-4028-a33e-4ff96d287644"). InnerVolumeSpecName "kube-api-access-bwtdz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:29:05 crc kubenswrapper[4970]: I1209 12:29:05.103431 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eff0282f-3775-4904-8937-c9e16749e3e8-run-httpd\") pod \"eff0282f-3775-4904-8937-c9e16749e3e8\" (UID: \"eff0282f-3775-4904-8937-c9e16749e3e8\") " Dec 09 12:29:05 crc kubenswrapper[4970]: I1209 12:29:05.103678 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eff0282f-3775-4904-8937-c9e16749e3e8-config-data\") pod \"eff0282f-3775-4904-8937-c9e16749e3e8\" (UID: \"eff0282f-3775-4904-8937-c9e16749e3e8\") " Dec 09 12:29:05 crc kubenswrapper[4970]: I1209 12:29:05.103803 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eff0282f-3775-4904-8937-c9e16749e3e8-log-httpd\") pod \"eff0282f-3775-4904-8937-c9e16749e3e8\" (UID: \"eff0282f-3775-4904-8937-c9e16749e3e8\") " Dec 09 12:29:05 crc kubenswrapper[4970]: I1209 12:29:05.103964 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff0282f-3775-4904-8937-c9e16749e3e8-combined-ca-bundle\") pod \"eff0282f-3775-4904-8937-c9e16749e3e8\" (UID: \"eff0282f-3775-4904-8937-c9e16749e3e8\") " Dec 09 12:29:05 crc kubenswrapper[4970]: I1209 12:29:05.104084 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eff0282f-3775-4904-8937-c9e16749e3e8-scripts\") pod \"eff0282f-3775-4904-8937-c9e16749e3e8\" (UID: \"eff0282f-3775-4904-8937-c9e16749e3e8\") " Dec 09 12:29:05 crc kubenswrapper[4970]: I1209 12:29:05.104189 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eff0282f-3775-4904-8937-c9e16749e3e8-sg-core-conf-yaml\") pod \"eff0282f-3775-4904-8937-c9e16749e3e8\" (UID: \"eff0282f-3775-4904-8937-c9e16749e3e8\") " Dec 09 12:29:05 crc kubenswrapper[4970]: I1209 12:29:05.104317 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eff0282f-3775-4904-8937-c9e16749e3e8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "eff0282f-3775-4904-8937-c9e16749e3e8" (UID: "eff0282f-3775-4904-8937-c9e16749e3e8"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:29:05 crc kubenswrapper[4970]: I1209 12:29:05.104493 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v997w\" (UniqueName: \"kubernetes.io/projected/eff0282f-3775-4904-8937-c9e16749e3e8-kube-api-access-v997w\") pod \"eff0282f-3775-4904-8937-c9e16749e3e8\" (UID: \"eff0282f-3775-4904-8937-c9e16749e3e8\") " Dec 09 12:29:05 crc kubenswrapper[4970]: I1209 12:29:05.105402 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwtdz\" (UniqueName: \"kubernetes.io/projected/43ba192b-df87-4028-a33e-4ff96d287644-kube-api-access-bwtdz\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:05 crc kubenswrapper[4970]: I1209 12:29:05.105488 4970 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eff0282f-3775-4904-8937-c9e16749e3e8-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:05 crc kubenswrapper[4970]: I1209 12:29:05.105563 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/43ba192b-df87-4028-a33e-4ff96d287644-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:05 crc kubenswrapper[4970]: I1209 12:29:05.111606 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eff0282f-3775-4904-8937-c9e16749e3e8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "eff0282f-3775-4904-8937-c9e16749e3e8" (UID: "eff0282f-3775-4904-8937-c9e16749e3e8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:29:05 crc kubenswrapper[4970]: I1209 12:29:05.114763 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43ba192b-df87-4028-a33e-4ff96d287644-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "43ba192b-df87-4028-a33e-4ff96d287644" (UID: "43ba192b-df87-4028-a33e-4ff96d287644"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:05 crc kubenswrapper[4970]: I1209 12:29:05.139402 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eff0282f-3775-4904-8937-c9e16749e3e8-scripts" (OuterVolumeSpecName: "scripts") pod "eff0282f-3775-4904-8937-c9e16749e3e8" (UID: "eff0282f-3775-4904-8937-c9e16749e3e8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:05 crc kubenswrapper[4970]: I1209 12:29:05.139557 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eff0282f-3775-4904-8937-c9e16749e3e8-kube-api-access-v997w" (OuterVolumeSpecName: "kube-api-access-v997w") pod "eff0282f-3775-4904-8937-c9e16749e3e8" (UID: "eff0282f-3775-4904-8937-c9e16749e3e8"). InnerVolumeSpecName "kube-api-access-v997w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:29:05 crc kubenswrapper[4970]: I1209 12:29:05.170412 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3015f85e-5d86-4906-9d9a-8389330bcb82-config-data" (OuterVolumeSpecName: "config-data") pod "3015f85e-5d86-4906-9d9a-8389330bcb82" (UID: "3015f85e-5d86-4906-9d9a-8389330bcb82"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:05 crc kubenswrapper[4970]: I1209 12:29:05.187477 4970 scope.go:117] "RemoveContainer" containerID="648ba075c3f1667b000af1154a5dbb76a01a623bf4fd6dbdbb88289b5395a6b3" Dec 09 12:29:05 crc kubenswrapper[4970]: I1209 12:29:05.217982 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v997w\" (UniqueName: \"kubernetes.io/projected/eff0282f-3775-4904-8937-c9e16749e3e8-kube-api-access-v997w\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:05 crc kubenswrapper[4970]: I1209 12:29:05.218008 4970 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eff0282f-3775-4904-8937-c9e16749e3e8-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:05 crc kubenswrapper[4970]: I1209 12:29:05.218017 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eff0282f-3775-4904-8937-c9e16749e3e8-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:05 crc kubenswrapper[4970]: I1209 12:29:05.218026 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43ba192b-df87-4028-a33e-4ff96d287644-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:05 crc kubenswrapper[4970]: I1209 12:29:05.218035 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3015f85e-5d86-4906-9d9a-8389330bcb82-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:05 crc kubenswrapper[4970]: I1209 12:29:05.321356 4970 scope.go:117] "RemoveContainer" containerID="154d1d716ce53b88f710b17a406709b049c15599df7a692dd117a408b3606212" Dec 09 12:29:05 crc kubenswrapper[4970]: I1209 12:29:05.329808 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eff0282f-3775-4904-8937-c9e16749e3e8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "eff0282f-3775-4904-8937-c9e16749e3e8" (UID: "eff0282f-3775-4904-8937-c9e16749e3e8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:05 crc kubenswrapper[4970]: I1209 12:29:05.363777 4970 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/eff0282f-3775-4904-8937-c9e16749e3e8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:05 crc kubenswrapper[4970]: I1209 12:29:05.428407 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43ba192b-df87-4028-a33e-4ff96d287644-config-data" (OuterVolumeSpecName: "config-data") pod "43ba192b-df87-4028-a33e-4ff96d287644" (UID: "43ba192b-df87-4028-a33e-4ff96d287644"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:05 crc kubenswrapper[4970]: I1209 12:29:05.466324 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/43ba192b-df87-4028-a33e-4ff96d287644-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:05 crc kubenswrapper[4970]: I1209 12:29:05.487730 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eff0282f-3775-4904-8937-c9e16749e3e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eff0282f-3775-4904-8937-c9e16749e3e8" (UID: "eff0282f-3775-4904-8937-c9e16749e3e8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:05 crc kubenswrapper[4970]: I1209 12:29:05.518379 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eff0282f-3775-4904-8937-c9e16749e3e8-config-data" (OuterVolumeSpecName: "config-data") pod "eff0282f-3775-4904-8937-c9e16749e3e8" (UID: "eff0282f-3775-4904-8937-c9e16749e3e8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:05 crc kubenswrapper[4970]: W1209 12:29:05.555175 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21d879fa_8dd8_4177_88c4_63e6a5689826.slice/crio-1a3dbc8b0c253f0fbfff848854bdd97790899c7c3f0b558a28f67a3770458538 WatchSource:0}: Error finding container 1a3dbc8b0c253f0fbfff848854bdd97790899c7c3f0b558a28f67a3770458538: Status 404 returned error can't find the container with id 1a3dbc8b0c253f0fbfff848854bdd97790899c7c3f0b558a28f67a3770458538 Dec 09 12:29:05 crc kubenswrapper[4970]: I1209 12:29:05.568411 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eff0282f-3775-4904-8937-c9e16749e3e8-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:05 crc kubenswrapper[4970]: I1209 12:29:05.568448 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff0282f-3775-4904-8937-c9e16749e3e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:05 crc kubenswrapper[4970]: I1209 12:29:05.582328 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7b7b55ddff-q4sdr"] Dec 09 12:29:05 crc kubenswrapper[4970]: I1209 12:29:05.978480 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"eff0282f-3775-4904-8937-c9e16749e3e8","Type":"ContainerDied","Data":"9c5eacfba49bb35ae3bc5c7ff52f795d0eeaea30c039f0d1dce2a6974fcbdd2e"} Dec 09 12:29:05 crc kubenswrapper[4970]: I1209 12:29:05.978811 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:29:05 crc kubenswrapper[4970]: I1209 12:29:05.984080 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7b7b55ddff-q4sdr" event={"ID":"21d879fa-8dd8-4177-88c4-63e6a5689826","Type":"ContainerStarted","Data":"1a3dbc8b0c253f0fbfff848854bdd97790899c7c3f0b558a28f67a3770458538"} Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.042504 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-b7745d654-pr4dm"] Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.084945 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-9nhtn"] Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.126925 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.178123 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.195920 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:29:06 crc kubenswrapper[4970]: E1209 12:29:06.196698 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eff0282f-3775-4904-8937-c9e16749e3e8" containerName="ceilometer-notification-agent" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.196719 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="eff0282f-3775-4904-8937-c9e16749e3e8" containerName="ceilometer-notification-agent" Dec 09 12:29:06 crc kubenswrapper[4970]: E1209 12:29:06.196759 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eff0282f-3775-4904-8937-c9e16749e3e8" containerName="ceilometer-central-agent" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.196767 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="eff0282f-3775-4904-8937-c9e16749e3e8" containerName="ceilometer-central-agent" Dec 09 12:29:06 crc kubenswrapper[4970]: E1209 12:29:06.196787 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43ba192b-df87-4028-a33e-4ff96d287644" containerName="cinder-db-sync" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.196794 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="43ba192b-df87-4028-a33e-4ff96d287644" containerName="cinder-db-sync" Dec 09 12:29:06 crc kubenswrapper[4970]: E1209 12:29:06.196812 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eff0282f-3775-4904-8937-c9e16749e3e8" containerName="proxy-httpd" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.196819 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="eff0282f-3775-4904-8937-c9e16749e3e8" containerName="proxy-httpd" Dec 09 12:29:06 crc kubenswrapper[4970]: E1209 12:29:06.196836 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eff0282f-3775-4904-8937-c9e16749e3e8" containerName="sg-core" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.196843 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="eff0282f-3775-4904-8937-c9e16749e3e8" containerName="sg-core" Dec 09 12:29:06 crc kubenswrapper[4970]: E1209 12:29:06.196853 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3015f85e-5d86-4906-9d9a-8389330bcb82" containerName="heat-db-sync" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.196860 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="3015f85e-5d86-4906-9d9a-8389330bcb82" containerName="heat-db-sync" 
Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.197092 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="3015f85e-5d86-4906-9d9a-8389330bcb82" containerName="heat-db-sync" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.197116 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="eff0282f-3775-4904-8937-c9e16749e3e8" containerName="ceilometer-central-agent" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.197139 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="43ba192b-df87-4028-a33e-4ff96d287644" containerName="cinder-db-sync" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.197150 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="eff0282f-3775-4904-8937-c9e16749e3e8" containerName="proxy-httpd" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.197162 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="eff0282f-3775-4904-8937-c9e16749e3e8" containerName="sg-core" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.197175 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="eff0282f-3775-4904-8937-c9e16749e3e8" containerName="ceilometer-notification-agent" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.200013 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.219076 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.219531 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.220146 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.241374 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-659c886f58-rf5rp"] Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.305318 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.322327 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.329207 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-ghltv" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.329583 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.329752 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.330726 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.361314 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.436869 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/266fe773-afd8-4703-9180-f325242bc850-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"266fe773-afd8-4703-9180-f325242bc850\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.437015 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-run-httpd\") pod \"ceilometer-0\" (UID: \"c2a60c14-d830-48c0-ab9e-d66e1ad768b0\") " pod="openstack/ceilometer-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.437052 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zglf\" (UniqueName: \"kubernetes.io/projected/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-kube-api-access-9zglf\") pod \"ceilometer-0\" (UID: \"c2a60c14-d830-48c0-ab9e-d66e1ad768b0\") " pod="openstack/ceilometer-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.437094 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/266fe773-afd8-4703-9180-f325242bc850-config-data\") pod \"cinder-scheduler-0\" (UID: \"266fe773-afd8-4703-9180-f325242bc850\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.437117 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-log-httpd\") pod \"ceilometer-0\" (UID: \"c2a60c14-d830-48c0-ab9e-d66e1ad768b0\") " pod="openstack/ceilometer-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.437137 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/266fe773-afd8-4703-9180-f325242bc850-scripts\") pod \"cinder-scheduler-0\" (UID: \"266fe773-afd8-4703-9180-f325242bc850\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.437175 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4f7h\" (UniqueName: \"kubernetes.io/projected/266fe773-afd8-4703-9180-f325242bc850-kube-api-access-q4f7h\") pod \"cinder-scheduler-0\" (UID: \"266fe773-afd8-4703-9180-f325242bc850\") " pod="openstack/cinder-scheduler-0" Dec 
09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.437201 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/266fe773-afd8-4703-9180-f325242bc850-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"266fe773-afd8-4703-9180-f325242bc850\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.437238 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-scripts\") pod \"ceilometer-0\" (UID: \"c2a60c14-d830-48c0-ab9e-d66e1ad768b0\") " pod="openstack/ceilometer-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.437381 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-config-data\") pod \"ceilometer-0\" (UID: \"c2a60c14-d830-48c0-ab9e-d66e1ad768b0\") " pod="openstack/ceilometer-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.437417 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c2a60c14-d830-48c0-ab9e-d66e1ad768b0\") " pod="openstack/ceilometer-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.437465 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/266fe773-afd8-4703-9180-f325242bc850-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"266fe773-afd8-4703-9180-f325242bc850\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.437496 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c2a60c14-d830-48c0-ab9e-d66e1ad768b0\") " pod="openstack/ceilometer-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.450300 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-9nhtn"] Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.488353 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-7hh4d"] Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.490857 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-7hh4d" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.498058 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-7hh4d"] Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.524362 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.534873 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.536594 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.542014 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-run-httpd\") pod \"ceilometer-0\" (UID: \"c2a60c14-d830-48c0-ab9e-d66e1ad768b0\") " pod="openstack/ceilometer-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.542056 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zglf\" (UniqueName: \"kubernetes.io/projected/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-kube-api-access-9zglf\") pod \"ceilometer-0\" (UID: \"c2a60c14-d830-48c0-ab9e-d66e1ad768b0\") " pod="openstack/ceilometer-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.542093 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/266fe773-afd8-4703-9180-f325242bc850-config-data\") pod \"cinder-scheduler-0\" (UID: \"266fe773-afd8-4703-9180-f325242bc850\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.542112 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-log-httpd\") pod \"ceilometer-0\" (UID: \"c2a60c14-d830-48c0-ab9e-d66e1ad768b0\") " pod="openstack/ceilometer-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.542130 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/266fe773-afd8-4703-9180-f325242bc850-scripts\") pod \"cinder-scheduler-0\" (UID: \"266fe773-afd8-4703-9180-f325242bc850\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.542159 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4f7h\" (UniqueName: \"kubernetes.io/projected/266fe773-afd8-4703-9180-f325242bc850-kube-api-access-q4f7h\") pod \"cinder-scheduler-0\" (UID: \"266fe773-afd8-4703-9180-f325242bc850\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.542173 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/266fe773-afd8-4703-9180-f325242bc850-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"266fe773-afd8-4703-9180-f325242bc850\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.542201 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-scripts\") pod \"ceilometer-0\" (UID: \"c2a60c14-d830-48c0-ab9e-d66e1ad768b0\") " pod="openstack/ceilometer-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.542227 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-config-data\") pod \"ceilometer-0\" (UID: \"c2a60c14-d830-48c0-ab9e-d66e1ad768b0\") " pod="openstack/ceilometer-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.542324 4970 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c2a60c14-d830-48c0-ab9e-d66e1ad768b0\") " pod="openstack/ceilometer-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.542362 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/266fe773-afd8-4703-9180-f325242bc850-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"266fe773-afd8-4703-9180-f325242bc850\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.542386 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c2a60c14-d830-48c0-ab9e-d66e1ad768b0\") " pod="openstack/ceilometer-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.542406 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/266fe773-afd8-4703-9180-f325242bc850-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"266fe773-afd8-4703-9180-f325242bc850\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.543436 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-run-httpd\") pod \"ceilometer-0\" (UID: \"c2a60c14-d830-48c0-ab9e-d66e1ad768b0\") " pod="openstack/ceilometer-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.544590 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-log-httpd\") pod \"ceilometer-0\" (UID: \"c2a60c14-d830-48c0-ab9e-d66e1ad768b0\") " pod="openstack/ceilometer-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.545225 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/266fe773-afd8-4703-9180-f325242bc850-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"266fe773-afd8-4703-9180-f325242bc850\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.548683 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/266fe773-afd8-4703-9180-f325242bc850-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"266fe773-afd8-4703-9180-f325242bc850\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.549214 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c2a60c14-d830-48c0-ab9e-d66e1ad768b0\") " pod="openstack/ceilometer-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.551610 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-config-data\") pod \"ceilometer-0\" (UID: \"c2a60c14-d830-48c0-ab9e-d66e1ad768b0\") " pod="openstack/ceilometer-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.552517 4970 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/266fe773-afd8-4703-9180-f325242bc850-scripts\") pod \"cinder-scheduler-0\" (UID: \"266fe773-afd8-4703-9180-f325242bc850\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.552525 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c2a60c14-d830-48c0-ab9e-d66e1ad768b0\") " pod="openstack/ceilometer-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.553640 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-scripts\") pod \"ceilometer-0\" (UID: \"c2a60c14-d830-48c0-ab9e-d66e1ad768b0\") " pod="openstack/ceilometer-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.553700 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/266fe773-afd8-4703-9180-f325242bc850-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"266fe773-afd8-4703-9180-f325242bc850\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.558995 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.559566 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/266fe773-afd8-4703-9180-f325242bc850-config-data\") pod \"cinder-scheduler-0\" (UID: \"266fe773-afd8-4703-9180-f325242bc850\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.569262 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4f7h\" (UniqueName: \"kubernetes.io/projected/266fe773-afd8-4703-9180-f325242bc850-kube-api-access-q4f7h\") pod \"cinder-scheduler-0\" (UID: \"266fe773-afd8-4703-9180-f325242bc850\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.572968 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zglf\" (UniqueName: \"kubernetes.io/projected/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-kube-api-access-9zglf\") pod \"ceilometer-0\" (UID: \"c2a60c14-d830-48c0-ab9e-d66e1ad768b0\") " pod="openstack/ceilometer-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.617879 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.644304 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6513938a-4c2f-4751-9c72-3fafc20c5dc1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6513938a-4c2f-4751-9c72-3fafc20c5dc1\") " pod="openstack/cinder-api-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.644368 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5179fa9e-e5d0-4665-8356-ca6026fe2d64-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-7hh4d\" (UID: \"5179fa9e-e5d0-4665-8356-ca6026fe2d64\") " pod="openstack/dnsmasq-dns-5c9776ccc5-7hh4d" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.644402 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6513938a-4c2f-4751-9c72-3fafc20c5dc1-scripts\") pod \"cinder-api-0\" (UID: \"6513938a-4c2f-4751-9c72-3fafc20c5dc1\") " pod="openstack/cinder-api-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.644452 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5179fa9e-e5d0-4665-8356-ca6026fe2d64-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-7hh4d\" (UID: \"5179fa9e-e5d0-4665-8356-ca6026fe2d64\") " pod="openstack/dnsmasq-dns-5c9776ccc5-7hh4d" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.644508 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cgbr\" (UniqueName: \"kubernetes.io/projected/6513938a-4c2f-4751-9c72-3fafc20c5dc1-kube-api-access-4cgbr\") pod \"cinder-api-0\" (UID: \"6513938a-4c2f-4751-9c72-3fafc20c5dc1\") " pod="openstack/cinder-api-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.644548 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6513938a-4c2f-4751-9c72-3fafc20c5dc1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6513938a-4c2f-4751-9c72-3fafc20c5dc1\") " pod="openstack/cinder-api-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.644580 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5179fa9e-e5d0-4665-8356-ca6026fe2d64-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-7hh4d\" (UID: \"5179fa9e-e5d0-4665-8356-ca6026fe2d64\") " pod="openstack/dnsmasq-dns-5c9776ccc5-7hh4d" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.644625 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5179fa9e-e5d0-4665-8356-ca6026fe2d64-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-7hh4d\" (UID: \"5179fa9e-e5d0-4665-8356-ca6026fe2d64\") " pod="openstack/dnsmasq-dns-5c9776ccc5-7hh4d" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.644652 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6513938a-4c2f-4751-9c72-3fafc20c5dc1-config-data\") pod \"cinder-api-0\" (UID: \"6513938a-4c2f-4751-9c72-3fafc20c5dc1\") " pod="openstack/cinder-api-0" 
Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.644671 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6513938a-4c2f-4751-9c72-3fafc20c5dc1-logs\") pod \"cinder-api-0\" (UID: \"6513938a-4c2f-4751-9c72-3fafc20c5dc1\") " pod="openstack/cinder-api-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.644698 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5179fa9e-e5d0-4665-8356-ca6026fe2d64-config\") pod \"dnsmasq-dns-5c9776ccc5-7hh4d\" (UID: \"5179fa9e-e5d0-4665-8356-ca6026fe2d64\") " pod="openstack/dnsmasq-dns-5c9776ccc5-7hh4d" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.644803 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6513938a-4c2f-4751-9c72-3fafc20c5dc1-config-data-custom\") pod \"cinder-api-0\" (UID: \"6513938a-4c2f-4751-9c72-3fafc20c5dc1\") " pod="openstack/cinder-api-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.644913 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qxrr\" (UniqueName: \"kubernetes.io/projected/5179fa9e-e5d0-4665-8356-ca6026fe2d64-kube-api-access-2qxrr\") pod \"dnsmasq-dns-5c9776ccc5-7hh4d\" (UID: \"5179fa9e-e5d0-4665-8356-ca6026fe2d64\") " pod="openstack/dnsmasq-dns-5c9776ccc5-7hh4d" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.651739 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.749781 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qxrr\" (UniqueName: \"kubernetes.io/projected/5179fa9e-e5d0-4665-8356-ca6026fe2d64-kube-api-access-2qxrr\") pod \"dnsmasq-dns-5c9776ccc5-7hh4d\" (UID: \"5179fa9e-e5d0-4665-8356-ca6026fe2d64\") " pod="openstack/dnsmasq-dns-5c9776ccc5-7hh4d" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.749889 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6513938a-4c2f-4751-9c72-3fafc20c5dc1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6513938a-4c2f-4751-9c72-3fafc20c5dc1\") " pod="openstack/cinder-api-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.749928 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5179fa9e-e5d0-4665-8356-ca6026fe2d64-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-7hh4d\" (UID: \"5179fa9e-e5d0-4665-8356-ca6026fe2d64\") " pod="openstack/dnsmasq-dns-5c9776ccc5-7hh4d" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.749964 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6513938a-4c2f-4751-9c72-3fafc20c5dc1-scripts\") pod \"cinder-api-0\" (UID: \"6513938a-4c2f-4751-9c72-3fafc20c5dc1\") " pod="openstack/cinder-api-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.749993 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6513938a-4c2f-4751-9c72-3fafc20c5dc1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"6513938a-4c2f-4751-9c72-3fafc20c5dc1\") " 
pod="openstack/cinder-api-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.750053 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5179fa9e-e5d0-4665-8356-ca6026fe2d64-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-7hh4d\" (UID: \"5179fa9e-e5d0-4665-8356-ca6026fe2d64\") " pod="openstack/dnsmasq-dns-5c9776ccc5-7hh4d" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.750180 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cgbr\" (UniqueName: \"kubernetes.io/projected/6513938a-4c2f-4751-9c72-3fafc20c5dc1-kube-api-access-4cgbr\") pod \"cinder-api-0\" (UID: \"6513938a-4c2f-4751-9c72-3fafc20c5dc1\") " pod="openstack/cinder-api-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.750261 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6513938a-4c2f-4751-9c72-3fafc20c5dc1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6513938a-4c2f-4751-9c72-3fafc20c5dc1\") " pod="openstack/cinder-api-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.750306 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5179fa9e-e5d0-4665-8356-ca6026fe2d64-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-7hh4d\" (UID: \"5179fa9e-e5d0-4665-8356-ca6026fe2d64\") " pod="openstack/dnsmasq-dns-5c9776ccc5-7hh4d" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.750367 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5179fa9e-e5d0-4665-8356-ca6026fe2d64-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-7hh4d\" (UID: \"5179fa9e-e5d0-4665-8356-ca6026fe2d64\") " pod="openstack/dnsmasq-dns-5c9776ccc5-7hh4d" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.750410 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6513938a-4c2f-4751-9c72-3fafc20c5dc1-config-data\") pod \"cinder-api-0\" (UID: \"6513938a-4c2f-4751-9c72-3fafc20c5dc1\") " pod="openstack/cinder-api-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.750443 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6513938a-4c2f-4751-9c72-3fafc20c5dc1-logs\") pod \"cinder-api-0\" (UID: \"6513938a-4c2f-4751-9c72-3fafc20c5dc1\") " pod="openstack/cinder-api-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.750471 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5179fa9e-e5d0-4665-8356-ca6026fe2d64-config\") pod \"dnsmasq-dns-5c9776ccc5-7hh4d\" (UID: \"5179fa9e-e5d0-4665-8356-ca6026fe2d64\") " pod="openstack/dnsmasq-dns-5c9776ccc5-7hh4d" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.750500 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6513938a-4c2f-4751-9c72-3fafc20c5dc1-config-data-custom\") pod \"cinder-api-0\" (UID: \"6513938a-4c2f-4751-9c72-3fafc20c5dc1\") " pod="openstack/cinder-api-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.751292 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/5179fa9e-e5d0-4665-8356-ca6026fe2d64-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-7hh4d\" (UID: \"5179fa9e-e5d0-4665-8356-ca6026fe2d64\") " pod="openstack/dnsmasq-dns-5c9776ccc5-7hh4d" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.752068 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5179fa9e-e5d0-4665-8356-ca6026fe2d64-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-7hh4d\" (UID: \"5179fa9e-e5d0-4665-8356-ca6026fe2d64\") " pod="openstack/dnsmasq-dns-5c9776ccc5-7hh4d" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.752165 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5179fa9e-e5d0-4665-8356-ca6026fe2d64-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-7hh4d\" (UID: \"5179fa9e-e5d0-4665-8356-ca6026fe2d64\") " pod="openstack/dnsmasq-dns-5c9776ccc5-7hh4d" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.752822 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6513938a-4c2f-4751-9c72-3fafc20c5dc1-logs\") pod \"cinder-api-0\" (UID: \"6513938a-4c2f-4751-9c72-3fafc20c5dc1\") " pod="openstack/cinder-api-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.753409 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5179fa9e-e5d0-4665-8356-ca6026fe2d64-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-7hh4d\" (UID: \"5179fa9e-e5d0-4665-8356-ca6026fe2d64\") " pod="openstack/dnsmasq-dns-5c9776ccc5-7hh4d" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.754102 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5179fa9e-e5d0-4665-8356-ca6026fe2d64-config\") pod \"dnsmasq-dns-5c9776ccc5-7hh4d\" (UID: \"5179fa9e-e5d0-4665-8356-ca6026fe2d64\") " pod="openstack/dnsmasq-dns-5c9776ccc5-7hh4d" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.771814 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6513938a-4c2f-4751-9c72-3fafc20c5dc1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"6513938a-4c2f-4751-9c72-3fafc20c5dc1\") " pod="openstack/cinder-api-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.781866 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cgbr\" (UniqueName: \"kubernetes.io/projected/6513938a-4c2f-4751-9c72-3fafc20c5dc1-kube-api-access-4cgbr\") pod \"cinder-api-0\" (UID: \"6513938a-4c2f-4751-9c72-3fafc20c5dc1\") " pod="openstack/cinder-api-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.783315 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qxrr\" (UniqueName: \"kubernetes.io/projected/5179fa9e-e5d0-4665-8356-ca6026fe2d64-kube-api-access-2qxrr\") pod \"dnsmasq-dns-5c9776ccc5-7hh4d\" (UID: \"5179fa9e-e5d0-4665-8356-ca6026fe2d64\") " pod="openstack/dnsmasq-dns-5c9776ccc5-7hh4d" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.787476 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6513938a-4c2f-4751-9c72-3fafc20c5dc1-config-data-custom\") pod \"cinder-api-0\" (UID: \"6513938a-4c2f-4751-9c72-3fafc20c5dc1\") " pod="openstack/cinder-api-0" Dec 09 12:29:06 crc kubenswrapper[4970]: 
I1209 12:29:06.788206 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6513938a-4c2f-4751-9c72-3fafc20c5dc1-config-data\") pod \"cinder-api-0\" (UID: \"6513938a-4c2f-4751-9c72-3fafc20c5dc1\") " pod="openstack/cinder-api-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.790692 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6513938a-4c2f-4751-9c72-3fafc20c5dc1-scripts\") pod \"cinder-api-0\" (UID: \"6513938a-4c2f-4751-9c72-3fafc20c5dc1\") " pod="openstack/cinder-api-0" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.965020 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-7hh4d" Dec 09 12:29:06 crc kubenswrapper[4970]: I1209 12:29:06.980750 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 09 12:29:07 crc kubenswrapper[4970]: I1209 12:29:07.018140 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-9nhtn" event={"ID":"5205a8c4-db65-475f-bc74-a8bece0fe6cc","Type":"ContainerStarted","Data":"d58e474f753ff2115665fc26aa94d84a30f6d94fc115b2c89f0af7106f7a3e2f"} Dec 09 12:29:07 crc kubenswrapper[4970]: I1209 12:29:07.020658 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b7745d654-pr4dm" event={"ID":"4ccb0f39-43d9-4de1-b0ee-cc108e90ce1a","Type":"ContainerStarted","Data":"5e2a7d90d3f83a695671b48e13b13bb0d03396e7c968f06afc73e337a9908559"} Dec 09 12:29:07 crc kubenswrapper[4970]: I1209 12:29:07.021781 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-659c886f58-rf5rp" event={"ID":"f85130f1-2801-40b1-b1d2-3cfe0b6aee57","Type":"ContainerStarted","Data":"f789c9ba5d5a0f9aa3301c911112594e569bc3b6176181a79d1f79cb4f794202"} Dec 09 12:29:07 crc kubenswrapper[4970]: I1209 12:29:07.435277 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:29:07 crc kubenswrapper[4970]: W1209 12:29:07.536928 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2a60c14_d830_48c0_ab9e_d66e1ad768b0.slice/crio-8deef2b1e60d5a2badb2d9ec3b1f345580b217c62026728ec02a5f70835d88c4 WatchSource:0}: Error finding container 8deef2b1e60d5a2badb2d9ec3b1f345580b217c62026728ec02a5f70835d88c4: Status 404 returned error can't find the container with id 8deef2b1e60d5a2badb2d9ec3b1f345580b217c62026728ec02a5f70835d88c4 Dec 09 12:29:07 crc kubenswrapper[4970]: I1209 12:29:07.539464 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 12:29:07 crc kubenswrapper[4970]: I1209 12:29:07.804372 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-7hh4d"] Dec 09 12:29:07 crc kubenswrapper[4970]: I1209 12:29:07.827934 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eff0282f-3775-4904-8937-c9e16749e3e8" path="/var/lib/kubelet/pods/eff0282f-3775-4904-8937-c9e16749e3e8/volumes" Dec 09 12:29:07 crc kubenswrapper[4970]: I1209 12:29:07.923203 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 09 12:29:08 crc kubenswrapper[4970]: I1209 12:29:08.034701 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-659c886f58-rf5rp" 
event={"ID":"f85130f1-2801-40b1-b1d2-3cfe0b6aee57","Type":"ContainerStarted","Data":"9d76518a82e06dc234546f740a382a793257f3a107e2a2e937e90f24de4fe6a9"} Dec 09 12:29:08 crc kubenswrapper[4970]: I1209 12:29:08.036586 4970 generic.go:334] "Generic (PLEG): container finished" podID="5205a8c4-db65-475f-bc74-a8bece0fe6cc" containerID="83f80196b10527906d1cbf274b409379e325b84df4f2aa5fff30254b7a201690" exitCode=0 Dec 09 12:29:08 crc kubenswrapper[4970]: I1209 12:29:08.036653 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-9nhtn" event={"ID":"5205a8c4-db65-475f-bc74-a8bece0fe6cc","Type":"ContainerDied","Data":"83f80196b10527906d1cbf274b409379e325b84df4f2aa5fff30254b7a201690"} Dec 09 12:29:08 crc kubenswrapper[4970]: I1209 12:29:08.038611 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2a60c14-d830-48c0-ab9e-d66e1ad768b0","Type":"ContainerStarted","Data":"8deef2b1e60d5a2badb2d9ec3b1f345580b217c62026728ec02a5f70835d88c4"} Dec 09 12:29:08 crc kubenswrapper[4970]: W1209 12:29:08.112201 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6513938a_4c2f_4751_9c72_3fafc20c5dc1.slice/crio-d9f14891295bc9e9e5696d0e2d9d6b4079b2f05661f5d7bc19cc3cdf41278575 WatchSource:0}: Error finding container d9f14891295bc9e9e5696d0e2d9d6b4079b2f05661f5d7bc19cc3cdf41278575: Status 404 returned error can't find the container with id d9f14891295bc9e9e5696d0e2d9d6b4079b2f05661f5d7bc19cc3cdf41278575 Dec 09 12:29:08 crc kubenswrapper[4970]: I1209 12:29:08.815636 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-9nhtn" Dec 09 12:29:08 crc kubenswrapper[4970]: I1209 12:29:08.932171 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5205a8c4-db65-475f-bc74-a8bece0fe6cc-config\") pod \"5205a8c4-db65-475f-bc74-a8bece0fe6cc\" (UID: \"5205a8c4-db65-475f-bc74-a8bece0fe6cc\") " Dec 09 12:29:08 crc kubenswrapper[4970]: I1209 12:29:08.932480 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5205a8c4-db65-475f-bc74-a8bece0fe6cc-ovsdbserver-sb\") pod \"5205a8c4-db65-475f-bc74-a8bece0fe6cc\" (UID: \"5205a8c4-db65-475f-bc74-a8bece0fe6cc\") " Dec 09 12:29:08 crc kubenswrapper[4970]: I1209 12:29:08.932527 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5205a8c4-db65-475f-bc74-a8bece0fe6cc-ovsdbserver-nb\") pod \"5205a8c4-db65-475f-bc74-a8bece0fe6cc\" (UID: \"5205a8c4-db65-475f-bc74-a8bece0fe6cc\") " Dec 09 12:29:08 crc kubenswrapper[4970]: I1209 12:29:08.932552 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5205a8c4-db65-475f-bc74-a8bece0fe6cc-dns-svc\") pod \"5205a8c4-db65-475f-bc74-a8bece0fe6cc\" (UID: \"5205a8c4-db65-475f-bc74-a8bece0fe6cc\") " Dec 09 12:29:08 crc kubenswrapper[4970]: I1209 12:29:08.932579 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5205a8c4-db65-475f-bc74-a8bece0fe6cc-dns-swift-storage-0\") pod \"5205a8c4-db65-475f-bc74-a8bece0fe6cc\" (UID: \"5205a8c4-db65-475f-bc74-a8bece0fe6cc\") " Dec 09 12:29:08 crc kubenswrapper[4970]: I1209 
12:29:08.932635 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9cxj\" (UniqueName: \"kubernetes.io/projected/5205a8c4-db65-475f-bc74-a8bece0fe6cc-kube-api-access-s9cxj\") pod \"5205a8c4-db65-475f-bc74-a8bece0fe6cc\" (UID: \"5205a8c4-db65-475f-bc74-a8bece0fe6cc\") " Dec 09 12:29:08 crc kubenswrapper[4970]: I1209 12:29:08.936949 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5205a8c4-db65-475f-bc74-a8bece0fe6cc-kube-api-access-s9cxj" (OuterVolumeSpecName: "kube-api-access-s9cxj") pod "5205a8c4-db65-475f-bc74-a8bece0fe6cc" (UID: "5205a8c4-db65-475f-bc74-a8bece0fe6cc"). InnerVolumeSpecName "kube-api-access-s9cxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:29:08 crc kubenswrapper[4970]: I1209 12:29:08.962734 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5205a8c4-db65-475f-bc74-a8bece0fe6cc-config" (OuterVolumeSpecName: "config") pod "5205a8c4-db65-475f-bc74-a8bece0fe6cc" (UID: "5205a8c4-db65-475f-bc74-a8bece0fe6cc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:29:08 crc kubenswrapper[4970]: I1209 12:29:08.985308 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5205a8c4-db65-475f-bc74-a8bece0fe6cc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5205a8c4-db65-475f-bc74-a8bece0fe6cc" (UID: "5205a8c4-db65-475f-bc74-a8bece0fe6cc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:29:09 crc kubenswrapper[4970]: I1209 12:29:09.001027 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5205a8c4-db65-475f-bc74-a8bece0fe6cc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5205a8c4-db65-475f-bc74-a8bece0fe6cc" (UID: "5205a8c4-db65-475f-bc74-a8bece0fe6cc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:29:09 crc kubenswrapper[4970]: I1209 12:29:09.013124 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5205a8c4-db65-475f-bc74-a8bece0fe6cc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5205a8c4-db65-475f-bc74-a8bece0fe6cc" (UID: "5205a8c4-db65-475f-bc74-a8bece0fe6cc"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:29:09 crc kubenswrapper[4970]: I1209 12:29:09.031572 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5205a8c4-db65-475f-bc74-a8bece0fe6cc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5205a8c4-db65-475f-bc74-a8bece0fe6cc" (UID: "5205a8c4-db65-475f-bc74-a8bece0fe6cc"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:29:09 crc kubenswrapper[4970]: I1209 12:29:09.035110 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5205a8c4-db65-475f-bc74-a8bece0fe6cc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:09 crc kubenswrapper[4970]: I1209 12:29:09.035137 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5205a8c4-db65-475f-bc74-a8bece0fe6cc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:09 crc kubenswrapper[4970]: I1209 12:29:09.035150 4970 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5205a8c4-db65-475f-bc74-a8bece0fe6cc-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:09 crc kubenswrapper[4970]: I1209 12:29:09.035162 4970 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5205a8c4-db65-475f-bc74-a8bece0fe6cc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:09 crc kubenswrapper[4970]: I1209 12:29:09.035174 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9cxj\" (UniqueName: \"kubernetes.io/projected/5205a8c4-db65-475f-bc74-a8bece0fe6cc-kube-api-access-s9cxj\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:09 crc kubenswrapper[4970]: I1209 12:29:09.035186 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5205a8c4-db65-475f-bc74-a8bece0fe6cc-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:09 crc kubenswrapper[4970]: I1209 12:29:09.060172 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6513938a-4c2f-4751-9c72-3fafc20c5dc1","Type":"ContainerStarted","Data":"d9f14891295bc9e9e5696d0e2d9d6b4079b2f05661f5d7bc19cc3cdf41278575"} Dec 09 12:29:09 crc kubenswrapper[4970]: I1209 12:29:09.064930 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-9nhtn" event={"ID":"5205a8c4-db65-475f-bc74-a8bece0fe6cc","Type":"ContainerDied","Data":"d58e474f753ff2115665fc26aa94d84a30f6d94fc115b2c89f0af7106f7a3e2f"} Dec 09 12:29:09 crc kubenswrapper[4970]: I1209 12:29:09.064967 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-9nhtn" Dec 09 12:29:09 crc kubenswrapper[4970]: I1209 12:29:09.064997 4970 scope.go:117] "RemoveContainer" containerID="83f80196b10527906d1cbf274b409379e325b84df4f2aa5fff30254b7a201690" Dec 09 12:29:09 crc kubenswrapper[4970]: I1209 12:29:09.070569 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-7hh4d" event={"ID":"5179fa9e-e5d0-4665-8356-ca6026fe2d64","Type":"ContainerStarted","Data":"e296d3378dece591bdb82103197c2e3648196eeec162abceca0ab3358c7cecd4"} Dec 09 12:29:09 crc kubenswrapper[4970]: I1209 12:29:09.073372 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"266fe773-afd8-4703-9180-f325242bc850","Type":"ContainerStarted","Data":"162663de1225f0823c76c454733783754fc8a463dde57f3a8dc3ab045edc887f"} Dec 09 12:29:09 crc kubenswrapper[4970]: I1209 12:29:09.227828 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-9nhtn"] Dec 09 12:29:09 crc kubenswrapper[4970]: I1209 12:29:09.243606 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-9nhtn"] Dec 09 12:29:09 crc kubenswrapper[4970]: I1209 12:29:09.827996 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5205a8c4-db65-475f-bc74-a8bece0fe6cc" path="/var/lib/kubelet/pods/5205a8c4-db65-475f-bc74-a8bece0fe6cc/volumes" Dec 09 12:29:10 crc kubenswrapper[4970]: I1209 12:29:10.085236 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7b7b55ddff-q4sdr" event={"ID":"21d879fa-8dd8-4177-88c4-63e6a5689826","Type":"ContainerStarted","Data":"fe964c150ae3597ac4fc84f5bd5cad10cb41163b030dca8b0ac60acf4c4a10c5"} Dec 09 12:29:10 crc kubenswrapper[4970]: I1209 12:29:10.085287 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7b7b55ddff-q4sdr" event={"ID":"21d879fa-8dd8-4177-88c4-63e6a5689826","Type":"ContainerStarted","Data":"3e158c93599ca1efba55c98a2c0286f2d8e0b8fa3a0732696436e5ae0bde3a36"} Dec 09 12:29:10 crc kubenswrapper[4970]: I1209 12:29:10.087695 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b7745d654-pr4dm" event={"ID":"4ccb0f39-43d9-4de1-b0ee-cc108e90ce1a","Type":"ContainerStarted","Data":"b98c45c91b6ea843b57dfcbd384c0261d8caf9385aae0ca49513c26f55ecdfe2"} Dec 09 12:29:10 crc kubenswrapper[4970]: I1209 12:29:10.087794 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-b7745d654-pr4dm" event={"ID":"4ccb0f39-43d9-4de1-b0ee-cc108e90ce1a","Type":"ContainerStarted","Data":"75bfef8a2a99d4f6b94ffe8680732e44b54bc646381afdea2bc77e5f98f81782"} Dec 09 12:29:10 crc kubenswrapper[4970]: I1209 12:29:10.089735 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-659c886f58-rf5rp" event={"ID":"f85130f1-2801-40b1-b1d2-3cfe0b6aee57","Type":"ContainerStarted","Data":"2574a2f04562eee00b2f34a5279868eea923424d941c81e70f8bf3311273d595"} Dec 09 12:29:10 crc kubenswrapper[4970]: I1209 12:29:10.089860 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-659c886f58-rf5rp" Dec 09 12:29:10 crc kubenswrapper[4970]: I1209 12:29:10.092048 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6513938a-4c2f-4751-9c72-3fafc20c5dc1","Type":"ContainerStarted","Data":"a0ef3721d3eadfa00812b693bda48d0fef4c695e7d1f7c875be6012c394ca1a6"} Dec 09 12:29:10 
crc kubenswrapper[4970]: I1209 12:29:10.101661 4970 generic.go:334] "Generic (PLEG): container finished" podID="5179fa9e-e5d0-4665-8356-ca6026fe2d64" containerID="cab81b9c3a5afd717961bb6a2b5497c5a1a2a053f49609ee339d70a88dbb6657" exitCode=0 Dec 09 12:29:10 crc kubenswrapper[4970]: I1209 12:29:10.102346 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-7hh4d" event={"ID":"5179fa9e-e5d0-4665-8356-ca6026fe2d64","Type":"ContainerDied","Data":"cab81b9c3a5afd717961bb6a2b5497c5a1a2a053f49609ee339d70a88dbb6657"} Dec 09 12:29:10 crc kubenswrapper[4970]: I1209 12:29:10.107025 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2a60c14-d830-48c0-ab9e-d66e1ad768b0","Type":"ContainerStarted","Data":"981ff28711a9d13ecdcb4b17d9c5aea021cb200bde0bdde4fddc84731851afb3"} Dec 09 12:29:10 crc kubenswrapper[4970]: I1209 12:29:10.147461 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7b7b55ddff-q4sdr" podStartSLOduration=3.785283652 podStartE2EDuration="7.147438718s" podCreationTimestamp="2025-12-09 12:29:03 +0000 UTC" firstStartedPulling="2025-12-09 12:29:05.569954108 +0000 UTC m=+1358.130435159" lastFinishedPulling="2025-12-09 12:29:08.932109174 +0000 UTC m=+1361.492590225" observedRunningTime="2025-12-09 12:29:10.135589644 +0000 UTC m=+1362.696070695" watchObservedRunningTime="2025-12-09 12:29:10.147438718 +0000 UTC m=+1362.707919759" Dec 09 12:29:10 crc kubenswrapper[4970]: I1209 12:29:10.237631 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-b7745d654-pr4dm" podStartSLOduration=3.334311474 podStartE2EDuration="6.23761089s" podCreationTimestamp="2025-12-09 12:29:04 +0000 UTC" firstStartedPulling="2025-12-09 12:29:06.029006354 +0000 UTC m=+1358.589487415" lastFinishedPulling="2025-12-09 12:29:08.93230577 +0000 UTC m=+1361.492786831" observedRunningTime="2025-12-09 12:29:10.205728785 +0000 UTC m=+1362.766209826" watchObservedRunningTime="2025-12-09 12:29:10.23761089 +0000 UTC m=+1362.798091931" Dec 09 12:29:10 crc kubenswrapper[4970]: I1209 12:29:10.348760 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-659c886f58-rf5rp" podStartSLOduration=6.348739398 podStartE2EDuration="6.348739398s" podCreationTimestamp="2025-12-09 12:29:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:29:10.288822529 +0000 UTC m=+1362.849303580" watchObservedRunningTime="2025-12-09 12:29:10.348739398 +0000 UTC m=+1362.909220449" Dec 09 12:29:11 crc kubenswrapper[4970]: I1209 12:29:11.092693 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 09 12:29:11 crc kubenswrapper[4970]: I1209 12:29:11.142744 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-7hh4d" event={"ID":"5179fa9e-e5d0-4665-8356-ca6026fe2d64","Type":"ContainerStarted","Data":"2c1629959448a932d8c5f58b5a90152025aec271a215042fdb9fc1828c3dc759"} Dec 09 12:29:11 crc kubenswrapper[4970]: I1209 12:29:11.143083 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-7hh4d" Dec 09 12:29:11 crc kubenswrapper[4970]: I1209 12:29:11.167354 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"c2a60c14-d830-48c0-ab9e-d66e1ad768b0","Type":"ContainerStarted","Data":"8192987060e21cf2ff856a260680cd25061367624c144b36669013a836276c92"} Dec 09 12:29:11 crc kubenswrapper[4970]: I1209 12:29:11.173326 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"266fe773-afd8-4703-9180-f325242bc850","Type":"ContainerStarted","Data":"0b694e1a163e658d09bb45ac95e0347048bda76e2b601ec03add003bcd297d6e"} Dec 09 12:29:11 crc kubenswrapper[4970]: I1209 12:29:11.174279 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-659c886f58-rf5rp" Dec 09 12:29:11 crc kubenswrapper[4970]: I1209 12:29:11.177918 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-7hh4d" podStartSLOduration=5.17789498 podStartE2EDuration="5.17789498s" podCreationTimestamp="2025-12-09 12:29:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:29:11.167518055 +0000 UTC m=+1363.727999106" watchObservedRunningTime="2025-12-09 12:29:11.17789498 +0000 UTC m=+1363.738376031" Dec 09 12:29:11 crc kubenswrapper[4970]: I1209 12:29:11.756058 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7c8c76f86b-vkq6w"] Dec 09 12:29:11 crc kubenswrapper[4970]: E1209 12:29:11.757129 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5205a8c4-db65-475f-bc74-a8bece0fe6cc" containerName="init" Dec 09 12:29:11 crc kubenswrapper[4970]: I1209 12:29:11.757149 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="5205a8c4-db65-475f-bc74-a8bece0fe6cc" containerName="init" Dec 09 12:29:11 crc kubenswrapper[4970]: I1209 12:29:11.757367 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="5205a8c4-db65-475f-bc74-a8bece0fe6cc" containerName="init" Dec 09 12:29:11 crc kubenswrapper[4970]: I1209 12:29:11.759851 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7c8c76f86b-vkq6w" Dec 09 12:29:11 crc kubenswrapper[4970]: I1209 12:29:11.762895 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 09 12:29:11 crc kubenswrapper[4970]: I1209 12:29:11.763278 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 09 12:29:11 crc kubenswrapper[4970]: I1209 12:29:11.774746 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7c8c76f86b-vkq6w"] Dec 09 12:29:11 crc kubenswrapper[4970]: I1209 12:29:11.923808 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11-config-data-custom\") pod \"barbican-api-7c8c76f86b-vkq6w\" (UID: \"0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11\") " pod="openstack/barbican-api-7c8c76f86b-vkq6w" Dec 09 12:29:11 crc kubenswrapper[4970]: I1209 12:29:11.923878 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11-logs\") pod \"barbican-api-7c8c76f86b-vkq6w\" (UID: \"0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11\") " pod="openstack/barbican-api-7c8c76f86b-vkq6w" Dec 09 12:29:11 crc kubenswrapper[4970]: I1209 12:29:11.924011 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kscl\" (UniqueName: \"kubernetes.io/projected/0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11-kube-api-access-7kscl\") pod \"barbican-api-7c8c76f86b-vkq6w\" (UID: \"0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11\") " pod="openstack/barbican-api-7c8c76f86b-vkq6w" Dec 09 12:29:11 crc kubenswrapper[4970]: I1209 12:29:11.924032 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11-internal-tls-certs\") pod \"barbican-api-7c8c76f86b-vkq6w\" (UID: \"0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11\") " pod="openstack/barbican-api-7c8c76f86b-vkq6w" Dec 09 12:29:11 crc kubenswrapper[4970]: I1209 12:29:11.924069 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11-public-tls-certs\") pod \"barbican-api-7c8c76f86b-vkq6w\" (UID: \"0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11\") " pod="openstack/barbican-api-7c8c76f86b-vkq6w" Dec 09 12:29:11 crc kubenswrapper[4970]: I1209 12:29:11.924097 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11-combined-ca-bundle\") pod \"barbican-api-7c8c76f86b-vkq6w\" (UID: \"0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11\") " pod="openstack/barbican-api-7c8c76f86b-vkq6w" Dec 09 12:29:11 crc kubenswrapper[4970]: I1209 12:29:11.924167 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11-config-data\") pod \"barbican-api-7c8c76f86b-vkq6w\" (UID: \"0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11\") " pod="openstack/barbican-api-7c8c76f86b-vkq6w" Dec 09 12:29:12 crc kubenswrapper[4970]: I1209 12:29:12.025673 4970 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11-config-data-custom\") pod \"barbican-api-7c8c76f86b-vkq6w\" (UID: \"0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11\") " pod="openstack/barbican-api-7c8c76f86b-vkq6w" Dec 09 12:29:12 crc kubenswrapper[4970]: I1209 12:29:12.025747 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11-logs\") pod \"barbican-api-7c8c76f86b-vkq6w\" (UID: \"0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11\") " pod="openstack/barbican-api-7c8c76f86b-vkq6w" Dec 09 12:29:12 crc kubenswrapper[4970]: I1209 12:29:12.025842 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kscl\" (UniqueName: \"kubernetes.io/projected/0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11-kube-api-access-7kscl\") pod \"barbican-api-7c8c76f86b-vkq6w\" (UID: \"0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11\") " pod="openstack/barbican-api-7c8c76f86b-vkq6w" Dec 09 12:29:12 crc kubenswrapper[4970]: I1209 12:29:12.025867 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11-internal-tls-certs\") pod \"barbican-api-7c8c76f86b-vkq6w\" (UID: \"0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11\") " pod="openstack/barbican-api-7c8c76f86b-vkq6w" Dec 09 12:29:12 crc kubenswrapper[4970]: I1209 12:29:12.025901 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11-public-tls-certs\") pod \"barbican-api-7c8c76f86b-vkq6w\" (UID: \"0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11\") " pod="openstack/barbican-api-7c8c76f86b-vkq6w" Dec 09 12:29:12 crc kubenswrapper[4970]: I1209 12:29:12.025927 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11-combined-ca-bundle\") pod \"barbican-api-7c8c76f86b-vkq6w\" (UID: \"0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11\") " pod="openstack/barbican-api-7c8c76f86b-vkq6w" Dec 09 12:29:12 crc kubenswrapper[4970]: I1209 12:29:12.025995 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11-config-data\") pod \"barbican-api-7c8c76f86b-vkq6w\" (UID: \"0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11\") " pod="openstack/barbican-api-7c8c76f86b-vkq6w" Dec 09 12:29:12 crc kubenswrapper[4970]: I1209 12:29:12.029402 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11-logs\") pod \"barbican-api-7c8c76f86b-vkq6w\" (UID: \"0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11\") " pod="openstack/barbican-api-7c8c76f86b-vkq6w" Dec 09 12:29:12 crc kubenswrapper[4970]: I1209 12:29:12.031304 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11-config-data\") pod \"barbican-api-7c8c76f86b-vkq6w\" (UID: \"0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11\") " pod="openstack/barbican-api-7c8c76f86b-vkq6w" Dec 09 12:29:12 crc kubenswrapper[4970]: I1209 12:29:12.033927 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11-config-data-custom\") pod \"barbican-api-7c8c76f86b-vkq6w\" (UID: \"0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11\") " pod="openstack/barbican-api-7c8c76f86b-vkq6w" Dec 09 12:29:12 crc kubenswrapper[4970]: I1209 12:29:12.041571 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11-internal-tls-certs\") pod \"barbican-api-7c8c76f86b-vkq6w\" (UID: \"0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11\") " pod="openstack/barbican-api-7c8c76f86b-vkq6w" Dec 09 12:29:12 crc kubenswrapper[4970]: I1209 12:29:12.048039 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11-combined-ca-bundle\") pod \"barbican-api-7c8c76f86b-vkq6w\" (UID: \"0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11\") " pod="openstack/barbican-api-7c8c76f86b-vkq6w" Dec 09 12:29:12 crc kubenswrapper[4970]: I1209 12:29:12.048910 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11-public-tls-certs\") pod \"barbican-api-7c8c76f86b-vkq6w\" (UID: \"0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11\") " pod="openstack/barbican-api-7c8c76f86b-vkq6w" Dec 09 12:29:12 crc kubenswrapper[4970]: I1209 12:29:12.059986 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kscl\" (UniqueName: \"kubernetes.io/projected/0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11-kube-api-access-7kscl\") pod \"barbican-api-7c8c76f86b-vkq6w\" (UID: \"0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11\") " pod="openstack/barbican-api-7c8c76f86b-vkq6w" Dec 09 12:29:12 crc kubenswrapper[4970]: I1209 12:29:12.083809 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7c8c76f86b-vkq6w" Dec 09 12:29:12 crc kubenswrapper[4970]: I1209 12:29:12.200333 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6513938a-4c2f-4751-9c72-3fafc20c5dc1","Type":"ContainerStarted","Data":"498227165cecc483621aa3ee25964f7529ce7fc7289e49e6fc391b7482eb65ae"} Dec 09 12:29:12 crc kubenswrapper[4970]: I1209 12:29:12.200493 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="6513938a-4c2f-4751-9c72-3fafc20c5dc1" containerName="cinder-api-log" containerID="cri-o://a0ef3721d3eadfa00812b693bda48d0fef4c695e7d1f7c875be6012c394ca1a6" gracePeriod=30 Dec 09 12:29:12 crc kubenswrapper[4970]: I1209 12:29:12.200574 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 09 12:29:12 crc kubenswrapper[4970]: I1209 12:29:12.200960 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="6513938a-4c2f-4751-9c72-3fafc20c5dc1" containerName="cinder-api" containerID="cri-o://498227165cecc483621aa3ee25964f7529ce7fc7289e49e6fc391b7482eb65ae" gracePeriod=30 Dec 09 12:29:12 crc kubenswrapper[4970]: I1209 12:29:12.210176 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"266fe773-afd8-4703-9180-f325242bc850","Type":"ContainerStarted","Data":"dbb38a47478ec9b2bf950fab5d94532e42e95519680e68965d217382d76aa793"} Dec 09 12:29:12 crc kubenswrapper[4970]: I1209 12:29:12.238791 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.238773308 podStartE2EDuration="6.238773308s" podCreationTimestamp="2025-12-09 12:29:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:29:12.231417723 +0000 UTC m=+1364.791898774" watchObservedRunningTime="2025-12-09 12:29:12.238773308 +0000 UTC m=+1364.799254359" Dec 09 12:29:12 crc kubenswrapper[4970]: I1209 12:29:12.273586 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.305678439 podStartE2EDuration="6.27356s" podCreationTimestamp="2025-12-09 12:29:06 +0000 UTC" firstStartedPulling="2025-12-09 12:29:08.111782537 +0000 UTC m=+1360.672263588" lastFinishedPulling="2025-12-09 12:29:09.079664098 +0000 UTC m=+1361.640145149" observedRunningTime="2025-12-09 12:29:12.252705357 +0000 UTC m=+1364.813186408" watchObservedRunningTime="2025-12-09 12:29:12.27356 +0000 UTC m=+1364.834041051" Dec 09 12:29:12 crc kubenswrapper[4970]: I1209 12:29:12.753176 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7c8c76f86b-vkq6w"] Dec 09 12:29:13 crc kubenswrapper[4970]: I1209 12:29:13.280500 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6d7d95f968-k2gh2" Dec 09 12:29:13 crc kubenswrapper[4970]: I1209 12:29:13.382553 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2a60c14-d830-48c0-ab9e-d66e1ad768b0","Type":"ContainerStarted","Data":"686b2bcf0b8fb6b546d34bb72b7518713b07abea4fe73d95e3f92a7d6fc59c38"} Dec 09 12:29:13 crc kubenswrapper[4970]: I1209 12:29:13.384701 4970 generic.go:334] "Generic (PLEG): container finished" podID="6513938a-4c2f-4751-9c72-3fafc20c5dc1" 
containerID="498227165cecc483621aa3ee25964f7529ce7fc7289e49e6fc391b7482eb65ae" exitCode=0 Dec 09 12:29:13 crc kubenswrapper[4970]: I1209 12:29:13.384727 4970 generic.go:334] "Generic (PLEG): container finished" podID="6513938a-4c2f-4751-9c72-3fafc20c5dc1" containerID="a0ef3721d3eadfa00812b693bda48d0fef4c695e7d1f7c875be6012c394ca1a6" exitCode=143 Dec 09 12:29:13 crc kubenswrapper[4970]: I1209 12:29:13.384767 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6513938a-4c2f-4751-9c72-3fafc20c5dc1","Type":"ContainerDied","Data":"498227165cecc483621aa3ee25964f7529ce7fc7289e49e6fc391b7482eb65ae"} Dec 09 12:29:13 crc kubenswrapper[4970]: I1209 12:29:13.384791 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6513938a-4c2f-4751-9c72-3fafc20c5dc1","Type":"ContainerDied","Data":"a0ef3721d3eadfa00812b693bda48d0fef4c695e7d1f7c875be6012c394ca1a6"} Dec 09 12:29:13 crc kubenswrapper[4970]: I1209 12:29:13.399589 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c8c76f86b-vkq6w" event={"ID":"0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11","Type":"ContainerStarted","Data":"18f7504726d774820434d925d8a4b8a4d2643e2123888d531830ffe7856f6bab"} Dec 09 12:29:13 crc kubenswrapper[4970]: I1209 12:29:13.653743 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 09 12:29:13 crc kubenswrapper[4970]: I1209 12:29:13.780933 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cgbr\" (UniqueName: \"kubernetes.io/projected/6513938a-4c2f-4751-9c72-3fafc20c5dc1-kube-api-access-4cgbr\") pod \"6513938a-4c2f-4751-9c72-3fafc20c5dc1\" (UID: \"6513938a-4c2f-4751-9c72-3fafc20c5dc1\") " Dec 09 12:29:13 crc kubenswrapper[4970]: I1209 12:29:13.780993 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6513938a-4c2f-4751-9c72-3fafc20c5dc1-logs\") pod \"6513938a-4c2f-4751-9c72-3fafc20c5dc1\" (UID: \"6513938a-4c2f-4751-9c72-3fafc20c5dc1\") " Dec 09 12:29:13 crc kubenswrapper[4970]: I1209 12:29:13.781279 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6513938a-4c2f-4751-9c72-3fafc20c5dc1-etc-machine-id\") pod \"6513938a-4c2f-4751-9c72-3fafc20c5dc1\" (UID: \"6513938a-4c2f-4751-9c72-3fafc20c5dc1\") " Dec 09 12:29:13 crc kubenswrapper[4970]: I1209 12:29:13.781338 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6513938a-4c2f-4751-9c72-3fafc20c5dc1-config-data-custom\") pod \"6513938a-4c2f-4751-9c72-3fafc20c5dc1\" (UID: \"6513938a-4c2f-4751-9c72-3fafc20c5dc1\") " Dec 09 12:29:13 crc kubenswrapper[4970]: I1209 12:29:13.781462 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6513938a-4c2f-4751-9c72-3fafc20c5dc1-config-data\") pod \"6513938a-4c2f-4751-9c72-3fafc20c5dc1\" (UID: \"6513938a-4c2f-4751-9c72-3fafc20c5dc1\") " Dec 09 12:29:13 crc kubenswrapper[4970]: I1209 12:29:13.781715 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6513938a-4c2f-4751-9c72-3fafc20c5dc1-scripts\") pod \"6513938a-4c2f-4751-9c72-3fafc20c5dc1\" (UID: \"6513938a-4c2f-4751-9c72-3fafc20c5dc1\") " Dec 09 12:29:13 crc 
kubenswrapper[4970]: I1209 12:29:13.781753 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6513938a-4c2f-4751-9c72-3fafc20c5dc1-combined-ca-bundle\") pod \"6513938a-4c2f-4751-9c72-3fafc20c5dc1\" (UID: \"6513938a-4c2f-4751-9c72-3fafc20c5dc1\") " Dec 09 12:29:13 crc kubenswrapper[4970]: I1209 12:29:13.782397 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6513938a-4c2f-4751-9c72-3fafc20c5dc1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6513938a-4c2f-4751-9c72-3fafc20c5dc1" (UID: "6513938a-4c2f-4751-9c72-3fafc20c5dc1"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:29:13 crc kubenswrapper[4970]: I1209 12:29:13.782875 4970 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6513938a-4c2f-4751-9c72-3fafc20c5dc1-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:13 crc kubenswrapper[4970]: I1209 12:29:13.785881 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6513938a-4c2f-4751-9c72-3fafc20c5dc1-logs" (OuterVolumeSpecName: "logs") pod "6513938a-4c2f-4751-9c72-3fafc20c5dc1" (UID: "6513938a-4c2f-4751-9c72-3fafc20c5dc1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:29:13 crc kubenswrapper[4970]: I1209 12:29:13.792400 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6513938a-4c2f-4751-9c72-3fafc20c5dc1-kube-api-access-4cgbr" (OuterVolumeSpecName: "kube-api-access-4cgbr") pod "6513938a-4c2f-4751-9c72-3fafc20c5dc1" (UID: "6513938a-4c2f-4751-9c72-3fafc20c5dc1"). InnerVolumeSpecName "kube-api-access-4cgbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:29:13 crc kubenswrapper[4970]: I1209 12:29:13.792520 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6513938a-4c2f-4751-9c72-3fafc20c5dc1-scripts" (OuterVolumeSpecName: "scripts") pod "6513938a-4c2f-4751-9c72-3fafc20c5dc1" (UID: "6513938a-4c2f-4751-9c72-3fafc20c5dc1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:13 crc kubenswrapper[4970]: I1209 12:29:13.799558 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6513938a-4c2f-4751-9c72-3fafc20c5dc1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6513938a-4c2f-4751-9c72-3fafc20c5dc1" (UID: "6513938a-4c2f-4751-9c72-3fafc20c5dc1"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:13 crc kubenswrapper[4970]: I1209 12:29:13.885399 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6513938a-4c2f-4751-9c72-3fafc20c5dc1-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:13 crc kubenswrapper[4970]: I1209 12:29:13.885645 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cgbr\" (UniqueName: \"kubernetes.io/projected/6513938a-4c2f-4751-9c72-3fafc20c5dc1-kube-api-access-4cgbr\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:13 crc kubenswrapper[4970]: I1209 12:29:13.885656 4970 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6513938a-4c2f-4751-9c72-3fafc20c5dc1-logs\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:13 crc kubenswrapper[4970]: I1209 12:29:13.885664 4970 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6513938a-4c2f-4751-9c72-3fafc20c5dc1-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:13 crc kubenswrapper[4970]: I1209 12:29:13.900934 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6513938a-4c2f-4751-9c72-3fafc20c5dc1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6513938a-4c2f-4751-9c72-3fafc20c5dc1" (UID: "6513938a-4c2f-4751-9c72-3fafc20c5dc1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:13 crc kubenswrapper[4970]: I1209 12:29:13.934310 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6513938a-4c2f-4751-9c72-3fafc20c5dc1-config-data" (OuterVolumeSpecName: "config-data") pod "6513938a-4c2f-4751-9c72-3fafc20c5dc1" (UID: "6513938a-4c2f-4751-9c72-3fafc20c5dc1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:13 crc kubenswrapper[4970]: I1209 12:29:13.988036 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6513938a-4c2f-4751-9c72-3fafc20c5dc1-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:13 crc kubenswrapper[4970]: I1209 12:29:13.988072 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6513938a-4c2f-4751-9c72-3fafc20c5dc1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.410295 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"6513938a-4c2f-4751-9c72-3fafc20c5dc1","Type":"ContainerDied","Data":"d9f14891295bc9e9e5696d0e2d9d6b4079b2f05661f5d7bc19cc3cdf41278575"} Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.410346 4970 scope.go:117] "RemoveContainer" containerID="498227165cecc483621aa3ee25964f7529ce7fc7289e49e6fc391b7482eb65ae" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.410466 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.429674 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c8c76f86b-vkq6w" event={"ID":"0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11","Type":"ContainerStarted","Data":"ee58587df91098a97cea3781763fecdc4f154e130c7f1a975d42c7f28346d6ea"} Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.430000 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c8c76f86b-vkq6w" event={"ID":"0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11","Type":"ContainerStarted","Data":"30ba979a2f8adf9f073f4b64ac1f5958436386bcabc44d61ffea4953a485afb2"} Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.430055 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7c8c76f86b-vkq6w" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.430084 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7c8c76f86b-vkq6w" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.446019 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2a60c14-d830-48c0-ab9e-d66e1ad768b0","Type":"ContainerStarted","Data":"da70dbe51a82df07c19619303502d069f21293c6e8895924a98327bac64b14e9"} Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.446709 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.459294 4970 scope.go:117] "RemoveContainer" containerID="a0ef3721d3eadfa00812b693bda48d0fef4c695e7d1f7c875be6012c394ca1a6" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.466321 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7c8c76f86b-vkq6w" podStartSLOduration=3.466300031 podStartE2EDuration="3.466300031s" podCreationTimestamp="2025-12-09 12:29:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:29:14.448592801 +0000 UTC m=+1367.009073852" watchObservedRunningTime="2025-12-09 12:29:14.466300031 +0000 UTC m=+1367.026781082" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.509388 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.519388 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.530992 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.257750093 podStartE2EDuration="8.530950976s" podCreationTimestamp="2025-12-09 12:29:06 +0000 UTC" firstStartedPulling="2025-12-09 12:29:08.070479711 +0000 UTC m=+1360.630960762" lastFinishedPulling="2025-12-09 12:29:13.343680594 +0000 UTC m=+1365.904161645" observedRunningTime="2025-12-09 12:29:14.491636633 +0000 UTC m=+1367.052117674" watchObservedRunningTime="2025-12-09 12:29:14.530950976 +0000 UTC m=+1367.091432027" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.557300 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 09 12:29:14 crc kubenswrapper[4970]: E1209 12:29:14.557764 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6513938a-4c2f-4751-9c72-3fafc20c5dc1" containerName="cinder-api-log" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 
12:29:14.557777 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="6513938a-4c2f-4751-9c72-3fafc20c5dc1" containerName="cinder-api-log" Dec 09 12:29:14 crc kubenswrapper[4970]: E1209 12:29:14.557802 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6513938a-4c2f-4751-9c72-3fafc20c5dc1" containerName="cinder-api" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.557809 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="6513938a-4c2f-4751-9c72-3fafc20c5dc1" containerName="cinder-api" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.558013 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="6513938a-4c2f-4751-9c72-3fafc20c5dc1" containerName="cinder-api" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.558027 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="6513938a-4c2f-4751-9c72-3fafc20c5dc1" containerName="cinder-api-log" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.559260 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.569465 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.569670 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.569820 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.572730 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.607953 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/670bed16-1df0-4568-9305-886c7ec7a4f5-config-data-custom\") pod \"cinder-api-0\" (UID: \"670bed16-1df0-4568-9305-886c7ec7a4f5\") " pod="openstack/cinder-api-0" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.608013 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/670bed16-1df0-4568-9305-886c7ec7a4f5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"670bed16-1df0-4568-9305-886c7ec7a4f5\") " pod="openstack/cinder-api-0" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.608064 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/670bed16-1df0-4568-9305-886c7ec7a4f5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"670bed16-1df0-4568-9305-886c7ec7a4f5\") " pod="openstack/cinder-api-0" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.608088 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/670bed16-1df0-4568-9305-886c7ec7a4f5-logs\") pod \"cinder-api-0\" (UID: \"670bed16-1df0-4568-9305-886c7ec7a4f5\") " pod="openstack/cinder-api-0" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.608120 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/670bed16-1df0-4568-9305-886c7ec7a4f5-config-data\") pod \"cinder-api-0\" (UID: 
\"670bed16-1df0-4568-9305-886c7ec7a4f5\") " pod="openstack/cinder-api-0" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.608138 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/670bed16-1df0-4568-9305-886c7ec7a4f5-scripts\") pod \"cinder-api-0\" (UID: \"670bed16-1df0-4568-9305-886c7ec7a4f5\") " pod="openstack/cinder-api-0" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.608151 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/670bed16-1df0-4568-9305-886c7ec7a4f5-public-tls-certs\") pod \"cinder-api-0\" (UID: \"670bed16-1df0-4568-9305-886c7ec7a4f5\") " pod="openstack/cinder-api-0" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.608198 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/670bed16-1df0-4568-9305-886c7ec7a4f5-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"670bed16-1df0-4568-9305-886c7ec7a4f5\") " pod="openstack/cinder-api-0" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.608222 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtjxz\" (UniqueName: \"kubernetes.io/projected/670bed16-1df0-4568-9305-886c7ec7a4f5-kube-api-access-jtjxz\") pod \"cinder-api-0\" (UID: \"670bed16-1df0-4568-9305-886c7ec7a4f5\") " pod="openstack/cinder-api-0" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.711856 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/670bed16-1df0-4568-9305-886c7ec7a4f5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"670bed16-1df0-4568-9305-886c7ec7a4f5\") " pod="openstack/cinder-api-0" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.712659 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/670bed16-1df0-4568-9305-886c7ec7a4f5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"670bed16-1df0-4568-9305-886c7ec7a4f5\") " pod="openstack/cinder-api-0" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.712857 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/670bed16-1df0-4568-9305-886c7ec7a4f5-logs\") pod \"cinder-api-0\" (UID: \"670bed16-1df0-4568-9305-886c7ec7a4f5\") " pod="openstack/cinder-api-0" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.713004 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/670bed16-1df0-4568-9305-886c7ec7a4f5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"670bed16-1df0-4568-9305-886c7ec7a4f5\") " pod="openstack/cinder-api-0" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.713438 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/670bed16-1df0-4568-9305-886c7ec7a4f5-config-data\") pod \"cinder-api-0\" (UID: \"670bed16-1df0-4568-9305-886c7ec7a4f5\") " pod="openstack/cinder-api-0" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.713590 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/670bed16-1df0-4568-9305-886c7ec7a4f5-scripts\") pod \"cinder-api-0\" (UID: \"670bed16-1df0-4568-9305-886c7ec7a4f5\") " pod="openstack/cinder-api-0" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.713731 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/670bed16-1df0-4568-9305-886c7ec7a4f5-public-tls-certs\") pod \"cinder-api-0\" (UID: \"670bed16-1df0-4568-9305-886c7ec7a4f5\") " pod="openstack/cinder-api-0" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.714426 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/670bed16-1df0-4568-9305-886c7ec7a4f5-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"670bed16-1df0-4568-9305-886c7ec7a4f5\") " pod="openstack/cinder-api-0" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.714966 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtjxz\" (UniqueName: \"kubernetes.io/projected/670bed16-1df0-4568-9305-886c7ec7a4f5-kube-api-access-jtjxz\") pod \"cinder-api-0\" (UID: \"670bed16-1df0-4568-9305-886c7ec7a4f5\") " pod="openstack/cinder-api-0" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.716006 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/670bed16-1df0-4568-9305-886c7ec7a4f5-config-data-custom\") pod \"cinder-api-0\" (UID: \"670bed16-1df0-4568-9305-886c7ec7a4f5\") " pod="openstack/cinder-api-0" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.719332 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/670bed16-1df0-4568-9305-886c7ec7a4f5-logs\") pod \"cinder-api-0\" (UID: \"670bed16-1df0-4568-9305-886c7ec7a4f5\") " pod="openstack/cinder-api-0" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.754528 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/670bed16-1df0-4568-9305-886c7ec7a4f5-scripts\") pod \"cinder-api-0\" (UID: \"670bed16-1df0-4568-9305-886c7ec7a4f5\") " pod="openstack/cinder-api-0" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.756771 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/670bed16-1df0-4568-9305-886c7ec7a4f5-config-data\") pod \"cinder-api-0\" (UID: \"670bed16-1df0-4568-9305-886c7ec7a4f5\") " pod="openstack/cinder-api-0" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.759845 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtjxz\" (UniqueName: \"kubernetes.io/projected/670bed16-1df0-4568-9305-886c7ec7a4f5-kube-api-access-jtjxz\") pod \"cinder-api-0\" (UID: \"670bed16-1df0-4568-9305-886c7ec7a4f5\") " pod="openstack/cinder-api-0" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.762017 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/670bed16-1df0-4568-9305-886c7ec7a4f5-config-data-custom\") pod \"cinder-api-0\" (UID: \"670bed16-1df0-4568-9305-886c7ec7a4f5\") " pod="openstack/cinder-api-0" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.762883 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/670bed16-1df0-4568-9305-886c7ec7a4f5-public-tls-certs\") pod \"cinder-api-0\" (UID: \"670bed16-1df0-4568-9305-886c7ec7a4f5\") " pod="openstack/cinder-api-0" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.771898 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/670bed16-1df0-4568-9305-886c7ec7a4f5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"670bed16-1df0-4568-9305-886c7ec7a4f5\") " pod="openstack/cinder-api-0" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.773232 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/670bed16-1df0-4568-9305-886c7ec7a4f5-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"670bed16-1df0-4568-9305-886c7ec7a4f5\") " pod="openstack/cinder-api-0" Dec 09 12:29:14 crc kubenswrapper[4970]: I1209 12:29:14.926769 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 09 12:29:15 crc kubenswrapper[4970]: I1209 12:29:15.227556 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-dd4c8c86f-7zktb" Dec 09 12:29:15 crc kubenswrapper[4970]: I1209 12:29:15.368423 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6d7d95f968-k2gh2"] Dec 09 12:29:15 crc kubenswrapper[4970]: I1209 12:29:15.376702 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6d7d95f968-k2gh2" podUID="7d34e443-c923-42f0-83be-3d060424380b" containerName="neutron-api" containerID="cri-o://aba0aaf5bec05a42ffca748467a7096415b9012cde958ccc33c9498e506b800f" gracePeriod=30 Dec 09 12:29:15 crc kubenswrapper[4970]: I1209 12:29:15.380028 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6d7d95f968-k2gh2" podUID="7d34e443-c923-42f0-83be-3d060424380b" containerName="neutron-httpd" containerID="cri-o://4685b9c897e25961f58f4586a2048dd22f900e813863a1a95b7dffbbc9273650" gracePeriod=30 Dec 09 12:29:15 crc kubenswrapper[4970]: I1209 12:29:15.492670 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 09 12:29:15 crc kubenswrapper[4970]: W1209 12:29:15.517930 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod670bed16_1df0_4568_9305_886c7ec7a4f5.slice/crio-3ae9ac5bdfdec0c34b1b740a4a39d31bc04543392f8984fefc6fbd37ec89ac91 WatchSource:0}: Error finding container 3ae9ac5bdfdec0c34b1b740a4a39d31bc04543392f8984fefc6fbd37ec89ac91: Status 404 returned error can't find the container with id 3ae9ac5bdfdec0c34b1b740a4a39d31bc04543392f8984fefc6fbd37ec89ac91 Dec 09 12:29:15 crc kubenswrapper[4970]: I1209 12:29:15.560975 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-78687fd956-j7ckc" Dec 09 12:29:15 crc kubenswrapper[4970]: I1209 12:29:15.564976 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-78687fd956-j7ckc" Dec 09 12:29:15 crc kubenswrapper[4970]: I1209 12:29:15.911953 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6513938a-4c2f-4751-9c72-3fafc20c5dc1" path="/var/lib/kubelet/pods/6513938a-4c2f-4751-9c72-3fafc20c5dc1/volumes" Dec 09 12:29:16 crc kubenswrapper[4970]: I1209 12:29:16.487412 4970 generic.go:334] "Generic (PLEG): container finished" 
podID="7d34e443-c923-42f0-83be-3d060424380b" containerID="4685b9c897e25961f58f4586a2048dd22f900e813863a1a95b7dffbbc9273650" exitCode=0 Dec 09 12:29:16 crc kubenswrapper[4970]: I1209 12:29:16.488287 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d7d95f968-k2gh2" event={"ID":"7d34e443-c923-42f0-83be-3d060424380b","Type":"ContainerDied","Data":"4685b9c897e25961f58f4586a2048dd22f900e813863a1a95b7dffbbc9273650"} Dec 09 12:29:16 crc kubenswrapper[4970]: I1209 12:29:16.494481 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"670bed16-1df0-4568-9305-886c7ec7a4f5","Type":"ContainerStarted","Data":"3ae9ac5bdfdec0c34b1b740a4a39d31bc04543392f8984fefc6fbd37ec89ac91"} Dec 09 12:29:16 crc kubenswrapper[4970]: I1209 12:29:16.549007 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-659c886f58-rf5rp" Dec 09 12:29:16 crc kubenswrapper[4970]: I1209 12:29:16.652918 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 09 12:29:16 crc kubenswrapper[4970]: I1209 12:29:16.941760 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 09 12:29:16 crc kubenswrapper[4970]: I1209 12:29:16.972162 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-7hh4d" Dec 09 12:29:17 crc kubenswrapper[4970]: I1209 12:29:17.048088 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-hh2d8"] Dec 09 12:29:17 crc kubenswrapper[4970]: I1209 12:29:17.048349 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-hh2d8" podUID="e7bb9b20-9449-48b4-b3ba-dc547cf39558" containerName="dnsmasq-dns" containerID="cri-o://00ec22a037c7090aacc78ca1476cb1a5e32349adce131b74a0d6351f8898568a" gracePeriod=10 Dec 09 12:29:17 crc kubenswrapper[4970]: I1209 12:29:17.510590 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"670bed16-1df0-4568-9305-886c7ec7a4f5","Type":"ContainerStarted","Data":"b7345b63abc8d81311ba356541f8b34cb4bd5f54933a6720d4b652b20ef06cab"} Dec 09 12:29:17 crc kubenswrapper[4970]: I1209 12:29:17.511156 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-659c886f58-rf5rp" Dec 09 12:29:17 crc kubenswrapper[4970]: I1209 12:29:17.514154 4970 generic.go:334] "Generic (PLEG): container finished" podID="e7bb9b20-9449-48b4-b3ba-dc547cf39558" containerID="00ec22a037c7090aacc78ca1476cb1a5e32349adce131b74a0d6351f8898568a" exitCode=0 Dec 09 12:29:17 crc kubenswrapper[4970]: I1209 12:29:17.514817 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-hh2d8" event={"ID":"e7bb9b20-9449-48b4-b3ba-dc547cf39558","Type":"ContainerDied","Data":"00ec22a037c7090aacc78ca1476cb1a5e32349adce131b74a0d6351f8898568a"} Dec 09 12:29:17 crc kubenswrapper[4970]: I1209 12:29:17.705766 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 12:29:18 crc kubenswrapper[4970]: I1209 12:29:18.229832 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-hh2d8" Dec 09 12:29:18 crc kubenswrapper[4970]: I1209 12:29:18.355696 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7bb9b20-9449-48b4-b3ba-dc547cf39558-dns-svc\") pod \"e7bb9b20-9449-48b4-b3ba-dc547cf39558\" (UID: \"e7bb9b20-9449-48b4-b3ba-dc547cf39558\") " Dec 09 12:29:18 crc kubenswrapper[4970]: I1209 12:29:18.355789 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbj2h\" (UniqueName: \"kubernetes.io/projected/e7bb9b20-9449-48b4-b3ba-dc547cf39558-kube-api-access-xbj2h\") pod \"e7bb9b20-9449-48b4-b3ba-dc547cf39558\" (UID: \"e7bb9b20-9449-48b4-b3ba-dc547cf39558\") " Dec 09 12:29:18 crc kubenswrapper[4970]: I1209 12:29:18.355866 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7bb9b20-9449-48b4-b3ba-dc547cf39558-config\") pod \"e7bb9b20-9449-48b4-b3ba-dc547cf39558\" (UID: \"e7bb9b20-9449-48b4-b3ba-dc547cf39558\") " Dec 09 12:29:18 crc kubenswrapper[4970]: I1209 12:29:18.356072 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e7bb9b20-9449-48b4-b3ba-dc547cf39558-dns-swift-storage-0\") pod \"e7bb9b20-9449-48b4-b3ba-dc547cf39558\" (UID: \"e7bb9b20-9449-48b4-b3ba-dc547cf39558\") " Dec 09 12:29:18 crc kubenswrapper[4970]: I1209 12:29:18.356133 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7bb9b20-9449-48b4-b3ba-dc547cf39558-ovsdbserver-sb\") pod \"e7bb9b20-9449-48b4-b3ba-dc547cf39558\" (UID: \"e7bb9b20-9449-48b4-b3ba-dc547cf39558\") " Dec 09 12:29:18 crc kubenswrapper[4970]: I1209 12:29:18.356317 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7bb9b20-9449-48b4-b3ba-dc547cf39558-ovsdbserver-nb\") pod \"e7bb9b20-9449-48b4-b3ba-dc547cf39558\" (UID: \"e7bb9b20-9449-48b4-b3ba-dc547cf39558\") " Dec 09 12:29:18 crc kubenswrapper[4970]: I1209 12:29:18.381816 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7bb9b20-9449-48b4-b3ba-dc547cf39558-kube-api-access-xbj2h" (OuterVolumeSpecName: "kube-api-access-xbj2h") pod "e7bb9b20-9449-48b4-b3ba-dc547cf39558" (UID: "e7bb9b20-9449-48b4-b3ba-dc547cf39558"). InnerVolumeSpecName "kube-api-access-xbj2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:29:18 crc kubenswrapper[4970]: I1209 12:29:18.459599 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbj2h\" (UniqueName: \"kubernetes.io/projected/e7bb9b20-9449-48b4-b3ba-dc547cf39558-kube-api-access-xbj2h\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:18 crc kubenswrapper[4970]: I1209 12:29:18.460974 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7bb9b20-9449-48b4-b3ba-dc547cf39558-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e7bb9b20-9449-48b4-b3ba-dc547cf39558" (UID: "e7bb9b20-9449-48b4-b3ba-dc547cf39558"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:29:18 crc kubenswrapper[4970]: I1209 12:29:18.499758 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7bb9b20-9449-48b4-b3ba-dc547cf39558-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e7bb9b20-9449-48b4-b3ba-dc547cf39558" (UID: "e7bb9b20-9449-48b4-b3ba-dc547cf39558"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:29:18 crc kubenswrapper[4970]: I1209 12:29:18.513861 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7bb9b20-9449-48b4-b3ba-dc547cf39558-config" (OuterVolumeSpecName: "config") pod "e7bb9b20-9449-48b4-b3ba-dc547cf39558" (UID: "e7bb9b20-9449-48b4-b3ba-dc547cf39558"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:29:18 crc kubenswrapper[4970]: I1209 12:29:18.541711 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-hh2d8" event={"ID":"e7bb9b20-9449-48b4-b3ba-dc547cf39558","Type":"ContainerDied","Data":"481cd5910ecd9f792951d354066695b858d1c521dbb9525ce057c24440e68a9b"} Dec 09 12:29:18 crc kubenswrapper[4970]: I1209 12:29:18.541783 4970 scope.go:117] "RemoveContainer" containerID="00ec22a037c7090aacc78ca1476cb1a5e32349adce131b74a0d6351f8898568a" Dec 09 12:29:18 crc kubenswrapper[4970]: I1209 12:29:18.541820 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="266fe773-afd8-4703-9180-f325242bc850" containerName="cinder-scheduler" containerID="cri-o://0b694e1a163e658d09bb45ac95e0347048bda76e2b601ec03add003bcd297d6e" gracePeriod=30 Dec 09 12:29:18 crc kubenswrapper[4970]: I1209 12:29:18.541963 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="266fe773-afd8-4703-9180-f325242bc850" containerName="probe" containerID="cri-o://dbb38a47478ec9b2bf950fab5d94532e42e95519680e68965d217382d76aa793" gracePeriod=30 Dec 09 12:29:18 crc kubenswrapper[4970]: I1209 12:29:18.542172 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-hh2d8" Dec 09 12:29:18 crc kubenswrapper[4970]: I1209 12:29:18.543945 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7bb9b20-9449-48b4-b3ba-dc547cf39558-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e7bb9b20-9449-48b4-b3ba-dc547cf39558" (UID: "e7bb9b20-9449-48b4-b3ba-dc547cf39558"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:29:18 crc kubenswrapper[4970]: I1209 12:29:18.566739 4970 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7bb9b20-9449-48b4-b3ba-dc547cf39558-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:18 crc kubenswrapper[4970]: I1209 12:29:18.567019 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7bb9b20-9449-48b4-b3ba-dc547cf39558-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:18 crc kubenswrapper[4970]: I1209 12:29:18.567089 4970 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e7bb9b20-9449-48b4-b3ba-dc547cf39558-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:18 crc kubenswrapper[4970]: I1209 12:29:18.567153 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7bb9b20-9449-48b4-b3ba-dc547cf39558-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:18 crc kubenswrapper[4970]: I1209 12:29:18.597970 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7bb9b20-9449-48b4-b3ba-dc547cf39558-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e7bb9b20-9449-48b4-b3ba-dc547cf39558" (UID: "e7bb9b20-9449-48b4-b3ba-dc547cf39558"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:29:18 crc kubenswrapper[4970]: I1209 12:29:18.612485 4970 scope.go:117] "RemoveContainer" containerID="96968271128e8e14081dc2f1b7d817a9ec99a922f6a0acd7da5de5db02cba135" Dec 09 12:29:18 crc kubenswrapper[4970]: I1209 12:29:18.672706 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7bb9b20-9449-48b4-b3ba-dc547cf39558-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:18 crc kubenswrapper[4970]: I1209 12:29:18.979580 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-hh2d8"] Dec 09 12:29:19 crc kubenswrapper[4970]: I1209 12:29:19.029555 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-hh2d8"] Dec 09 12:29:19 crc kubenswrapper[4970]: I1209 12:29:19.555804 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"670bed16-1df0-4568-9305-886c7ec7a4f5","Type":"ContainerStarted","Data":"8640f71e0ca4a382ed792a99cef419570ea13934299006a996a736dcc7bfdd5e"} Dec 09 12:29:19 crc kubenswrapper[4970]: I1209 12:29:19.557173 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 09 12:29:19 crc kubenswrapper[4970]: I1209 12:29:19.577877 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.577852378 podStartE2EDuration="5.577852378s" podCreationTimestamp="2025-12-09 12:29:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:29:19.572177917 +0000 UTC m=+1372.132658978" watchObservedRunningTime="2025-12-09 12:29:19.577852378 +0000 UTC m=+1372.138333429" Dec 09 12:29:19 crc kubenswrapper[4970]: I1209 12:29:19.831670 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7bb9b20-9449-48b4-b3ba-dc547cf39558" 
path="/var/lib/kubelet/pods/e7bb9b20-9449-48b4-b3ba-dc547cf39558/volumes" Dec 09 12:29:20 crc kubenswrapper[4970]: I1209 12:29:20.570374 4970 generic.go:334] "Generic (PLEG): container finished" podID="7d34e443-c923-42f0-83be-3d060424380b" containerID="aba0aaf5bec05a42ffca748467a7096415b9012cde958ccc33c9498e506b800f" exitCode=0 Dec 09 12:29:20 crc kubenswrapper[4970]: I1209 12:29:20.570463 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d7d95f968-k2gh2" event={"ID":"7d34e443-c923-42f0-83be-3d060424380b","Type":"ContainerDied","Data":"aba0aaf5bec05a42ffca748467a7096415b9012cde958ccc33c9498e506b800f"} Dec 09 12:29:20 crc kubenswrapper[4970]: I1209 12:29:20.575458 4970 generic.go:334] "Generic (PLEG): container finished" podID="266fe773-afd8-4703-9180-f325242bc850" containerID="dbb38a47478ec9b2bf950fab5d94532e42e95519680e68965d217382d76aa793" exitCode=0 Dec 09 12:29:20 crc kubenswrapper[4970]: I1209 12:29:20.576703 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"266fe773-afd8-4703-9180-f325242bc850","Type":"ContainerDied","Data":"dbb38a47478ec9b2bf950fab5d94532e42e95519680e68965d217382d76aa793"} Dec 09 12:29:21 crc kubenswrapper[4970]: I1209 12:29:21.041591 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6d7d95f968-k2gh2" Dec 09 12:29:21 crc kubenswrapper[4970]: I1209 12:29:21.074346 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4v5q\" (UniqueName: \"kubernetes.io/projected/7d34e443-c923-42f0-83be-3d060424380b-kube-api-access-r4v5q\") pod \"7d34e443-c923-42f0-83be-3d060424380b\" (UID: \"7d34e443-c923-42f0-83be-3d060424380b\") " Dec 09 12:29:21 crc kubenswrapper[4970]: I1209 12:29:21.074441 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d34e443-c923-42f0-83be-3d060424380b-combined-ca-bundle\") pod \"7d34e443-c923-42f0-83be-3d060424380b\" (UID: \"7d34e443-c923-42f0-83be-3d060424380b\") " Dec 09 12:29:21 crc kubenswrapper[4970]: I1209 12:29:21.074537 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7d34e443-c923-42f0-83be-3d060424380b-httpd-config\") pod \"7d34e443-c923-42f0-83be-3d060424380b\" (UID: \"7d34e443-c923-42f0-83be-3d060424380b\") " Dec 09 12:29:21 crc kubenswrapper[4970]: I1209 12:29:21.074801 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d34e443-c923-42f0-83be-3d060424380b-config\") pod \"7d34e443-c923-42f0-83be-3d060424380b\" (UID: \"7d34e443-c923-42f0-83be-3d060424380b\") " Dec 09 12:29:21 crc kubenswrapper[4970]: I1209 12:29:21.074833 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d34e443-c923-42f0-83be-3d060424380b-ovndb-tls-certs\") pod \"7d34e443-c923-42f0-83be-3d060424380b\" (UID: \"7d34e443-c923-42f0-83be-3d060424380b\") " Dec 09 12:29:21 crc kubenswrapper[4970]: I1209 12:29:21.110746 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d34e443-c923-42f0-83be-3d060424380b-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "7d34e443-c923-42f0-83be-3d060424380b" (UID: "7d34e443-c923-42f0-83be-3d060424380b"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:21 crc kubenswrapper[4970]: I1209 12:29:21.121626 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d34e443-c923-42f0-83be-3d060424380b-kube-api-access-r4v5q" (OuterVolumeSpecName: "kube-api-access-r4v5q") pod "7d34e443-c923-42f0-83be-3d060424380b" (UID: "7d34e443-c923-42f0-83be-3d060424380b"). InnerVolumeSpecName "kube-api-access-r4v5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:29:21 crc kubenswrapper[4970]: I1209 12:29:21.148347 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d34e443-c923-42f0-83be-3d060424380b-config" (OuterVolumeSpecName: "config") pod "7d34e443-c923-42f0-83be-3d060424380b" (UID: "7d34e443-c923-42f0-83be-3d060424380b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:21 crc kubenswrapper[4970]: I1209 12:29:21.163515 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d34e443-c923-42f0-83be-3d060424380b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d34e443-c923-42f0-83be-3d060424380b" (UID: "7d34e443-c923-42f0-83be-3d060424380b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:21 crc kubenswrapper[4970]: I1209 12:29:21.176154 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d34e443-c923-42f0-83be-3d060424380b-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:21 crc kubenswrapper[4970]: I1209 12:29:21.176192 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4v5q\" (UniqueName: \"kubernetes.io/projected/7d34e443-c923-42f0-83be-3d060424380b-kube-api-access-r4v5q\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:21 crc kubenswrapper[4970]: I1209 12:29:21.176205 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d34e443-c923-42f0-83be-3d060424380b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:21 crc kubenswrapper[4970]: I1209 12:29:21.176213 4970 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7d34e443-c923-42f0-83be-3d060424380b-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:21 crc kubenswrapper[4970]: I1209 12:29:21.214578 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d34e443-c923-42f0-83be-3d060424380b-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "7d34e443-c923-42f0-83be-3d060424380b" (UID: "7d34e443-c923-42f0-83be-3d060424380b"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:21 crc kubenswrapper[4970]: I1209 12:29:21.278667 4970 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d34e443-c923-42f0-83be-3d060424380b-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:21 crc kubenswrapper[4970]: I1209 12:29:21.587510 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6d7d95f968-k2gh2" event={"ID":"7d34e443-c923-42f0-83be-3d060424380b","Type":"ContainerDied","Data":"3d059000e0f7ef3d148cf33997353fd5710a9f1591a14b4dd6429c0ea0511ef4"} Dec 09 12:29:21 crc kubenswrapper[4970]: I1209 12:29:21.587561 4970 scope.go:117] "RemoveContainer" containerID="4685b9c897e25961f58f4586a2048dd22f900e813863a1a95b7dffbbc9273650" Dec 09 12:29:21 crc kubenswrapper[4970]: I1209 12:29:21.587670 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6d7d95f968-k2gh2" Dec 09 12:29:21 crc kubenswrapper[4970]: I1209 12:29:21.633880 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6d7d95f968-k2gh2"] Dec 09 12:29:21 crc kubenswrapper[4970]: I1209 12:29:21.646815 4970 scope.go:117] "RemoveContainer" containerID="aba0aaf5bec05a42ffca748467a7096415b9012cde958ccc33c9498e506b800f" Dec 09 12:29:21 crc kubenswrapper[4970]: I1209 12:29:21.657030 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6d7d95f968-k2gh2"] Dec 09 12:29:21 crc kubenswrapper[4970]: I1209 12:29:21.827864 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d34e443-c923-42f0-83be-3d060424380b" path="/var/lib/kubelet/pods/7d34e443-c923-42f0-83be-3d060424380b/volumes" Dec 09 12:29:22 crc kubenswrapper[4970]: I1209 12:29:22.615504 4970 generic.go:334] "Generic (PLEG): container finished" podID="266fe773-afd8-4703-9180-f325242bc850" containerID="0b694e1a163e658d09bb45ac95e0347048bda76e2b601ec03add003bcd297d6e" exitCode=0 Dec 09 12:29:22 crc kubenswrapper[4970]: I1209 12:29:22.615547 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"266fe773-afd8-4703-9180-f325242bc850","Type":"ContainerDied","Data":"0b694e1a163e658d09bb45ac95e0347048bda76e2b601ec03add003bcd297d6e"} Dec 09 12:29:22 crc kubenswrapper[4970]: I1209 12:29:22.917552 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55f844cf75-hh2d8" podUID="e7bb9b20-9449-48b4-b3ba-dc547cf39558" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.186:5353: i/o timeout" Dec 09 12:29:23 crc kubenswrapper[4970]: I1209 12:29:23.386435 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 09 12:29:23 crc kubenswrapper[4970]: I1209 12:29:23.557349 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/266fe773-afd8-4703-9180-f325242bc850-scripts\") pod \"266fe773-afd8-4703-9180-f325242bc850\" (UID: \"266fe773-afd8-4703-9180-f325242bc850\") " Dec 09 12:29:23 crc kubenswrapper[4970]: I1209 12:29:23.557448 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/266fe773-afd8-4703-9180-f325242bc850-etc-machine-id\") pod \"266fe773-afd8-4703-9180-f325242bc850\" (UID: \"266fe773-afd8-4703-9180-f325242bc850\") " Dec 09 12:29:23 crc kubenswrapper[4970]: I1209 12:29:23.557604 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4f7h\" (UniqueName: \"kubernetes.io/projected/266fe773-afd8-4703-9180-f325242bc850-kube-api-access-q4f7h\") pod \"266fe773-afd8-4703-9180-f325242bc850\" (UID: \"266fe773-afd8-4703-9180-f325242bc850\") " Dec 09 12:29:23 crc kubenswrapper[4970]: I1209 12:29:23.557634 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/266fe773-afd8-4703-9180-f325242bc850-combined-ca-bundle\") pod \"266fe773-afd8-4703-9180-f325242bc850\" (UID: \"266fe773-afd8-4703-9180-f325242bc850\") " Dec 09 12:29:23 crc kubenswrapper[4970]: I1209 12:29:23.557736 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/266fe773-afd8-4703-9180-f325242bc850-config-data-custom\") pod \"266fe773-afd8-4703-9180-f325242bc850\" (UID: \"266fe773-afd8-4703-9180-f325242bc850\") " Dec 09 12:29:23 crc kubenswrapper[4970]: I1209 12:29:23.557777 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/266fe773-afd8-4703-9180-f325242bc850-config-data\") pod \"266fe773-afd8-4703-9180-f325242bc850\" (UID: \"266fe773-afd8-4703-9180-f325242bc850\") " Dec 09 12:29:23 crc kubenswrapper[4970]: I1209 12:29:23.558953 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/266fe773-afd8-4703-9180-f325242bc850-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "266fe773-afd8-4703-9180-f325242bc850" (UID: "266fe773-afd8-4703-9180-f325242bc850"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 12:29:23 crc kubenswrapper[4970]: I1209 12:29:23.565455 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/266fe773-afd8-4703-9180-f325242bc850-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "266fe773-afd8-4703-9180-f325242bc850" (UID: "266fe773-afd8-4703-9180-f325242bc850"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:23 crc kubenswrapper[4970]: I1209 12:29:23.565505 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/266fe773-afd8-4703-9180-f325242bc850-kube-api-access-q4f7h" (OuterVolumeSpecName: "kube-api-access-q4f7h") pod "266fe773-afd8-4703-9180-f325242bc850" (UID: "266fe773-afd8-4703-9180-f325242bc850"). InnerVolumeSpecName "kube-api-access-q4f7h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:29:23 crc kubenswrapper[4970]: I1209 12:29:23.582870 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/266fe773-afd8-4703-9180-f325242bc850-scripts" (OuterVolumeSpecName: "scripts") pod "266fe773-afd8-4703-9180-f325242bc850" (UID: "266fe773-afd8-4703-9180-f325242bc850"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:23 crc kubenswrapper[4970]: I1209 12:29:23.632301 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"266fe773-afd8-4703-9180-f325242bc850","Type":"ContainerDied","Data":"162663de1225f0823c76c454733783754fc8a463dde57f3a8dc3ab045edc887f"} Dec 09 12:29:23 crc kubenswrapper[4970]: I1209 12:29:23.632372 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 09 12:29:23 crc kubenswrapper[4970]: I1209 12:29:23.632425 4970 scope.go:117] "RemoveContainer" containerID="dbb38a47478ec9b2bf950fab5d94532e42e95519680e68965d217382d76aa793" Dec 09 12:29:23 crc kubenswrapper[4970]: I1209 12:29:23.659828 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/266fe773-afd8-4703-9180-f325242bc850-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "266fe773-afd8-4703-9180-f325242bc850" (UID: "266fe773-afd8-4703-9180-f325242bc850"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:23 crc kubenswrapper[4970]: I1209 12:29:23.660237 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/266fe773-afd8-4703-9180-f325242bc850-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:23 crc kubenswrapper[4970]: I1209 12:29:23.660273 4970 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/266fe773-afd8-4703-9180-f325242bc850-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:23 crc kubenswrapper[4970]: I1209 12:29:23.660290 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4f7h\" (UniqueName: \"kubernetes.io/projected/266fe773-afd8-4703-9180-f325242bc850-kube-api-access-q4f7h\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:23 crc kubenswrapper[4970]: I1209 12:29:23.660302 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/266fe773-afd8-4703-9180-f325242bc850-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:23 crc kubenswrapper[4970]: I1209 12:29:23.660312 4970 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/266fe773-afd8-4703-9180-f325242bc850-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:23 crc kubenswrapper[4970]: I1209 12:29:23.698743 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/266fe773-afd8-4703-9180-f325242bc850-config-data" (OuterVolumeSpecName: "config-data") pod "266fe773-afd8-4703-9180-f325242bc850" (UID: "266fe773-afd8-4703-9180-f325242bc850"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:23 crc kubenswrapper[4970]: I1209 12:29:23.776845 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/266fe773-afd8-4703-9180-f325242bc850-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:23 crc kubenswrapper[4970]: I1209 12:29:23.777351 4970 scope.go:117] "RemoveContainer" containerID="0b694e1a163e658d09bb45ac95e0347048bda76e2b601ec03add003bcd297d6e" Dec 09 12:29:23 crc kubenswrapper[4970]: I1209 12:29:23.922305 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-565ff75dc9-922w2" Dec 09 12:29:23 crc kubenswrapper[4970]: I1209 12:29:23.986586 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 12:29:24 crc kubenswrapper[4970]: I1209 12:29:24.019925 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 12:29:24 crc kubenswrapper[4970]: I1209 12:29:24.045332 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 12:29:24 crc kubenswrapper[4970]: E1209 12:29:24.045972 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7bb9b20-9449-48b4-b3ba-dc547cf39558" containerName="dnsmasq-dns" Dec 09 12:29:24 crc kubenswrapper[4970]: I1209 12:29:24.045999 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7bb9b20-9449-48b4-b3ba-dc547cf39558" containerName="dnsmasq-dns" Dec 09 12:29:24 crc kubenswrapper[4970]: E1209 12:29:24.046013 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d34e443-c923-42f0-83be-3d060424380b" containerName="neutron-httpd" Dec 09 12:29:24 crc kubenswrapper[4970]: I1209 12:29:24.046021 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d34e443-c923-42f0-83be-3d060424380b" containerName="neutron-httpd" Dec 09 12:29:24 crc kubenswrapper[4970]: E1209 12:29:24.046065 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="266fe773-afd8-4703-9180-f325242bc850" containerName="probe" Dec 09 12:29:24 crc kubenswrapper[4970]: I1209 12:29:24.046079 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="266fe773-afd8-4703-9180-f325242bc850" containerName="probe" Dec 09 12:29:24 crc kubenswrapper[4970]: E1209 12:29:24.046100 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7bb9b20-9449-48b4-b3ba-dc547cf39558" containerName="init" Dec 09 12:29:24 crc kubenswrapper[4970]: I1209 12:29:24.046109 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7bb9b20-9449-48b4-b3ba-dc547cf39558" containerName="init" Dec 09 12:29:24 crc kubenswrapper[4970]: E1209 12:29:24.046120 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="266fe773-afd8-4703-9180-f325242bc850" containerName="cinder-scheduler" Dec 09 12:29:24 crc kubenswrapper[4970]: I1209 12:29:24.046129 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="266fe773-afd8-4703-9180-f325242bc850" containerName="cinder-scheduler" Dec 09 12:29:24 crc kubenswrapper[4970]: E1209 12:29:24.046160 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d34e443-c923-42f0-83be-3d060424380b" containerName="neutron-api" Dec 09 12:29:24 crc kubenswrapper[4970]: I1209 12:29:24.046168 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d34e443-c923-42f0-83be-3d060424380b" containerName="neutron-api" Dec 09 12:29:24 crc kubenswrapper[4970]: I1209 12:29:24.046440 4970 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="266fe773-afd8-4703-9180-f325242bc850" containerName="probe" Dec 09 12:29:24 crc kubenswrapper[4970]: I1209 12:29:24.046460 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="266fe773-afd8-4703-9180-f325242bc850" containerName="cinder-scheduler" Dec 09 12:29:24 crc kubenswrapper[4970]: I1209 12:29:24.046488 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7bb9b20-9449-48b4-b3ba-dc547cf39558" containerName="dnsmasq-dns" Dec 09 12:29:24 crc kubenswrapper[4970]: I1209 12:29:24.046503 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d34e443-c923-42f0-83be-3d060424380b" containerName="neutron-httpd" Dec 09 12:29:24 crc kubenswrapper[4970]: I1209 12:29:24.046524 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d34e443-c923-42f0-83be-3d060424380b" containerName="neutron-api" Dec 09 12:29:24 crc kubenswrapper[4970]: I1209 12:29:24.048069 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 09 12:29:24 crc kubenswrapper[4970]: I1209 12:29:24.051805 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 09 12:29:24 crc kubenswrapper[4970]: I1209 12:29:24.078913 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 12:29:24 crc kubenswrapper[4970]: I1209 12:29:24.096422 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjqvv\" (UniqueName: \"kubernetes.io/projected/d89950ef-c3f6-46ae-aa45-65baf0c0fe66-kube-api-access-zjqvv\") pod \"cinder-scheduler-0\" (UID: \"d89950ef-c3f6-46ae-aa45-65baf0c0fe66\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:24 crc kubenswrapper[4970]: I1209 12:29:24.096801 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d89950ef-c3f6-46ae-aa45-65baf0c0fe66-config-data\") pod \"cinder-scheduler-0\" (UID: \"d89950ef-c3f6-46ae-aa45-65baf0c0fe66\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:24 crc kubenswrapper[4970]: I1209 12:29:24.096944 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d89950ef-c3f6-46ae-aa45-65baf0c0fe66-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d89950ef-c3f6-46ae-aa45-65baf0c0fe66\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:24 crc kubenswrapper[4970]: I1209 12:29:24.097060 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d89950ef-c3f6-46ae-aa45-65baf0c0fe66-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d89950ef-c3f6-46ae-aa45-65baf0c0fe66\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:24 crc kubenswrapper[4970]: I1209 12:29:24.097194 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d89950ef-c3f6-46ae-aa45-65baf0c0fe66-scripts\") pod \"cinder-scheduler-0\" (UID: \"d89950ef-c3f6-46ae-aa45-65baf0c0fe66\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:24 crc kubenswrapper[4970]: I1209 12:29:24.097292 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/d89950ef-c3f6-46ae-aa45-65baf0c0fe66-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d89950ef-c3f6-46ae-aa45-65baf0c0fe66\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:24 crc kubenswrapper[4970]: I1209 12:29:24.200327 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjqvv\" (UniqueName: \"kubernetes.io/projected/d89950ef-c3f6-46ae-aa45-65baf0c0fe66-kube-api-access-zjqvv\") pod \"cinder-scheduler-0\" (UID: \"d89950ef-c3f6-46ae-aa45-65baf0c0fe66\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:24 crc kubenswrapper[4970]: I1209 12:29:24.200806 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d89950ef-c3f6-46ae-aa45-65baf0c0fe66-config-data\") pod \"cinder-scheduler-0\" (UID: \"d89950ef-c3f6-46ae-aa45-65baf0c0fe66\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:24 crc kubenswrapper[4970]: I1209 12:29:24.200925 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d89950ef-c3f6-46ae-aa45-65baf0c0fe66-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d89950ef-c3f6-46ae-aa45-65baf0c0fe66\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:24 crc kubenswrapper[4970]: I1209 12:29:24.201026 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d89950ef-c3f6-46ae-aa45-65baf0c0fe66-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d89950ef-c3f6-46ae-aa45-65baf0c0fe66\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:24 crc kubenswrapper[4970]: I1209 12:29:24.201157 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d89950ef-c3f6-46ae-aa45-65baf0c0fe66-scripts\") pod \"cinder-scheduler-0\" (UID: \"d89950ef-c3f6-46ae-aa45-65baf0c0fe66\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:24 crc kubenswrapper[4970]: I1209 12:29:24.201184 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d89950ef-c3f6-46ae-aa45-65baf0c0fe66-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d89950ef-c3f6-46ae-aa45-65baf0c0fe66\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:24 crc kubenswrapper[4970]: I1209 12:29:24.201548 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d89950ef-c3f6-46ae-aa45-65baf0c0fe66-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"d89950ef-c3f6-46ae-aa45-65baf0c0fe66\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:24 crc kubenswrapper[4970]: I1209 12:29:24.207799 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d89950ef-c3f6-46ae-aa45-65baf0c0fe66-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"d89950ef-c3f6-46ae-aa45-65baf0c0fe66\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:24 crc kubenswrapper[4970]: I1209 12:29:24.207819 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d89950ef-c3f6-46ae-aa45-65baf0c0fe66-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"d89950ef-c3f6-46ae-aa45-65baf0c0fe66\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:24 crc kubenswrapper[4970]: I1209 
12:29:24.209001 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d89950ef-c3f6-46ae-aa45-65baf0c0fe66-config-data\") pod \"cinder-scheduler-0\" (UID: \"d89950ef-c3f6-46ae-aa45-65baf0c0fe66\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:24 crc kubenswrapper[4970]: I1209 12:29:24.209469 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d89950ef-c3f6-46ae-aa45-65baf0c0fe66-scripts\") pod \"cinder-scheduler-0\" (UID: \"d89950ef-c3f6-46ae-aa45-65baf0c0fe66\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:24 crc kubenswrapper[4970]: I1209 12:29:24.218733 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjqvv\" (UniqueName: \"kubernetes.io/projected/d89950ef-c3f6-46ae-aa45-65baf0c0fe66-kube-api-access-zjqvv\") pod \"cinder-scheduler-0\" (UID: \"d89950ef-c3f6-46ae-aa45-65baf0c0fe66\") " pod="openstack/cinder-scheduler-0" Dec 09 12:29:24 crc kubenswrapper[4970]: I1209 12:29:24.381139 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 09 12:29:24 crc kubenswrapper[4970]: I1209 12:29:24.771233 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7c8c76f86b-vkq6w" Dec 09 12:29:24 crc kubenswrapper[4970]: I1209 12:29:24.875903 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7c8c76f86b-vkq6w" Dec 09 12:29:24 crc kubenswrapper[4970]: I1209 12:29:24.980993 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 12:29:25 crc kubenswrapper[4970]: I1209 12:29:25.028334 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-659c886f58-rf5rp"] Dec 09 12:29:25 crc kubenswrapper[4970]: I1209 12:29:25.028592 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-659c886f58-rf5rp" podUID="f85130f1-2801-40b1-b1d2-3cfe0b6aee57" containerName="barbican-api-log" containerID="cri-o://9d76518a82e06dc234546f740a382a793257f3a107e2a2e937e90f24de4fe6a9" gracePeriod=30 Dec 09 12:29:25 crc kubenswrapper[4970]: I1209 12:29:25.029446 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-659c886f58-rf5rp" podUID="f85130f1-2801-40b1-b1d2-3cfe0b6aee57" containerName="barbican-api" containerID="cri-o://2574a2f04562eee00b2f34a5279868eea923424d941c81e70f8bf3311273d595" gracePeriod=30 Dec 09 12:29:25 crc kubenswrapper[4970]: I1209 12:29:25.677396 4970 generic.go:334] "Generic (PLEG): container finished" podID="f85130f1-2801-40b1-b1d2-3cfe0b6aee57" containerID="9d76518a82e06dc234546f740a382a793257f3a107e2a2e937e90f24de4fe6a9" exitCode=143 Dec 09 12:29:25 crc kubenswrapper[4970]: I1209 12:29:25.677471 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-659c886f58-rf5rp" event={"ID":"f85130f1-2801-40b1-b1d2-3cfe0b6aee57","Type":"ContainerDied","Data":"9d76518a82e06dc234546f740a382a793257f3a107e2a2e937e90f24de4fe6a9"} Dec 09 12:29:25 crc kubenswrapper[4970]: I1209 12:29:25.684847 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d89950ef-c3f6-46ae-aa45-65baf0c0fe66","Type":"ContainerStarted","Data":"7363f971d79274c5e0d21827bb1d5ab2d9b2718c3bc9df3779dbf3c0534bd360"} Dec 09 12:29:25 crc kubenswrapper[4970]: I1209 12:29:25.825676 4970 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="266fe773-afd8-4703-9180-f325242bc850" path="/var/lib/kubelet/pods/266fe773-afd8-4703-9180-f325242bc850/volumes" Dec 09 12:29:26 crc kubenswrapper[4970]: I1209 12:29:26.696627 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d89950ef-c3f6-46ae-aa45-65baf0c0fe66","Type":"ContainerStarted","Data":"16748be4b54f19f6c269e8019a12a18a1e157acf09b823b3fffac4b74fc21903"} Dec 09 12:29:27 crc kubenswrapper[4970]: I1209 12:29:27.339787 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 09 12:29:27 crc kubenswrapper[4970]: I1209 12:29:27.341506 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 09 12:29:27 crc kubenswrapper[4970]: I1209 12:29:27.344600 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-btzdr" Dec 09 12:29:27 crc kubenswrapper[4970]: I1209 12:29:27.344778 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 09 12:29:27 crc kubenswrapper[4970]: I1209 12:29:27.344959 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 09 12:29:27 crc kubenswrapper[4970]: I1209 12:29:27.360086 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 09 12:29:27 crc kubenswrapper[4970]: I1209 12:29:27.487901 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/26afaabd-9309-47db-a9fd-282425d0c44e-openstack-config\") pod \"openstackclient\" (UID: \"26afaabd-9309-47db-a9fd-282425d0c44e\") " pod="openstack/openstackclient" Dec 09 12:29:27 crc kubenswrapper[4970]: I1209 12:29:27.487996 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/26afaabd-9309-47db-a9fd-282425d0c44e-openstack-config-secret\") pod \"openstackclient\" (UID: \"26afaabd-9309-47db-a9fd-282425d0c44e\") " pod="openstack/openstackclient" Dec 09 12:29:27 crc kubenswrapper[4970]: I1209 12:29:27.488048 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26afaabd-9309-47db-a9fd-282425d0c44e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"26afaabd-9309-47db-a9fd-282425d0c44e\") " pod="openstack/openstackclient" Dec 09 12:29:27 crc kubenswrapper[4970]: I1209 12:29:27.488613 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pv84\" (UniqueName: \"kubernetes.io/projected/26afaabd-9309-47db-a9fd-282425d0c44e-kube-api-access-4pv84\") pod \"openstackclient\" (UID: \"26afaabd-9309-47db-a9fd-282425d0c44e\") " pod="openstack/openstackclient" Dec 09 12:29:27 crc kubenswrapper[4970]: I1209 12:29:27.590483 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/26afaabd-9309-47db-a9fd-282425d0c44e-openstack-config\") pod \"openstackclient\" (UID: \"26afaabd-9309-47db-a9fd-282425d0c44e\") " pod="openstack/openstackclient" Dec 09 12:29:27 crc kubenswrapper[4970]: I1209 12:29:27.590549 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/26afaabd-9309-47db-a9fd-282425d0c44e-openstack-config-secret\") pod \"openstackclient\" (UID: \"26afaabd-9309-47db-a9fd-282425d0c44e\") " pod="openstack/openstackclient" Dec 09 12:29:27 crc kubenswrapper[4970]: I1209 12:29:27.590583 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26afaabd-9309-47db-a9fd-282425d0c44e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"26afaabd-9309-47db-a9fd-282425d0c44e\") " pod="openstack/openstackclient" Dec 09 12:29:27 crc kubenswrapper[4970]: I1209 12:29:27.590685 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pv84\" (UniqueName: \"kubernetes.io/projected/26afaabd-9309-47db-a9fd-282425d0c44e-kube-api-access-4pv84\") pod \"openstackclient\" (UID: \"26afaabd-9309-47db-a9fd-282425d0c44e\") " pod="openstack/openstackclient" Dec 09 12:29:27 crc kubenswrapper[4970]: I1209 12:29:27.591668 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/26afaabd-9309-47db-a9fd-282425d0c44e-openstack-config\") pod \"openstackclient\" (UID: \"26afaabd-9309-47db-a9fd-282425d0c44e\") " pod="openstack/openstackclient" Dec 09 12:29:27 crc kubenswrapper[4970]: I1209 12:29:27.596401 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/26afaabd-9309-47db-a9fd-282425d0c44e-openstack-config-secret\") pod \"openstackclient\" (UID: \"26afaabd-9309-47db-a9fd-282425d0c44e\") " pod="openstack/openstackclient" Dec 09 12:29:27 crc kubenswrapper[4970]: I1209 12:29:27.616621 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26afaabd-9309-47db-a9fd-282425d0c44e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"26afaabd-9309-47db-a9fd-282425d0c44e\") " pod="openstack/openstackclient" Dec 09 12:29:27 crc kubenswrapper[4970]: I1209 12:29:27.627134 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pv84\" (UniqueName: \"kubernetes.io/projected/26afaabd-9309-47db-a9fd-282425d0c44e-kube-api-access-4pv84\") pod \"openstackclient\" (UID: \"26afaabd-9309-47db-a9fd-282425d0c44e\") " pod="openstack/openstackclient" Dec 09 12:29:27 crc kubenswrapper[4970]: I1209 12:29:27.717264 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 09 12:29:28 crc kubenswrapper[4970]: I1209 12:29:28.495780 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 09 12:29:28 crc kubenswrapper[4970]: W1209 12:29:28.496823 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26afaabd_9309_47db_a9fd_282425d0c44e.slice/crio-02cf086c2518e46ca283f1f17e611243a5894bff81aabdbf796ad01e2dcf7094 WatchSource:0}: Error finding container 02cf086c2518e46ca283f1f17e611243a5894bff81aabdbf796ad01e2dcf7094: Status 404 returned error can't find the container with id 02cf086c2518e46ca283f1f17e611243a5894bff81aabdbf796ad01e2dcf7094 Dec 09 12:29:28 crc kubenswrapper[4970]: I1209 12:29:28.718917 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"26afaabd-9309-47db-a9fd-282425d0c44e","Type":"ContainerStarted","Data":"02cf086c2518e46ca283f1f17e611243a5894bff81aabdbf796ad01e2dcf7094"} Dec 09 12:29:28 crc kubenswrapper[4970]: I1209 12:29:28.722032 4970 generic.go:334] "Generic (PLEG): container finished" podID="f85130f1-2801-40b1-b1d2-3cfe0b6aee57" containerID="2574a2f04562eee00b2f34a5279868eea923424d941c81e70f8bf3311273d595" exitCode=0 Dec 09 12:29:28 crc kubenswrapper[4970]: I1209 12:29:28.722070 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-659c886f58-rf5rp" event={"ID":"f85130f1-2801-40b1-b1d2-3cfe0b6aee57","Type":"ContainerDied","Data":"2574a2f04562eee00b2f34a5279868eea923424d941c81e70f8bf3311273d595"} Dec 09 12:29:28 crc kubenswrapper[4970]: I1209 12:29:28.725259 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"d89950ef-c3f6-46ae-aa45-65baf0c0fe66","Type":"ContainerStarted","Data":"c893674845c61c42ae46e7297e3b299f244b2b00b32ee1117673d9d0d98883ad"} Dec 09 12:29:28 crc kubenswrapper[4970]: I1209 12:29:28.752334 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.7523145289999995 podStartE2EDuration="5.752314529s" podCreationTimestamp="2025-12-09 12:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:29:28.741314608 +0000 UTC m=+1381.301795659" watchObservedRunningTime="2025-12-09 12:29:28.752314529 +0000 UTC m=+1381.312795580" Dec 09 12:29:29 crc kubenswrapper[4970]: I1209 12:29:29.382627 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 09 12:29:29 crc kubenswrapper[4970]: I1209 12:29:29.878028 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-659c886f58-rf5rp" Dec 09 12:29:29 crc kubenswrapper[4970]: I1209 12:29:29.942520 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="670bed16-1df0-4568-9305-886c7ec7a4f5" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.201:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 09 12:29:29 crc kubenswrapper[4970]: I1209 12:29:29.943964 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f85130f1-2801-40b1-b1d2-3cfe0b6aee57-config-data-custom\") pod \"f85130f1-2801-40b1-b1d2-3cfe0b6aee57\" (UID: \"f85130f1-2801-40b1-b1d2-3cfe0b6aee57\") " Dec 09 12:29:29 crc kubenswrapper[4970]: I1209 12:29:29.944022 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjgsj\" (UniqueName: \"kubernetes.io/projected/f85130f1-2801-40b1-b1d2-3cfe0b6aee57-kube-api-access-vjgsj\") pod \"f85130f1-2801-40b1-b1d2-3cfe0b6aee57\" (UID: \"f85130f1-2801-40b1-b1d2-3cfe0b6aee57\") " Dec 09 12:29:29 crc kubenswrapper[4970]: I1209 12:29:29.944185 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85130f1-2801-40b1-b1d2-3cfe0b6aee57-combined-ca-bundle\") pod \"f85130f1-2801-40b1-b1d2-3cfe0b6aee57\" (UID: \"f85130f1-2801-40b1-b1d2-3cfe0b6aee57\") " Dec 09 12:29:29 crc kubenswrapper[4970]: I1209 12:29:29.944244 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f85130f1-2801-40b1-b1d2-3cfe0b6aee57-logs\") pod \"f85130f1-2801-40b1-b1d2-3cfe0b6aee57\" (UID: \"f85130f1-2801-40b1-b1d2-3cfe0b6aee57\") " Dec 09 12:29:29 crc kubenswrapper[4970]: I1209 12:29:29.944488 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85130f1-2801-40b1-b1d2-3cfe0b6aee57-config-data\") pod \"f85130f1-2801-40b1-b1d2-3cfe0b6aee57\" (UID: \"f85130f1-2801-40b1-b1d2-3cfe0b6aee57\") " Dec 09 12:29:29 crc kubenswrapper[4970]: I1209 12:29:29.945898 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f85130f1-2801-40b1-b1d2-3cfe0b6aee57-logs" (OuterVolumeSpecName: "logs") pod "f85130f1-2801-40b1-b1d2-3cfe0b6aee57" (UID: "f85130f1-2801-40b1-b1d2-3cfe0b6aee57"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:29:29 crc kubenswrapper[4970]: I1209 12:29:29.946461 4970 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f85130f1-2801-40b1-b1d2-3cfe0b6aee57-logs\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:29 crc kubenswrapper[4970]: I1209 12:29:29.955572 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85130f1-2801-40b1-b1d2-3cfe0b6aee57-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f85130f1-2801-40b1-b1d2-3cfe0b6aee57" (UID: "f85130f1-2801-40b1-b1d2-3cfe0b6aee57"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:29 crc kubenswrapper[4970]: I1209 12:29:29.967694 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f85130f1-2801-40b1-b1d2-3cfe0b6aee57-kube-api-access-vjgsj" (OuterVolumeSpecName: "kube-api-access-vjgsj") pod "f85130f1-2801-40b1-b1d2-3cfe0b6aee57" (UID: "f85130f1-2801-40b1-b1d2-3cfe0b6aee57"). InnerVolumeSpecName "kube-api-access-vjgsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:29:30 crc kubenswrapper[4970]: I1209 12:29:30.029367 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85130f1-2801-40b1-b1d2-3cfe0b6aee57-config-data" (OuterVolumeSpecName: "config-data") pod "f85130f1-2801-40b1-b1d2-3cfe0b6aee57" (UID: "f85130f1-2801-40b1-b1d2-3cfe0b6aee57"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:30 crc kubenswrapper[4970]: I1209 12:29:30.033521 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f85130f1-2801-40b1-b1d2-3cfe0b6aee57-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f85130f1-2801-40b1-b1d2-3cfe0b6aee57" (UID: "f85130f1-2801-40b1-b1d2-3cfe0b6aee57"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:30 crc kubenswrapper[4970]: I1209 12:29:30.056216 4970 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f85130f1-2801-40b1-b1d2-3cfe0b6aee57-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:30 crc kubenswrapper[4970]: I1209 12:29:30.056278 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjgsj\" (UniqueName: \"kubernetes.io/projected/f85130f1-2801-40b1-b1d2-3cfe0b6aee57-kube-api-access-vjgsj\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:30 crc kubenswrapper[4970]: I1209 12:29:30.056294 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f85130f1-2801-40b1-b1d2-3cfe0b6aee57-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:30 crc kubenswrapper[4970]: I1209 12:29:30.056306 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f85130f1-2801-40b1-b1d2-3cfe0b6aee57-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:30 crc kubenswrapper[4970]: I1209 12:29:30.749647 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-659c886f58-rf5rp" event={"ID":"f85130f1-2801-40b1-b1d2-3cfe0b6aee57","Type":"ContainerDied","Data":"f789c9ba5d5a0f9aa3301c911112594e569bc3b6176181a79d1f79cb4f794202"} Dec 09 12:29:30 crc kubenswrapper[4970]: I1209 12:29:30.749682 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-659c886f58-rf5rp" Dec 09 12:29:30 crc kubenswrapper[4970]: I1209 12:29:30.750007 4970 scope.go:117] "RemoveContainer" containerID="2574a2f04562eee00b2f34a5279868eea923424d941c81e70f8bf3311273d595" Dec 09 12:29:30 crc kubenswrapper[4970]: I1209 12:29:30.790589 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-659c886f58-rf5rp"] Dec 09 12:29:30 crc kubenswrapper[4970]: I1209 12:29:30.800759 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-659c886f58-rf5rp"] Dec 09 12:29:30 crc kubenswrapper[4970]: I1209 12:29:30.806619 4970 scope.go:117] "RemoveContainer" containerID="9d76518a82e06dc234546f740a382a793257f3a107e2a2e937e90f24de4fe6a9" Dec 09 12:29:31 crc kubenswrapper[4970]: I1209 12:29:31.840695 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85130f1-2801-40b1-b1d2-3cfe0b6aee57" path="/var/lib/kubelet/pods/f85130f1-2801-40b1-b1d2-3cfe0b6aee57/volumes" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.155845 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.329669 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-8679c95d76-9dntl"] Dec 09 12:29:32 crc kubenswrapper[4970]: E1209 12:29:32.330228 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85130f1-2801-40b1-b1d2-3cfe0b6aee57" containerName="barbican-api" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.330271 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85130f1-2801-40b1-b1d2-3cfe0b6aee57" containerName="barbican-api" Dec 09 12:29:32 crc kubenswrapper[4970]: E1209 12:29:32.330305 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85130f1-2801-40b1-b1d2-3cfe0b6aee57" containerName="barbican-api-log" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.330315 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85130f1-2801-40b1-b1d2-3cfe0b6aee57" containerName="barbican-api-log" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.330575 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85130f1-2801-40b1-b1d2-3cfe0b6aee57" containerName="barbican-api-log" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.330614 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85130f1-2801-40b1-b1d2-3cfe0b6aee57" containerName="barbican-api" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.331462 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-8679c95d76-9dntl" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.339917 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-lp2xp" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.339923 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.340165 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.358007 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-8679c95d76-9dntl"] Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.423837 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dec002e4-c154-4c9b-8e5c-391bd2c2ce8a-config-data\") pod \"heat-engine-8679c95d76-9dntl\" (UID: \"dec002e4-c154-4c9b-8e5c-391bd2c2ce8a\") " pod="openstack/heat-engine-8679c95d76-9dntl" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.423938 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dec002e4-c154-4c9b-8e5c-391bd2c2ce8a-config-data-custom\") pod \"heat-engine-8679c95d76-9dntl\" (UID: \"dec002e4-c154-4c9b-8e5c-391bd2c2ce8a\") " pod="openstack/heat-engine-8679c95d76-9dntl" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.424050 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dec002e4-c154-4c9b-8e5c-391bd2c2ce8a-combined-ca-bundle\") pod \"heat-engine-8679c95d76-9dntl\" (UID: \"dec002e4-c154-4c9b-8e5c-391bd2c2ce8a\") " pod="openstack/heat-engine-8679c95d76-9dntl" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.424598 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2wfk\" (UniqueName: \"kubernetes.io/projected/dec002e4-c154-4c9b-8e5c-391bd2c2ce8a-kube-api-access-t2wfk\") pod \"heat-engine-8679c95d76-9dntl\" (UID: \"dec002e4-c154-4c9b-8e5c-391bd2c2ce8a\") " pod="openstack/heat-engine-8679c95d76-9dntl" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.526218 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2wfk\" (UniqueName: \"kubernetes.io/projected/dec002e4-c154-4c9b-8e5c-391bd2c2ce8a-kube-api-access-t2wfk\") pod \"heat-engine-8679c95d76-9dntl\" (UID: \"dec002e4-c154-4c9b-8e5c-391bd2c2ce8a\") " pod="openstack/heat-engine-8679c95d76-9dntl" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.526282 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dec002e4-c154-4c9b-8e5c-391bd2c2ce8a-config-data\") pod \"heat-engine-8679c95d76-9dntl\" (UID: \"dec002e4-c154-4c9b-8e5c-391bd2c2ce8a\") " pod="openstack/heat-engine-8679c95d76-9dntl" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.526344 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dec002e4-c154-4c9b-8e5c-391bd2c2ce8a-config-data-custom\") pod \"heat-engine-8679c95d76-9dntl\" (UID: \"dec002e4-c154-4c9b-8e5c-391bd2c2ce8a\") " 
pod="openstack/heat-engine-8679c95d76-9dntl" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.526364 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dec002e4-c154-4c9b-8e5c-391bd2c2ce8a-combined-ca-bundle\") pod \"heat-engine-8679c95d76-9dntl\" (UID: \"dec002e4-c154-4c9b-8e5c-391bd2c2ce8a\") " pod="openstack/heat-engine-8679c95d76-9dntl" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.541171 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dec002e4-c154-4c9b-8e5c-391bd2c2ce8a-combined-ca-bundle\") pod \"heat-engine-8679c95d76-9dntl\" (UID: \"dec002e4-c154-4c9b-8e5c-391bd2c2ce8a\") " pod="openstack/heat-engine-8679c95d76-9dntl" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.541379 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dec002e4-c154-4c9b-8e5c-391bd2c2ce8a-config-data-custom\") pod \"heat-engine-8679c95d76-9dntl\" (UID: \"dec002e4-c154-4c9b-8e5c-391bd2c2ce8a\") " pod="openstack/heat-engine-8679c95d76-9dntl" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.545835 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dec002e4-c154-4c9b-8e5c-391bd2c2ce8a-config-data\") pod \"heat-engine-8679c95d76-9dntl\" (UID: \"dec002e4-c154-4c9b-8e5c-391bd2c2ce8a\") " pod="openstack/heat-engine-8679c95d76-9dntl" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.547796 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-5ccf77f85c-b5zqk"] Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.549760 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5ccf77f85c-b5zqk" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.560352 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.562109 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2wfk\" (UniqueName: \"kubernetes.io/projected/dec002e4-c154-4c9b-8e5c-391bd2c2ce8a-kube-api-access-t2wfk\") pod \"heat-engine-8679c95d76-9dntl\" (UID: \"dec002e4-c154-4c9b-8e5c-391bd2c2ce8a\") " pod="openstack/heat-engine-8679c95d76-9dntl" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.574392 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5ccf77f85c-b5zqk"] Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.592798 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-5ctr9"] Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.629936 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4e7db93-1016-47dd-a978-bef88b6592ae-combined-ca-bundle\") pod \"heat-cfnapi-5ccf77f85c-b5zqk\" (UID: \"d4e7db93-1016-47dd-a978-bef88b6592ae\") " pod="openstack/heat-cfnapi-5ccf77f85c-b5zqk" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.630009 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4e7db93-1016-47dd-a978-bef88b6592ae-config-data\") pod \"heat-cfnapi-5ccf77f85c-b5zqk\" (UID: \"d4e7db93-1016-47dd-a978-bef88b6592ae\") " pod="openstack/heat-cfnapi-5ccf77f85c-b5zqk" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.630028 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzswx\" (UniqueName: \"kubernetes.io/projected/d4e7db93-1016-47dd-a978-bef88b6592ae-kube-api-access-pzswx\") pod \"heat-cfnapi-5ccf77f85c-b5zqk\" (UID: \"d4e7db93-1016-47dd-a978-bef88b6592ae\") " pod="openstack/heat-cfnapi-5ccf77f85c-b5zqk" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.630126 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4e7db93-1016-47dd-a978-bef88b6592ae-config-data-custom\") pod \"heat-cfnapi-5ccf77f85c-b5zqk\" (UID: \"d4e7db93-1016-47dd-a978-bef88b6592ae\") " pod="openstack/heat-cfnapi-5ccf77f85c-b5zqk" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.651155 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-5ctr9"] Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.651606 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-5ctr9" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.683932 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-8679c95d76-9dntl" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.689154 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6b9b88c557-ds6f2"] Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.690839 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6b9b88c557-ds6f2" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.700498 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.722677 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6b9b88c557-ds6f2"] Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.732713 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4e7db93-1016-47dd-a978-bef88b6592ae-combined-ca-bundle\") pod \"heat-cfnapi-5ccf77f85c-b5zqk\" (UID: \"d4e7db93-1016-47dd-a978-bef88b6592ae\") " pod="openstack/heat-cfnapi-5ccf77f85c-b5zqk" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.732774 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e36a2df8-01ba-4868-8903-8b753488ea78-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-5ctr9\" (UID: \"e36a2df8-01ba-4868-8903-8b753488ea78\") " pod="openstack/dnsmasq-dns-7756b9d78c-5ctr9" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.732801 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e36a2df8-01ba-4868-8903-8b753488ea78-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-5ctr9\" (UID: \"e36a2df8-01ba-4868-8903-8b753488ea78\") " pod="openstack/dnsmasq-dns-7756b9d78c-5ctr9" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.732852 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4e7db93-1016-47dd-a978-bef88b6592ae-config-data\") pod \"heat-cfnapi-5ccf77f85c-b5zqk\" (UID: \"d4e7db93-1016-47dd-a978-bef88b6592ae\") " pod="openstack/heat-cfnapi-5ccf77f85c-b5zqk" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.732873 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzswx\" (UniqueName: \"kubernetes.io/projected/d4e7db93-1016-47dd-a978-bef88b6592ae-kube-api-access-pzswx\") pod \"heat-cfnapi-5ccf77f85c-b5zqk\" (UID: \"d4e7db93-1016-47dd-a978-bef88b6592ae\") " pod="openstack/heat-cfnapi-5ccf77f85c-b5zqk" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.732897 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f9344ba-cd60-48a8-bbe9-304b8fef2cb0-config-data\") pod \"heat-api-6b9b88c557-ds6f2\" (UID: \"9f9344ba-cd60-48a8-bbe9-304b8fef2cb0\") " pod="openstack/heat-api-6b9b88c557-ds6f2" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.732924 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl6gs\" (UniqueName: \"kubernetes.io/projected/9f9344ba-cd60-48a8-bbe9-304b8fef2cb0-kube-api-access-bl6gs\") pod \"heat-api-6b9b88c557-ds6f2\" (UID: \"9f9344ba-cd60-48a8-bbe9-304b8fef2cb0\") " pod="openstack/heat-api-6b9b88c557-ds6f2" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.732980 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f9344ba-cd60-48a8-bbe9-304b8fef2cb0-combined-ca-bundle\") pod \"heat-api-6b9b88c557-ds6f2\" (UID: \"9f9344ba-cd60-48a8-bbe9-304b8fef2cb0\") 
" pod="openstack/heat-api-6b9b88c557-ds6f2" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.733041 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f9344ba-cd60-48a8-bbe9-304b8fef2cb0-config-data-custom\") pod \"heat-api-6b9b88c557-ds6f2\" (UID: \"9f9344ba-cd60-48a8-bbe9-304b8fef2cb0\") " pod="openstack/heat-api-6b9b88c557-ds6f2" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.733077 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbnhd\" (UniqueName: \"kubernetes.io/projected/e36a2df8-01ba-4868-8903-8b753488ea78-kube-api-access-wbnhd\") pod \"dnsmasq-dns-7756b9d78c-5ctr9\" (UID: \"e36a2df8-01ba-4868-8903-8b753488ea78\") " pod="openstack/dnsmasq-dns-7756b9d78c-5ctr9" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.733099 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e36a2df8-01ba-4868-8903-8b753488ea78-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-5ctr9\" (UID: \"e36a2df8-01ba-4868-8903-8b753488ea78\") " pod="openstack/dnsmasq-dns-7756b9d78c-5ctr9" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.733136 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4e7db93-1016-47dd-a978-bef88b6592ae-config-data-custom\") pod \"heat-cfnapi-5ccf77f85c-b5zqk\" (UID: \"d4e7db93-1016-47dd-a978-bef88b6592ae\") " pod="openstack/heat-cfnapi-5ccf77f85c-b5zqk" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.733198 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e36a2df8-01ba-4868-8903-8b753488ea78-config\") pod \"dnsmasq-dns-7756b9d78c-5ctr9\" (UID: \"e36a2df8-01ba-4868-8903-8b753488ea78\") " pod="openstack/dnsmasq-dns-7756b9d78c-5ctr9" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.733233 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e36a2df8-01ba-4868-8903-8b753488ea78-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-5ctr9\" (UID: \"e36a2df8-01ba-4868-8903-8b753488ea78\") " pod="openstack/dnsmasq-dns-7756b9d78c-5ctr9" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.751546 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4e7db93-1016-47dd-a978-bef88b6592ae-combined-ca-bundle\") pod \"heat-cfnapi-5ccf77f85c-b5zqk\" (UID: \"d4e7db93-1016-47dd-a978-bef88b6592ae\") " pod="openstack/heat-cfnapi-5ccf77f85c-b5zqk" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.764238 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4e7db93-1016-47dd-a978-bef88b6592ae-config-data\") pod \"heat-cfnapi-5ccf77f85c-b5zqk\" (UID: \"d4e7db93-1016-47dd-a978-bef88b6592ae\") " pod="openstack/heat-cfnapi-5ccf77f85c-b5zqk" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.779149 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzswx\" (UniqueName: \"kubernetes.io/projected/d4e7db93-1016-47dd-a978-bef88b6592ae-kube-api-access-pzswx\") pod \"heat-cfnapi-5ccf77f85c-b5zqk\" (UID: 
\"d4e7db93-1016-47dd-a978-bef88b6592ae\") " pod="openstack/heat-cfnapi-5ccf77f85c-b5zqk" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.796972 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4e7db93-1016-47dd-a978-bef88b6592ae-config-data-custom\") pod \"heat-cfnapi-5ccf77f85c-b5zqk\" (UID: \"d4e7db93-1016-47dd-a978-bef88b6592ae\") " pod="openstack/heat-cfnapi-5ccf77f85c-b5zqk" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.837112 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f9344ba-cd60-48a8-bbe9-304b8fef2cb0-config-data-custom\") pod \"heat-api-6b9b88c557-ds6f2\" (UID: \"9f9344ba-cd60-48a8-bbe9-304b8fef2cb0\") " pod="openstack/heat-api-6b9b88c557-ds6f2" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.837157 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbnhd\" (UniqueName: \"kubernetes.io/projected/e36a2df8-01ba-4868-8903-8b753488ea78-kube-api-access-wbnhd\") pod \"dnsmasq-dns-7756b9d78c-5ctr9\" (UID: \"e36a2df8-01ba-4868-8903-8b753488ea78\") " pod="openstack/dnsmasq-dns-7756b9d78c-5ctr9" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.837178 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e36a2df8-01ba-4868-8903-8b753488ea78-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-5ctr9\" (UID: \"e36a2df8-01ba-4868-8903-8b753488ea78\") " pod="openstack/dnsmasq-dns-7756b9d78c-5ctr9" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.837234 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e36a2df8-01ba-4868-8903-8b753488ea78-config\") pod \"dnsmasq-dns-7756b9d78c-5ctr9\" (UID: \"e36a2df8-01ba-4868-8903-8b753488ea78\") " pod="openstack/dnsmasq-dns-7756b9d78c-5ctr9" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.837282 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e36a2df8-01ba-4868-8903-8b753488ea78-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-5ctr9\" (UID: \"e36a2df8-01ba-4868-8903-8b753488ea78\") " pod="openstack/dnsmasq-dns-7756b9d78c-5ctr9" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.837325 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e36a2df8-01ba-4868-8903-8b753488ea78-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-5ctr9\" (UID: \"e36a2df8-01ba-4868-8903-8b753488ea78\") " pod="openstack/dnsmasq-dns-7756b9d78c-5ctr9" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.837341 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e36a2df8-01ba-4868-8903-8b753488ea78-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-5ctr9\" (UID: \"e36a2df8-01ba-4868-8903-8b753488ea78\") " pod="openstack/dnsmasq-dns-7756b9d78c-5ctr9" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.837381 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f9344ba-cd60-48a8-bbe9-304b8fef2cb0-config-data\") pod \"heat-api-6b9b88c557-ds6f2\" (UID: \"9f9344ba-cd60-48a8-bbe9-304b8fef2cb0\") " 
pod="openstack/heat-api-6b9b88c557-ds6f2" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.837401 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bl6gs\" (UniqueName: \"kubernetes.io/projected/9f9344ba-cd60-48a8-bbe9-304b8fef2cb0-kube-api-access-bl6gs\") pod \"heat-api-6b9b88c557-ds6f2\" (UID: \"9f9344ba-cd60-48a8-bbe9-304b8fef2cb0\") " pod="openstack/heat-api-6b9b88c557-ds6f2" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.837441 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f9344ba-cd60-48a8-bbe9-304b8fef2cb0-combined-ca-bundle\") pod \"heat-api-6b9b88c557-ds6f2\" (UID: \"9f9344ba-cd60-48a8-bbe9-304b8fef2cb0\") " pod="openstack/heat-api-6b9b88c557-ds6f2" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.839442 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e36a2df8-01ba-4868-8903-8b753488ea78-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-5ctr9\" (UID: \"e36a2df8-01ba-4868-8903-8b753488ea78\") " pod="openstack/dnsmasq-dns-7756b9d78c-5ctr9" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.840274 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e36a2df8-01ba-4868-8903-8b753488ea78-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-5ctr9\" (UID: \"e36a2df8-01ba-4868-8903-8b753488ea78\") " pod="openstack/dnsmasq-dns-7756b9d78c-5ctr9" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.840880 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e36a2df8-01ba-4868-8903-8b753488ea78-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-5ctr9\" (UID: \"e36a2df8-01ba-4868-8903-8b753488ea78\") " pod="openstack/dnsmasq-dns-7756b9d78c-5ctr9" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.841706 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e36a2df8-01ba-4868-8903-8b753488ea78-config\") pod \"dnsmasq-dns-7756b9d78c-5ctr9\" (UID: \"e36a2df8-01ba-4868-8903-8b753488ea78\") " pod="openstack/dnsmasq-dns-7756b9d78c-5ctr9" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.841852 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e36a2df8-01ba-4868-8903-8b753488ea78-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-5ctr9\" (UID: \"e36a2df8-01ba-4868-8903-8b753488ea78\") " pod="openstack/dnsmasq-dns-7756b9d78c-5ctr9" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.849599 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f9344ba-cd60-48a8-bbe9-304b8fef2cb0-config-data\") pod \"heat-api-6b9b88c557-ds6f2\" (UID: \"9f9344ba-cd60-48a8-bbe9-304b8fef2cb0\") " pod="openstack/heat-api-6b9b88c557-ds6f2" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.856031 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f9344ba-cd60-48a8-bbe9-304b8fef2cb0-combined-ca-bundle\") pod \"heat-api-6b9b88c557-ds6f2\" (UID: \"9f9344ba-cd60-48a8-bbe9-304b8fef2cb0\") " pod="openstack/heat-api-6b9b88c557-ds6f2" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.867856 4970 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bl6gs\" (UniqueName: \"kubernetes.io/projected/9f9344ba-cd60-48a8-bbe9-304b8fef2cb0-kube-api-access-bl6gs\") pod \"heat-api-6b9b88c557-ds6f2\" (UID: \"9f9344ba-cd60-48a8-bbe9-304b8fef2cb0\") " pod="openstack/heat-api-6b9b88c557-ds6f2" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.868569 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f9344ba-cd60-48a8-bbe9-304b8fef2cb0-config-data-custom\") pod \"heat-api-6b9b88c557-ds6f2\" (UID: \"9f9344ba-cd60-48a8-bbe9-304b8fef2cb0\") " pod="openstack/heat-api-6b9b88c557-ds6f2" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.886992 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbnhd\" (UniqueName: \"kubernetes.io/projected/e36a2df8-01ba-4868-8903-8b753488ea78-kube-api-access-wbnhd\") pod \"dnsmasq-dns-7756b9d78c-5ctr9\" (UID: \"e36a2df8-01ba-4868-8903-8b753488ea78\") " pod="openstack/dnsmasq-dns-7756b9d78c-5ctr9" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.911690 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-5ctr9" Dec 09 12:29:32 crc kubenswrapper[4970]: I1209 12:29:32.926675 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6b9b88c557-ds6f2" Dec 09 12:29:33 crc kubenswrapper[4970]: I1209 12:29:33.022881 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5ccf77f85c-b5zqk" Dec 09 12:29:33 crc kubenswrapper[4970]: I1209 12:29:33.741302 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-8679c95d76-9dntl"] Dec 09 12:29:33 crc kubenswrapper[4970]: I1209 12:29:33.865530 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-8679c95d76-9dntl" event={"ID":"dec002e4-c154-4c9b-8e5c-391bd2c2ce8a","Type":"ContainerStarted","Data":"7b67f42603dde57367769224eb4aca8194f0caa1dbac63a2d27b6b62851d199c"} Dec 09 12:29:33 crc kubenswrapper[4970]: I1209 12:29:33.962929 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6b9b88c557-ds6f2"] Dec 09 12:29:34 crc kubenswrapper[4970]: W1209 12:29:34.041008 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4e7db93_1016_47dd_a978_bef88b6592ae.slice/crio-6c110e8bd050cbe6f228e583fd6977824f07ab5ad78567d805250d0c29224746 WatchSource:0}: Error finding container 6c110e8bd050cbe6f228e583fd6977824f07ab5ad78567d805250d0c29224746: Status 404 returned error can't find the container with id 6c110e8bd050cbe6f228e583fd6977824f07ab5ad78567d805250d0c29224746 Dec 09 12:29:34 crc kubenswrapper[4970]: I1209 12:29:34.043507 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5ccf77f85c-b5zqk"] Dec 09 12:29:34 crc kubenswrapper[4970]: I1209 12:29:34.202568 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-5ctr9"] Dec 09 12:29:34 crc kubenswrapper[4970]: W1209 12:29:34.208444 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode36a2df8_01ba_4868_8903_8b753488ea78.slice/crio-2fc4b2647ad75948d033ce816cad22b2f745538d84240a5e38452c9b03ad6509 WatchSource:0}: Error finding container 2fc4b2647ad75948d033ce816cad22b2f745538d84240a5e38452c9b03ad6509: Status 404 
returned error can't find the container with id 2fc4b2647ad75948d033ce816cad22b2f745538d84240a5e38452c9b03ad6509 Dec 09 12:29:34 crc kubenswrapper[4970]: I1209 12:29:34.603922 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-659c886f58-rf5rp" podUID="f85130f1-2801-40b1-b1d2-3cfe0b6aee57" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.195:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 12:29:34 crc kubenswrapper[4970]: I1209 12:29:34.604964 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-659c886f58-rf5rp" podUID="f85130f1-2801-40b1-b1d2-3cfe0b6aee57" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.195:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 12:29:34 crc kubenswrapper[4970]: I1209 12:29:34.626418 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 09 12:29:34 crc kubenswrapper[4970]: I1209 12:29:34.888541 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5ccf77f85c-b5zqk" event={"ID":"d4e7db93-1016-47dd-a978-bef88b6592ae","Type":"ContainerStarted","Data":"6c110e8bd050cbe6f228e583fd6977824f07ab5ad78567d805250d0c29224746"} Dec 09 12:29:34 crc kubenswrapper[4970]: I1209 12:29:34.890846 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-5ctr9" event={"ID":"e36a2df8-01ba-4868-8903-8b753488ea78","Type":"ContainerStarted","Data":"2fc4b2647ad75948d033ce816cad22b2f745538d84240a5e38452c9b03ad6509"} Dec 09 12:29:34 crc kubenswrapper[4970]: I1209 12:29:34.892655 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6b9b88c557-ds6f2" event={"ID":"9f9344ba-cd60-48a8-bbe9-304b8fef2cb0","Type":"ContainerStarted","Data":"c3b646cd6c75b7f3ad9128ff1a4f9716ab0b140876eba52b629d75d8fb1e2809"} Dec 09 12:29:35 crc kubenswrapper[4970]: I1209 12:29:35.612418 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:29:35 crc kubenswrapper[4970]: I1209 12:29:35.612948 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c2a60c14-d830-48c0-ab9e-d66e1ad768b0" containerName="ceilometer-central-agent" containerID="cri-o://981ff28711a9d13ecdcb4b17d9c5aea021cb200bde0bdde4fddc84731851afb3" gracePeriod=30 Dec 09 12:29:35 crc kubenswrapper[4970]: I1209 12:29:35.614068 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c2a60c14-d830-48c0-ab9e-d66e1ad768b0" containerName="sg-core" containerID="cri-o://686b2bcf0b8fb6b546d34bb72b7518713b07abea4fe73d95e3f92a7d6fc59c38" gracePeriod=30 Dec 09 12:29:35 crc kubenswrapper[4970]: I1209 12:29:35.614217 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c2a60c14-d830-48c0-ab9e-d66e1ad768b0" containerName="proxy-httpd" containerID="cri-o://da70dbe51a82df07c19619303502d069f21293c6e8895924a98327bac64b14e9" gracePeriod=30 Dec 09 12:29:35 crc kubenswrapper[4970]: I1209 12:29:35.614304 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c2a60c14-d830-48c0-ab9e-d66e1ad768b0" containerName="ceilometer-notification-agent" containerID="cri-o://8192987060e21cf2ff856a260680cd25061367624c144b36669013a836276c92" 
gracePeriod=30 Dec 09 12:29:35 crc kubenswrapper[4970]: I1209 12:29:35.839957 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="c2a60c14-d830-48c0-ab9e-d66e1ad768b0" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.196:3000/\": read tcp 10.217.0.2:53940->10.217.0.196:3000: read: connection reset by peer" Dec 09 12:29:35 crc kubenswrapper[4970]: I1209 12:29:35.925885 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-8679c95d76-9dntl" event={"ID":"dec002e4-c154-4c9b-8e5c-391bd2c2ce8a","Type":"ContainerStarted","Data":"a715dc7aa464f31deadee24e154cd31a9cd1a9d73088b71939fc1e477617749e"} Dec 09 12:29:35 crc kubenswrapper[4970]: I1209 12:29:35.925972 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-8679c95d76-9dntl" Dec 09 12:29:35 crc kubenswrapper[4970]: I1209 12:29:35.945171 4970 generic.go:334] "Generic (PLEG): container finished" podID="e36a2df8-01ba-4868-8903-8b753488ea78" containerID="fa317a5adf03adfe5fa2c741c5ef850b1b5231994e1df3c50dbb109bf4fe34e5" exitCode=0 Dec 09 12:29:35 crc kubenswrapper[4970]: I1209 12:29:35.945338 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-5ctr9" event={"ID":"e36a2df8-01ba-4868-8903-8b753488ea78","Type":"ContainerDied","Data":"fa317a5adf03adfe5fa2c741c5ef850b1b5231994e1df3c50dbb109bf4fe34e5"} Dec 09 12:29:35 crc kubenswrapper[4970]: I1209 12:29:35.960120 4970 generic.go:334] "Generic (PLEG): container finished" podID="c2a60c14-d830-48c0-ab9e-d66e1ad768b0" containerID="686b2bcf0b8fb6b546d34bb72b7518713b07abea4fe73d95e3f92a7d6fc59c38" exitCode=2 Dec 09 12:29:35 crc kubenswrapper[4970]: I1209 12:29:35.960173 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2a60c14-d830-48c0-ab9e-d66e1ad768b0","Type":"ContainerDied","Data":"686b2bcf0b8fb6b546d34bb72b7518713b07abea4fe73d95e3f92a7d6fc59c38"} Dec 09 12:29:35 crc kubenswrapper[4970]: I1209 12:29:35.970730 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-8679c95d76-9dntl" podStartSLOduration=3.970708248 podStartE2EDuration="3.970708248s" podCreationTimestamp="2025-12-09 12:29:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:29:35.946980579 +0000 UTC m=+1388.507461630" watchObservedRunningTime="2025-12-09 12:29:35.970708248 +0000 UTC m=+1388.531189299" Dec 09 12:29:36 crc kubenswrapper[4970]: I1209 12:29:36.619372 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="c2a60c14-d830-48c0-ab9e-d66e1ad768b0" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.196:3000/\": dial tcp 10.217.0.196:3000: connect: connection refused" Dec 09 12:29:36 crc kubenswrapper[4970]: I1209 12:29:36.890469 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-85bf8b6f7-tq5wr"] Dec 09 12:29:36 crc kubenswrapper[4970]: I1209 12:29:36.892804 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-85bf8b6f7-tq5wr" Dec 09 12:29:36 crc kubenswrapper[4970]: I1209 12:29:36.894802 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Dec 09 12:29:36 crc kubenswrapper[4970]: I1209 12:29:36.896295 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 09 12:29:36 crc kubenswrapper[4970]: I1209 12:29:36.900640 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Dec 09 12:29:36 crc kubenswrapper[4970]: I1209 12:29:36.918409 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-85bf8b6f7-tq5wr"] Dec 09 12:29:37 crc kubenswrapper[4970]: I1209 12:29:37.003877 4970 generic.go:334] "Generic (PLEG): container finished" podID="c2a60c14-d830-48c0-ab9e-d66e1ad768b0" containerID="da70dbe51a82df07c19619303502d069f21293c6e8895924a98327bac64b14e9" exitCode=0 Dec 09 12:29:37 crc kubenswrapper[4970]: I1209 12:29:37.003910 4970 generic.go:334] "Generic (PLEG): container finished" podID="c2a60c14-d830-48c0-ab9e-d66e1ad768b0" containerID="981ff28711a9d13ecdcb4b17d9c5aea021cb200bde0bdde4fddc84731851afb3" exitCode=0 Dec 09 12:29:37 crc kubenswrapper[4970]: I1209 12:29:37.004705 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2a60c14-d830-48c0-ab9e-d66e1ad768b0","Type":"ContainerDied","Data":"da70dbe51a82df07c19619303502d069f21293c6e8895924a98327bac64b14e9"} Dec 09 12:29:37 crc kubenswrapper[4970]: I1209 12:29:37.004753 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2a60c14-d830-48c0-ab9e-d66e1ad768b0","Type":"ContainerDied","Data":"981ff28711a9d13ecdcb4b17d9c5aea021cb200bde0bdde4fddc84731851afb3"} Dec 09 12:29:37 crc kubenswrapper[4970]: I1209 12:29:37.045325 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ac21407c-a381-4cbb-b26e-9556d92ae621-etc-swift\") pod \"swift-proxy-85bf8b6f7-tq5wr\" (UID: \"ac21407c-a381-4cbb-b26e-9556d92ae621\") " pod="openstack/swift-proxy-85bf8b6f7-tq5wr" Dec 09 12:29:37 crc kubenswrapper[4970]: I1209 12:29:37.045364 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac21407c-a381-4cbb-b26e-9556d92ae621-public-tls-certs\") pod \"swift-proxy-85bf8b6f7-tq5wr\" (UID: \"ac21407c-a381-4cbb-b26e-9556d92ae621\") " pod="openstack/swift-proxy-85bf8b6f7-tq5wr" Dec 09 12:29:37 crc kubenswrapper[4970]: I1209 12:29:37.045457 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xckjv\" (UniqueName: \"kubernetes.io/projected/ac21407c-a381-4cbb-b26e-9556d92ae621-kube-api-access-xckjv\") pod \"swift-proxy-85bf8b6f7-tq5wr\" (UID: \"ac21407c-a381-4cbb-b26e-9556d92ae621\") " pod="openstack/swift-proxy-85bf8b6f7-tq5wr" Dec 09 12:29:37 crc kubenswrapper[4970]: I1209 12:29:37.045507 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac21407c-a381-4cbb-b26e-9556d92ae621-config-data\") pod \"swift-proxy-85bf8b6f7-tq5wr\" (UID: \"ac21407c-a381-4cbb-b26e-9556d92ae621\") " pod="openstack/swift-proxy-85bf8b6f7-tq5wr" Dec 09 12:29:37 crc kubenswrapper[4970]: I1209 12:29:37.045564 4970 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac21407c-a381-4cbb-b26e-9556d92ae621-log-httpd\") pod \"swift-proxy-85bf8b6f7-tq5wr\" (UID: \"ac21407c-a381-4cbb-b26e-9556d92ae621\") " pod="openstack/swift-proxy-85bf8b6f7-tq5wr" Dec 09 12:29:37 crc kubenswrapper[4970]: I1209 12:29:37.045722 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac21407c-a381-4cbb-b26e-9556d92ae621-internal-tls-certs\") pod \"swift-proxy-85bf8b6f7-tq5wr\" (UID: \"ac21407c-a381-4cbb-b26e-9556d92ae621\") " pod="openstack/swift-proxy-85bf8b6f7-tq5wr" Dec 09 12:29:37 crc kubenswrapper[4970]: I1209 12:29:37.045840 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac21407c-a381-4cbb-b26e-9556d92ae621-combined-ca-bundle\") pod \"swift-proxy-85bf8b6f7-tq5wr\" (UID: \"ac21407c-a381-4cbb-b26e-9556d92ae621\") " pod="openstack/swift-proxy-85bf8b6f7-tq5wr" Dec 09 12:29:37 crc kubenswrapper[4970]: I1209 12:29:37.045860 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac21407c-a381-4cbb-b26e-9556d92ae621-run-httpd\") pod \"swift-proxy-85bf8b6f7-tq5wr\" (UID: \"ac21407c-a381-4cbb-b26e-9556d92ae621\") " pod="openstack/swift-proxy-85bf8b6f7-tq5wr" Dec 09 12:29:37 crc kubenswrapper[4970]: I1209 12:29:37.147909 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ac21407c-a381-4cbb-b26e-9556d92ae621-etc-swift\") pod \"swift-proxy-85bf8b6f7-tq5wr\" (UID: \"ac21407c-a381-4cbb-b26e-9556d92ae621\") " pod="openstack/swift-proxy-85bf8b6f7-tq5wr" Dec 09 12:29:37 crc kubenswrapper[4970]: I1209 12:29:37.147952 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac21407c-a381-4cbb-b26e-9556d92ae621-public-tls-certs\") pod \"swift-proxy-85bf8b6f7-tq5wr\" (UID: \"ac21407c-a381-4cbb-b26e-9556d92ae621\") " pod="openstack/swift-proxy-85bf8b6f7-tq5wr" Dec 09 12:29:37 crc kubenswrapper[4970]: I1209 12:29:37.148007 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xckjv\" (UniqueName: \"kubernetes.io/projected/ac21407c-a381-4cbb-b26e-9556d92ae621-kube-api-access-xckjv\") pod \"swift-proxy-85bf8b6f7-tq5wr\" (UID: \"ac21407c-a381-4cbb-b26e-9556d92ae621\") " pod="openstack/swift-proxy-85bf8b6f7-tq5wr" Dec 09 12:29:37 crc kubenswrapper[4970]: I1209 12:29:37.148043 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac21407c-a381-4cbb-b26e-9556d92ae621-config-data\") pod \"swift-proxy-85bf8b6f7-tq5wr\" (UID: \"ac21407c-a381-4cbb-b26e-9556d92ae621\") " pod="openstack/swift-proxy-85bf8b6f7-tq5wr" Dec 09 12:29:37 crc kubenswrapper[4970]: I1209 12:29:37.148125 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac21407c-a381-4cbb-b26e-9556d92ae621-log-httpd\") pod \"swift-proxy-85bf8b6f7-tq5wr\" (UID: \"ac21407c-a381-4cbb-b26e-9556d92ae621\") " pod="openstack/swift-proxy-85bf8b6f7-tq5wr" Dec 09 12:29:37 crc kubenswrapper[4970]: I1209 12:29:37.148201 4970 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac21407c-a381-4cbb-b26e-9556d92ae621-internal-tls-certs\") pod \"swift-proxy-85bf8b6f7-tq5wr\" (UID: \"ac21407c-a381-4cbb-b26e-9556d92ae621\") " pod="openstack/swift-proxy-85bf8b6f7-tq5wr" Dec 09 12:29:37 crc kubenswrapper[4970]: I1209 12:29:37.148317 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac21407c-a381-4cbb-b26e-9556d92ae621-combined-ca-bundle\") pod \"swift-proxy-85bf8b6f7-tq5wr\" (UID: \"ac21407c-a381-4cbb-b26e-9556d92ae621\") " pod="openstack/swift-proxy-85bf8b6f7-tq5wr" Dec 09 12:29:37 crc kubenswrapper[4970]: I1209 12:29:37.148345 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac21407c-a381-4cbb-b26e-9556d92ae621-run-httpd\") pod \"swift-proxy-85bf8b6f7-tq5wr\" (UID: \"ac21407c-a381-4cbb-b26e-9556d92ae621\") " pod="openstack/swift-proxy-85bf8b6f7-tq5wr" Dec 09 12:29:37 crc kubenswrapper[4970]: I1209 12:29:37.148960 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac21407c-a381-4cbb-b26e-9556d92ae621-run-httpd\") pod \"swift-proxy-85bf8b6f7-tq5wr\" (UID: \"ac21407c-a381-4cbb-b26e-9556d92ae621\") " pod="openstack/swift-proxy-85bf8b6f7-tq5wr" Dec 09 12:29:37 crc kubenswrapper[4970]: I1209 12:29:37.150608 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac21407c-a381-4cbb-b26e-9556d92ae621-log-httpd\") pod \"swift-proxy-85bf8b6f7-tq5wr\" (UID: \"ac21407c-a381-4cbb-b26e-9556d92ae621\") " pod="openstack/swift-proxy-85bf8b6f7-tq5wr" Dec 09 12:29:37 crc kubenswrapper[4970]: I1209 12:29:37.153080 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac21407c-a381-4cbb-b26e-9556d92ae621-internal-tls-certs\") pod \"swift-proxy-85bf8b6f7-tq5wr\" (UID: \"ac21407c-a381-4cbb-b26e-9556d92ae621\") " pod="openstack/swift-proxy-85bf8b6f7-tq5wr" Dec 09 12:29:37 crc kubenswrapper[4970]: I1209 12:29:37.154172 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac21407c-a381-4cbb-b26e-9556d92ae621-public-tls-certs\") pod \"swift-proxy-85bf8b6f7-tq5wr\" (UID: \"ac21407c-a381-4cbb-b26e-9556d92ae621\") " pod="openstack/swift-proxy-85bf8b6f7-tq5wr" Dec 09 12:29:37 crc kubenswrapper[4970]: I1209 12:29:37.154472 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ac21407c-a381-4cbb-b26e-9556d92ae621-etc-swift\") pod \"swift-proxy-85bf8b6f7-tq5wr\" (UID: \"ac21407c-a381-4cbb-b26e-9556d92ae621\") " pod="openstack/swift-proxy-85bf8b6f7-tq5wr" Dec 09 12:29:37 crc kubenswrapper[4970]: I1209 12:29:37.155136 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac21407c-a381-4cbb-b26e-9556d92ae621-config-data\") pod \"swift-proxy-85bf8b6f7-tq5wr\" (UID: \"ac21407c-a381-4cbb-b26e-9556d92ae621\") " pod="openstack/swift-proxy-85bf8b6f7-tq5wr" Dec 09 12:29:37 crc kubenswrapper[4970]: I1209 12:29:37.156034 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ac21407c-a381-4cbb-b26e-9556d92ae621-combined-ca-bundle\") pod \"swift-proxy-85bf8b6f7-tq5wr\" (UID: \"ac21407c-a381-4cbb-b26e-9556d92ae621\") " pod="openstack/swift-proxy-85bf8b6f7-tq5wr" Dec 09 12:29:37 crc kubenswrapper[4970]: I1209 12:29:37.171625 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xckjv\" (UniqueName: \"kubernetes.io/projected/ac21407c-a381-4cbb-b26e-9556d92ae621-kube-api-access-xckjv\") pod \"swift-proxy-85bf8b6f7-tq5wr\" (UID: \"ac21407c-a381-4cbb-b26e-9556d92ae621\") " pod="openstack/swift-proxy-85bf8b6f7-tq5wr" Dec 09 12:29:37 crc kubenswrapper[4970]: I1209 12:29:37.211455 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-85bf8b6f7-tq5wr" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.028769 4970 generic.go:334] "Generic (PLEG): container finished" podID="c2a60c14-d830-48c0-ab9e-d66e1ad768b0" containerID="8192987060e21cf2ff856a260680cd25061367624c144b36669013a836276c92" exitCode=0 Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.028832 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2a60c14-d830-48c0-ab9e-d66e1ad768b0","Type":"ContainerDied","Data":"8192987060e21cf2ff856a260680cd25061367624c144b36669013a836276c92"} Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.537806 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-8ff4d7fb5-kwj5g"] Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.539734 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-8ff4d7fb5-kwj5g" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.549965 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-8655ffb758-nbr8w"] Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.551473 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-8655ffb758-nbr8w" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.565738 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-6996656b77-zt25p"] Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.567359 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-6996656b77-zt25p" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.579496 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-8ff4d7fb5-kwj5g"] Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.607234 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-8655ffb758-nbr8w"] Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.623238 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57effba7-488e-4045-8da6-83bf8e4770d8-config-data\") pod \"heat-cfnapi-8655ffb758-nbr8w\" (UID: \"57effba7-488e-4045-8da6-83bf8e4770d8\") " pod="openstack/heat-cfnapi-8655ffb758-nbr8w" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.623390 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57effba7-488e-4045-8da6-83bf8e4770d8-config-data-custom\") pod \"heat-cfnapi-8655ffb758-nbr8w\" (UID: \"57effba7-488e-4045-8da6-83bf8e4770d8\") " pod="openstack/heat-cfnapi-8655ffb758-nbr8w" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.623416 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c25a77e0-9bbd-4ff2-b53c-ecb4712198b1-config-data-custom\") pod \"heat-engine-6996656b77-zt25p\" (UID: \"c25a77e0-9bbd-4ff2-b53c-ecb4712198b1\") " pod="openstack/heat-engine-6996656b77-zt25p" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.623485 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c25a77e0-9bbd-4ff2-b53c-ecb4712198b1-combined-ca-bundle\") pod \"heat-engine-6996656b77-zt25p\" (UID: \"c25a77e0-9bbd-4ff2-b53c-ecb4712198b1\") " pod="openstack/heat-engine-6996656b77-zt25p" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.623541 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83510120-26cc-4c80-be70-788fc8d78ba2-config-data\") pod \"heat-api-8ff4d7fb5-kwj5g\" (UID: \"83510120-26cc-4c80-be70-788fc8d78ba2\") " pod="openstack/heat-api-8ff4d7fb5-kwj5g" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.623658 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dzm5\" (UniqueName: \"kubernetes.io/projected/57effba7-488e-4045-8da6-83bf8e4770d8-kube-api-access-8dzm5\") pod \"heat-cfnapi-8655ffb758-nbr8w\" (UID: \"57effba7-488e-4045-8da6-83bf8e4770d8\") " pod="openstack/heat-cfnapi-8655ffb758-nbr8w" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.623705 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c25a77e0-9bbd-4ff2-b53c-ecb4712198b1-config-data\") pod \"heat-engine-6996656b77-zt25p\" (UID: \"c25a77e0-9bbd-4ff2-b53c-ecb4712198b1\") " pod="openstack/heat-engine-6996656b77-zt25p" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.623784 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6996656b77-zt25p"] Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.623835 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83510120-26cc-4c80-be70-788fc8d78ba2-combined-ca-bundle\") pod \"heat-api-8ff4d7fb5-kwj5g\" (UID: \"83510120-26cc-4c80-be70-788fc8d78ba2\") " pod="openstack/heat-api-8ff4d7fb5-kwj5g" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.623873 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57effba7-488e-4045-8da6-83bf8e4770d8-combined-ca-bundle\") pod \"heat-cfnapi-8655ffb758-nbr8w\" (UID: \"57effba7-488e-4045-8da6-83bf8e4770d8\") " pod="openstack/heat-cfnapi-8655ffb758-nbr8w" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.623904 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84qk6\" (UniqueName: \"kubernetes.io/projected/83510120-26cc-4c80-be70-788fc8d78ba2-kube-api-access-84qk6\") pod \"heat-api-8ff4d7fb5-kwj5g\" (UID: \"83510120-26cc-4c80-be70-788fc8d78ba2\") " pod="openstack/heat-api-8ff4d7fb5-kwj5g" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.624002 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm9x7\" (UniqueName: \"kubernetes.io/projected/c25a77e0-9bbd-4ff2-b53c-ecb4712198b1-kube-api-access-vm9x7\") pod \"heat-engine-6996656b77-zt25p\" (UID: \"c25a77e0-9bbd-4ff2-b53c-ecb4712198b1\") " pod="openstack/heat-engine-6996656b77-zt25p" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.624198 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83510120-26cc-4c80-be70-788fc8d78ba2-config-data-custom\") pod \"heat-api-8ff4d7fb5-kwj5g\" (UID: \"83510120-26cc-4c80-be70-788fc8d78ba2\") " pod="openstack/heat-api-8ff4d7fb5-kwj5g" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.726309 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83510120-26cc-4c80-be70-788fc8d78ba2-combined-ca-bundle\") pod \"heat-api-8ff4d7fb5-kwj5g\" (UID: \"83510120-26cc-4c80-be70-788fc8d78ba2\") " pod="openstack/heat-api-8ff4d7fb5-kwj5g" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.726370 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57effba7-488e-4045-8da6-83bf8e4770d8-combined-ca-bundle\") pod \"heat-cfnapi-8655ffb758-nbr8w\" (UID: \"57effba7-488e-4045-8da6-83bf8e4770d8\") " pod="openstack/heat-cfnapi-8655ffb758-nbr8w" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.726409 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84qk6\" (UniqueName: \"kubernetes.io/projected/83510120-26cc-4c80-be70-788fc8d78ba2-kube-api-access-84qk6\") pod \"heat-api-8ff4d7fb5-kwj5g\" (UID: \"83510120-26cc-4c80-be70-788fc8d78ba2\") " pod="openstack/heat-api-8ff4d7fb5-kwj5g" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.726479 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm9x7\" (UniqueName: \"kubernetes.io/projected/c25a77e0-9bbd-4ff2-b53c-ecb4712198b1-kube-api-access-vm9x7\") pod \"heat-engine-6996656b77-zt25p\" (UID: \"c25a77e0-9bbd-4ff2-b53c-ecb4712198b1\") " pod="openstack/heat-engine-6996656b77-zt25p" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 
12:29:39.726605 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83510120-26cc-4c80-be70-788fc8d78ba2-config-data-custom\") pod \"heat-api-8ff4d7fb5-kwj5g\" (UID: \"83510120-26cc-4c80-be70-788fc8d78ba2\") " pod="openstack/heat-api-8ff4d7fb5-kwj5g" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.726686 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57effba7-488e-4045-8da6-83bf8e4770d8-config-data\") pod \"heat-cfnapi-8655ffb758-nbr8w\" (UID: \"57effba7-488e-4045-8da6-83bf8e4770d8\") " pod="openstack/heat-cfnapi-8655ffb758-nbr8w" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.726732 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57effba7-488e-4045-8da6-83bf8e4770d8-config-data-custom\") pod \"heat-cfnapi-8655ffb758-nbr8w\" (UID: \"57effba7-488e-4045-8da6-83bf8e4770d8\") " pod="openstack/heat-cfnapi-8655ffb758-nbr8w" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.726758 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c25a77e0-9bbd-4ff2-b53c-ecb4712198b1-config-data-custom\") pod \"heat-engine-6996656b77-zt25p\" (UID: \"c25a77e0-9bbd-4ff2-b53c-ecb4712198b1\") " pod="openstack/heat-engine-6996656b77-zt25p" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.726829 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c25a77e0-9bbd-4ff2-b53c-ecb4712198b1-combined-ca-bundle\") pod \"heat-engine-6996656b77-zt25p\" (UID: \"c25a77e0-9bbd-4ff2-b53c-ecb4712198b1\") " pod="openstack/heat-engine-6996656b77-zt25p" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.726900 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83510120-26cc-4c80-be70-788fc8d78ba2-config-data\") pod \"heat-api-8ff4d7fb5-kwj5g\" (UID: \"83510120-26cc-4c80-be70-788fc8d78ba2\") " pod="openstack/heat-api-8ff4d7fb5-kwj5g" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.726975 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dzm5\" (UniqueName: \"kubernetes.io/projected/57effba7-488e-4045-8da6-83bf8e4770d8-kube-api-access-8dzm5\") pod \"heat-cfnapi-8655ffb758-nbr8w\" (UID: \"57effba7-488e-4045-8da6-83bf8e4770d8\") " pod="openstack/heat-cfnapi-8655ffb758-nbr8w" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.727006 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c25a77e0-9bbd-4ff2-b53c-ecb4712198b1-config-data\") pod \"heat-engine-6996656b77-zt25p\" (UID: \"c25a77e0-9bbd-4ff2-b53c-ecb4712198b1\") " pod="openstack/heat-engine-6996656b77-zt25p" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.748320 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c25a77e0-9bbd-4ff2-b53c-ecb4712198b1-config-data\") pod \"heat-engine-6996656b77-zt25p\" (UID: \"c25a77e0-9bbd-4ff2-b53c-ecb4712198b1\") " pod="openstack/heat-engine-6996656b77-zt25p" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.751545 4970 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83510120-26cc-4c80-be70-788fc8d78ba2-config-data\") pod \"heat-api-8ff4d7fb5-kwj5g\" (UID: \"83510120-26cc-4c80-be70-788fc8d78ba2\") " pod="openstack/heat-api-8ff4d7fb5-kwj5g" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.753754 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57effba7-488e-4045-8da6-83bf8e4770d8-config-data\") pod \"heat-cfnapi-8655ffb758-nbr8w\" (UID: \"57effba7-488e-4045-8da6-83bf8e4770d8\") " pod="openstack/heat-cfnapi-8655ffb758-nbr8w" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.755696 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83510120-26cc-4c80-be70-788fc8d78ba2-combined-ca-bundle\") pod \"heat-api-8ff4d7fb5-kwj5g\" (UID: \"83510120-26cc-4c80-be70-788fc8d78ba2\") " pod="openstack/heat-api-8ff4d7fb5-kwj5g" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.756885 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84qk6\" (UniqueName: \"kubernetes.io/projected/83510120-26cc-4c80-be70-788fc8d78ba2-kube-api-access-84qk6\") pod \"heat-api-8ff4d7fb5-kwj5g\" (UID: \"83510120-26cc-4c80-be70-788fc8d78ba2\") " pod="openstack/heat-api-8ff4d7fb5-kwj5g" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.774211 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c25a77e0-9bbd-4ff2-b53c-ecb4712198b1-combined-ca-bundle\") pod \"heat-engine-6996656b77-zt25p\" (UID: \"c25a77e0-9bbd-4ff2-b53c-ecb4712198b1\") " pod="openstack/heat-engine-6996656b77-zt25p" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.774632 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57effba7-488e-4045-8da6-83bf8e4770d8-combined-ca-bundle\") pod \"heat-cfnapi-8655ffb758-nbr8w\" (UID: \"57effba7-488e-4045-8da6-83bf8e4770d8\") " pod="openstack/heat-cfnapi-8655ffb758-nbr8w" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.775423 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm9x7\" (UniqueName: \"kubernetes.io/projected/c25a77e0-9bbd-4ff2-b53c-ecb4712198b1-kube-api-access-vm9x7\") pod \"heat-engine-6996656b77-zt25p\" (UID: \"c25a77e0-9bbd-4ff2-b53c-ecb4712198b1\") " pod="openstack/heat-engine-6996656b77-zt25p" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.775451 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c25a77e0-9bbd-4ff2-b53c-ecb4712198b1-config-data-custom\") pod \"heat-engine-6996656b77-zt25p\" (UID: \"c25a77e0-9bbd-4ff2-b53c-ecb4712198b1\") " pod="openstack/heat-engine-6996656b77-zt25p" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.775696 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83510120-26cc-4c80-be70-788fc8d78ba2-config-data-custom\") pod \"heat-api-8ff4d7fb5-kwj5g\" (UID: \"83510120-26cc-4c80-be70-788fc8d78ba2\") " pod="openstack/heat-api-8ff4d7fb5-kwj5g" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.779081 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/57effba7-488e-4045-8da6-83bf8e4770d8-config-data-custom\") pod \"heat-cfnapi-8655ffb758-nbr8w\" (UID: \"57effba7-488e-4045-8da6-83bf8e4770d8\") " pod="openstack/heat-cfnapi-8655ffb758-nbr8w" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.800455 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dzm5\" (UniqueName: \"kubernetes.io/projected/57effba7-488e-4045-8da6-83bf8e4770d8-kube-api-access-8dzm5\") pod \"heat-cfnapi-8655ffb758-nbr8w\" (UID: \"57effba7-488e-4045-8da6-83bf8e4770d8\") " pod="openstack/heat-cfnapi-8655ffb758-nbr8w" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.860138 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-8ff4d7fb5-kwj5g" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.872157 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-8655ffb758-nbr8w" Dec 09 12:29:39 crc kubenswrapper[4970]: I1209 12:29:39.886170 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6996656b77-zt25p" Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.372041 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6b9b88c557-ds6f2"] Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.421600 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5ccf77f85c-b5zqk"] Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.438469 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6f965965cd-jt4tk"] Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.439943 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6f965965cd-jt4tk" Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.446662 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.446933 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.474237 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6f965965cd-jt4tk"] Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.566489 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b114785-502f-476f-a0d5-8ba13694acbc-config-data\") pod \"heat-api-6f965965cd-jt4tk\" (UID: \"5b114785-502f-476f-a0d5-8ba13694acbc\") " pod="openstack/heat-api-6f965965cd-jt4tk" Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.566904 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5b114785-502f-476f-a0d5-8ba13694acbc-config-data-custom\") pod \"heat-api-6f965965cd-jt4tk\" (UID: \"5b114785-502f-476f-a0d5-8ba13694acbc\") " pod="openstack/heat-api-6f965965cd-jt4tk" Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.566972 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsknm\" (UniqueName: \"kubernetes.io/projected/5b114785-502f-476f-a0d5-8ba13694acbc-kube-api-access-qsknm\") pod \"heat-api-6f965965cd-jt4tk\" (UID: \"5b114785-502f-476f-a0d5-8ba13694acbc\") " pod="openstack/heat-api-6f965965cd-jt4tk" Dec 09 12:29:44 
crc kubenswrapper[4970]: I1209 12:29:44.566993 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b114785-502f-476f-a0d5-8ba13694acbc-public-tls-certs\") pod \"heat-api-6f965965cd-jt4tk\" (UID: \"5b114785-502f-476f-a0d5-8ba13694acbc\") " pod="openstack/heat-api-6f965965cd-jt4tk" Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.567058 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b114785-502f-476f-a0d5-8ba13694acbc-combined-ca-bundle\") pod \"heat-api-6f965965cd-jt4tk\" (UID: \"5b114785-502f-476f-a0d5-8ba13694acbc\") " pod="openstack/heat-api-6f965965cd-jt4tk" Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.567094 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b114785-502f-476f-a0d5-8ba13694acbc-internal-tls-certs\") pod \"heat-api-6f965965cd-jt4tk\" (UID: \"5b114785-502f-476f-a0d5-8ba13694acbc\") " pod="openstack/heat-api-6f965965cd-jt4tk" Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.646919 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-657b4c4594-7x65f"] Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.652495 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-657b4c4594-7x65f" Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.657665 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.657914 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.669696 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5b114785-502f-476f-a0d5-8ba13694acbc-config-data-custom\") pod \"heat-api-6f965965cd-jt4tk\" (UID: \"5b114785-502f-476f-a0d5-8ba13694acbc\") " pod="openstack/heat-api-6f965965cd-jt4tk" Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.669828 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsknm\" (UniqueName: \"kubernetes.io/projected/5b114785-502f-476f-a0d5-8ba13694acbc-kube-api-access-qsknm\") pod \"heat-api-6f965965cd-jt4tk\" (UID: \"5b114785-502f-476f-a0d5-8ba13694acbc\") " pod="openstack/heat-api-6f965965cd-jt4tk" Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.669931 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b114785-502f-476f-a0d5-8ba13694acbc-public-tls-certs\") pod \"heat-api-6f965965cd-jt4tk\" (UID: \"5b114785-502f-476f-a0d5-8ba13694acbc\") " pod="openstack/heat-api-6f965965cd-jt4tk" Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.669958 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b114785-502f-476f-a0d5-8ba13694acbc-combined-ca-bundle\") pod \"heat-api-6f965965cd-jt4tk\" (UID: \"5b114785-502f-476f-a0d5-8ba13694acbc\") " pod="openstack/heat-api-6f965965cd-jt4tk" Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.670019 4970 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b114785-502f-476f-a0d5-8ba13694acbc-internal-tls-certs\") pod \"heat-api-6f965965cd-jt4tk\" (UID: \"5b114785-502f-476f-a0d5-8ba13694acbc\") " pod="openstack/heat-api-6f965965cd-jt4tk" Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.673745 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b114785-502f-476f-a0d5-8ba13694acbc-config-data\") pod \"heat-api-6f965965cd-jt4tk\" (UID: \"5b114785-502f-476f-a0d5-8ba13694acbc\") " pod="openstack/heat-api-6f965965cd-jt4tk" Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.679148 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5b114785-502f-476f-a0d5-8ba13694acbc-config-data-custom\") pod \"heat-api-6f965965cd-jt4tk\" (UID: \"5b114785-502f-476f-a0d5-8ba13694acbc\") " pod="openstack/heat-api-6f965965cd-jt4tk" Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.679436 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b114785-502f-476f-a0d5-8ba13694acbc-public-tls-certs\") pod \"heat-api-6f965965cd-jt4tk\" (UID: \"5b114785-502f-476f-a0d5-8ba13694acbc\") " pod="openstack/heat-api-6f965965cd-jt4tk" Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.680008 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-657b4c4594-7x65f"] Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.681975 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b114785-502f-476f-a0d5-8ba13694acbc-combined-ca-bundle\") pod \"heat-api-6f965965cd-jt4tk\" (UID: \"5b114785-502f-476f-a0d5-8ba13694acbc\") " pod="openstack/heat-api-6f965965cd-jt4tk" Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.684090 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b114785-502f-476f-a0d5-8ba13694acbc-config-data\") pod \"heat-api-6f965965cd-jt4tk\" (UID: \"5b114785-502f-476f-a0d5-8ba13694acbc\") " pod="openstack/heat-api-6f965965cd-jt4tk" Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.695222 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b114785-502f-476f-a0d5-8ba13694acbc-internal-tls-certs\") pod \"heat-api-6f965965cd-jt4tk\" (UID: \"5b114785-502f-476f-a0d5-8ba13694acbc\") " pod="openstack/heat-api-6f965965cd-jt4tk" Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.697082 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsknm\" (UniqueName: \"kubernetes.io/projected/5b114785-502f-476f-a0d5-8ba13694acbc-kube-api-access-qsknm\") pod \"heat-api-6f965965cd-jt4tk\" (UID: \"5b114785-502f-476f-a0d5-8ba13694acbc\") " pod="openstack/heat-api-6f965965cd-jt4tk" Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.775583 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8051115f-1bf5-4043-b6b0-967d469c0d6a-combined-ca-bundle\") pod \"heat-cfnapi-657b4c4594-7x65f\" (UID: \"8051115f-1bf5-4043-b6b0-967d469c0d6a\") " pod="openstack/heat-cfnapi-657b4c4594-7x65f" Dec 09 12:29:44 crc 
kubenswrapper[4970]: I1209 12:29:44.775632 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8051115f-1bf5-4043-b6b0-967d469c0d6a-internal-tls-certs\") pod \"heat-cfnapi-657b4c4594-7x65f\" (UID: \"8051115f-1bf5-4043-b6b0-967d469c0d6a\") " pod="openstack/heat-cfnapi-657b4c4594-7x65f" Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.775723 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8051115f-1bf5-4043-b6b0-967d469c0d6a-config-data\") pod \"heat-cfnapi-657b4c4594-7x65f\" (UID: \"8051115f-1bf5-4043-b6b0-967d469c0d6a\") " pod="openstack/heat-cfnapi-657b4c4594-7x65f" Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.775774 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8051115f-1bf5-4043-b6b0-967d469c0d6a-config-data-custom\") pod \"heat-cfnapi-657b4c4594-7x65f\" (UID: \"8051115f-1bf5-4043-b6b0-967d469c0d6a\") " pod="openstack/heat-cfnapi-657b4c4594-7x65f" Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.775809 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twxgk\" (UniqueName: \"kubernetes.io/projected/8051115f-1bf5-4043-b6b0-967d469c0d6a-kube-api-access-twxgk\") pod \"heat-cfnapi-657b4c4594-7x65f\" (UID: \"8051115f-1bf5-4043-b6b0-967d469c0d6a\") " pod="openstack/heat-cfnapi-657b4c4594-7x65f" Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.775884 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8051115f-1bf5-4043-b6b0-967d469c0d6a-public-tls-certs\") pod \"heat-cfnapi-657b4c4594-7x65f\" (UID: \"8051115f-1bf5-4043-b6b0-967d469c0d6a\") " pod="openstack/heat-cfnapi-657b4c4594-7x65f" Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.878460 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8051115f-1bf5-4043-b6b0-967d469c0d6a-public-tls-certs\") pod \"heat-cfnapi-657b4c4594-7x65f\" (UID: \"8051115f-1bf5-4043-b6b0-967d469c0d6a\") " pod="openstack/heat-cfnapi-657b4c4594-7x65f" Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.878529 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8051115f-1bf5-4043-b6b0-967d469c0d6a-combined-ca-bundle\") pod \"heat-cfnapi-657b4c4594-7x65f\" (UID: \"8051115f-1bf5-4043-b6b0-967d469c0d6a\") " pod="openstack/heat-cfnapi-657b4c4594-7x65f" Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.878558 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8051115f-1bf5-4043-b6b0-967d469c0d6a-internal-tls-certs\") pod \"heat-cfnapi-657b4c4594-7x65f\" (UID: \"8051115f-1bf5-4043-b6b0-967d469c0d6a\") " pod="openstack/heat-cfnapi-657b4c4594-7x65f" Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.878722 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8051115f-1bf5-4043-b6b0-967d469c0d6a-config-data\") pod \"heat-cfnapi-657b4c4594-7x65f\" (UID: \"8051115f-1bf5-4043-b6b0-967d469c0d6a\") " 
pod="openstack/heat-cfnapi-657b4c4594-7x65f" Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.878849 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8051115f-1bf5-4043-b6b0-967d469c0d6a-config-data-custom\") pod \"heat-cfnapi-657b4c4594-7x65f\" (UID: \"8051115f-1bf5-4043-b6b0-967d469c0d6a\") " pod="openstack/heat-cfnapi-657b4c4594-7x65f" Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.878912 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twxgk\" (UniqueName: \"kubernetes.io/projected/8051115f-1bf5-4043-b6b0-967d469c0d6a-kube-api-access-twxgk\") pod \"heat-cfnapi-657b4c4594-7x65f\" (UID: \"8051115f-1bf5-4043-b6b0-967d469c0d6a\") " pod="openstack/heat-cfnapi-657b4c4594-7x65f" Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.880266 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6f965965cd-jt4tk" Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.886283 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8051115f-1bf5-4043-b6b0-967d469c0d6a-config-data-custom\") pod \"heat-cfnapi-657b4c4594-7x65f\" (UID: \"8051115f-1bf5-4043-b6b0-967d469c0d6a\") " pod="openstack/heat-cfnapi-657b4c4594-7x65f" Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.887200 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8051115f-1bf5-4043-b6b0-967d469c0d6a-internal-tls-certs\") pod \"heat-cfnapi-657b4c4594-7x65f\" (UID: \"8051115f-1bf5-4043-b6b0-967d469c0d6a\") " pod="openstack/heat-cfnapi-657b4c4594-7x65f" Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.888164 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8051115f-1bf5-4043-b6b0-967d469c0d6a-public-tls-certs\") pod \"heat-cfnapi-657b4c4594-7x65f\" (UID: \"8051115f-1bf5-4043-b6b0-967d469c0d6a\") " pod="openstack/heat-cfnapi-657b4c4594-7x65f" Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.888570 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8051115f-1bf5-4043-b6b0-967d469c0d6a-config-data\") pod \"heat-cfnapi-657b4c4594-7x65f\" (UID: \"8051115f-1bf5-4043-b6b0-967d469c0d6a\") " pod="openstack/heat-cfnapi-657b4c4594-7x65f" Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.891468 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8051115f-1bf5-4043-b6b0-967d469c0d6a-combined-ca-bundle\") pod \"heat-cfnapi-657b4c4594-7x65f\" (UID: \"8051115f-1bf5-4043-b6b0-967d469c0d6a\") " pod="openstack/heat-cfnapi-657b4c4594-7x65f" Dec 09 12:29:44 crc kubenswrapper[4970]: I1209 12:29:44.901493 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twxgk\" (UniqueName: \"kubernetes.io/projected/8051115f-1bf5-4043-b6b0-967d469c0d6a-kube-api-access-twxgk\") pod \"heat-cfnapi-657b4c4594-7x65f\" (UID: \"8051115f-1bf5-4043-b6b0-967d469c0d6a\") " pod="openstack/heat-cfnapi-657b4c4594-7x65f" Dec 09 12:29:45 crc kubenswrapper[4970]: I1209 12:29:45.109563 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-657b4c4594-7x65f" Dec 09 12:29:47 crc kubenswrapper[4970]: E1209 12:29:47.477827 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Dec 09 12:29:47 crc kubenswrapper[4970]: E1209 12:29:47.478336 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n75hfbh8ch56ch64dh568h679h6dh559h648h5c8hbchdh545h568h66dh678hcch578hbbh575hdfhd8h666hd8hb5hfbh57bh5fdh5b7h66ch5f9q,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4pv84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(26afaabd-9309-47db-a9fd-282425d0c44e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 12:29:47 crc kubenswrapper[4970]: E1209 12:29:47.479602 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="26afaabd-9309-47db-a9fd-282425d0c44e" Dec 09 12:29:48 crc kubenswrapper[4970]: I1209 12:29:48.150306 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:29:48 crc kubenswrapper[4970]: I1209 12:29:48.210444 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:29:48 crc kubenswrapper[4970]: I1209 12:29:48.211484 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c2a60c14-d830-48c0-ab9e-d66e1ad768b0","Type":"ContainerDied","Data":"8deef2b1e60d5a2badb2d9ec3b1f345580b217c62026728ec02a5f70835d88c4"} Dec 09 12:29:48 crc kubenswrapper[4970]: I1209 12:29:48.211539 4970 scope.go:117] "RemoveContainer" containerID="da70dbe51a82df07c19619303502d069f21293c6e8895924a98327bac64b14e9" Dec 09 12:29:48 crc kubenswrapper[4970]: E1209 12:29:48.269680 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="26afaabd-9309-47db-a9fd-282425d0c44e" Dec 09 12:29:48 crc kubenswrapper[4970]: I1209 12:29:48.300787 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-sg-core-conf-yaml\") pod \"c2a60c14-d830-48c0-ab9e-d66e1ad768b0\" (UID: \"c2a60c14-d830-48c0-ab9e-d66e1ad768b0\") " Dec 09 12:29:48 crc kubenswrapper[4970]: I1209 12:29:48.300944 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-config-data\") pod \"c2a60c14-d830-48c0-ab9e-d66e1ad768b0\" (UID: \"c2a60c14-d830-48c0-ab9e-d66e1ad768b0\") " Dec 09 12:29:48 crc kubenswrapper[4970]: I1209 12:29:48.302319 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-log-httpd\") pod \"c2a60c14-d830-48c0-ab9e-d66e1ad768b0\" (UID: \"c2a60c14-d830-48c0-ab9e-d66e1ad768b0\") " Dec 09 12:29:48 crc kubenswrapper[4970]: I1209 12:29:48.302806 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zglf\" (UniqueName: \"kubernetes.io/projected/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-kube-api-access-9zglf\") pod \"c2a60c14-d830-48c0-ab9e-d66e1ad768b0\" (UID: \"c2a60c14-d830-48c0-ab9e-d66e1ad768b0\") " Dec 09 12:29:48 crc kubenswrapper[4970]: I1209 12:29:48.302870 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-run-httpd\") pod \"c2a60c14-d830-48c0-ab9e-d66e1ad768b0\" (UID: \"c2a60c14-d830-48c0-ab9e-d66e1ad768b0\") " Dec 09 12:29:48 crc kubenswrapper[4970]: I1209 12:29:48.302934 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-scripts\") pod \"c2a60c14-d830-48c0-ab9e-d66e1ad768b0\" (UID: \"c2a60c14-d830-48c0-ab9e-d66e1ad768b0\") " Dec 09 12:29:48 crc kubenswrapper[4970]: I1209 12:29:48.303013 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-combined-ca-bundle\") pod \"c2a60c14-d830-48c0-ab9e-d66e1ad768b0\" (UID: \"c2a60c14-d830-48c0-ab9e-d66e1ad768b0\") " Dec 09 
12:29:48 crc kubenswrapper[4970]: I1209 12:29:48.307774 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c2a60c14-d830-48c0-ab9e-d66e1ad768b0" (UID: "c2a60c14-d830-48c0-ab9e-d66e1ad768b0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:29:48 crc kubenswrapper[4970]: I1209 12:29:48.309462 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c2a60c14-d830-48c0-ab9e-d66e1ad768b0" (UID: "c2a60c14-d830-48c0-ab9e-d66e1ad768b0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:29:48 crc kubenswrapper[4970]: I1209 12:29:48.321529 4970 scope.go:117] "RemoveContainer" containerID="686b2bcf0b8fb6b546d34bb72b7518713b07abea4fe73d95e3f92a7d6fc59c38" Dec 09 12:29:48 crc kubenswrapper[4970]: I1209 12:29:48.405073 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-scripts" (OuterVolumeSpecName: "scripts") pod "c2a60c14-d830-48c0-ab9e-d66e1ad768b0" (UID: "c2a60c14-d830-48c0-ab9e-d66e1ad768b0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:48 crc kubenswrapper[4970]: I1209 12:29:48.406695 4970 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:48 crc kubenswrapper[4970]: I1209 12:29:48.406809 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:48 crc kubenswrapper[4970]: I1209 12:29:48.406982 4970 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:48 crc kubenswrapper[4970]: I1209 12:29:48.408468 4970 scope.go:117] "RemoveContainer" containerID="8192987060e21cf2ff856a260680cd25061367624c144b36669013a836276c92" Dec 09 12:29:48 crc kubenswrapper[4970]: I1209 12:29:48.409838 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-kube-api-access-9zglf" (OuterVolumeSpecName: "kube-api-access-9zglf") pod "c2a60c14-d830-48c0-ab9e-d66e1ad768b0" (UID: "c2a60c14-d830-48c0-ab9e-d66e1ad768b0"). InnerVolumeSpecName "kube-api-access-9zglf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:29:48 crc kubenswrapper[4970]: I1209 12:29:48.461045 4970 scope.go:117] "RemoveContainer" containerID="981ff28711a9d13ecdcb4b17d9c5aea021cb200bde0bdde4fddc84731851afb3" Dec 09 12:29:48 crc kubenswrapper[4970]: I1209 12:29:48.512474 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zglf\" (UniqueName: \"kubernetes.io/projected/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-kube-api-access-9zglf\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:48 crc kubenswrapper[4970]: I1209 12:29:48.533452 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c2a60c14-d830-48c0-ab9e-d66e1ad768b0" (UID: "c2a60c14-d830-48c0-ab9e-d66e1ad768b0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:48 crc kubenswrapper[4970]: I1209 12:29:48.627932 4970 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:48 crc kubenswrapper[4970]: I1209 12:29:48.711706 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c2a60c14-d830-48c0-ab9e-d66e1ad768b0" (UID: "c2a60c14-d830-48c0-ab9e-d66e1ad768b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:48 crc kubenswrapper[4970]: I1209 12:29:48.715504 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-config-data" (OuterVolumeSpecName: "config-data") pod "c2a60c14-d830-48c0-ab9e-d66e1ad768b0" (UID: "c2a60c14-d830-48c0-ab9e-d66e1ad768b0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:48 crc kubenswrapper[4970]: I1209 12:29:48.730804 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:48 crc kubenswrapper[4970]: I1209 12:29:48.730844 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2a60c14-d830-48c0-ab9e-d66e1ad768b0-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:48 crc kubenswrapper[4970]: I1209 12:29:48.882847 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6996656b77-zt25p"] Dec 09 12:29:48 crc kubenswrapper[4970]: I1209 12:29:48.929317 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-8ff4d7fb5-kwj5g"] Dec 09 12:29:48 crc kubenswrapper[4970]: I1209 12:29:48.950307 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:29:48 crc kubenswrapper[4970]: I1209 12:29:48.998680 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-657b4c4594-7x65f"] Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.017478 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.037362 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:29:49 crc kubenswrapper[4970]: E1209 12:29:49.037976 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2a60c14-d830-48c0-ab9e-d66e1ad768b0" containerName="proxy-httpd" Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.037994 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2a60c14-d830-48c0-ab9e-d66e1ad768b0" containerName="proxy-httpd" Dec 09 12:29:49 crc kubenswrapper[4970]: E1209 12:29:49.038019 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2a60c14-d830-48c0-ab9e-d66e1ad768b0" containerName="ceilometer-central-agent" Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.038028 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2a60c14-d830-48c0-ab9e-d66e1ad768b0" containerName="ceilometer-central-agent" Dec 09 12:29:49 crc kubenswrapper[4970]: E1209 12:29:49.038049 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2a60c14-d830-48c0-ab9e-d66e1ad768b0" containerName="ceilometer-notification-agent" Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.038056 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2a60c14-d830-48c0-ab9e-d66e1ad768b0" containerName="ceilometer-notification-agent" Dec 09 12:29:49 crc kubenswrapper[4970]: E1209 12:29:49.038075 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2a60c14-d830-48c0-ab9e-d66e1ad768b0" containerName="sg-core" Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.038082 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2a60c14-d830-48c0-ab9e-d66e1ad768b0" containerName="sg-core" Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.038364 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2a60c14-d830-48c0-ab9e-d66e1ad768b0" containerName="sg-core" Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.038391 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2a60c14-d830-48c0-ab9e-d66e1ad768b0" containerName="ceilometer-notification-agent" Dec 09 12:29:49 
crc kubenswrapper[4970]: I1209 12:29:49.038407 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2a60c14-d830-48c0-ab9e-d66e1ad768b0" containerName="proxy-httpd" Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.038435 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2a60c14-d830-48c0-ab9e-d66e1ad768b0" containerName="ceilometer-central-agent" Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.041278 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.045813 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.052217 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.102322 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.140927 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6f965965cd-jt4tk"] Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.177432 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-8655ffb758-nbr8w"] Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.179337 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7fff75d8-7b4d-47e2-893f-a401f568394e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7fff75d8-7b4d-47e2-893f-a401f568394e\") " pod="openstack/ceilometer-0" Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.179398 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fff75d8-7b4d-47e2-893f-a401f568394e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7fff75d8-7b4d-47e2-893f-a401f568394e\") " pod="openstack/ceilometer-0" Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.179503 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fff75d8-7b4d-47e2-893f-a401f568394e-log-httpd\") pod \"ceilometer-0\" (UID: \"7fff75d8-7b4d-47e2-893f-a401f568394e\") " pod="openstack/ceilometer-0" Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.179591 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fff75d8-7b4d-47e2-893f-a401f568394e-scripts\") pod \"ceilometer-0\" (UID: \"7fff75d8-7b4d-47e2-893f-a401f568394e\") " pod="openstack/ceilometer-0" Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.179675 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fff75d8-7b4d-47e2-893f-a401f568394e-run-httpd\") pod \"ceilometer-0\" (UID: \"7fff75d8-7b4d-47e2-893f-a401f568394e\") " pod="openstack/ceilometer-0" Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.179804 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fff75d8-7b4d-47e2-893f-a401f568394e-config-data\") pod \"ceilometer-0\" (UID: \"7fff75d8-7b4d-47e2-893f-a401f568394e\") " 
pod="openstack/ceilometer-0" Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.179854 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j977\" (UniqueName: \"kubernetes.io/projected/7fff75d8-7b4d-47e2-893f-a401f568394e-kube-api-access-5j977\") pod \"ceilometer-0\" (UID: \"7fff75d8-7b4d-47e2-893f-a401f568394e\") " pod="openstack/ceilometer-0" Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.184233 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-85bf8b6f7-tq5wr"] Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.275416 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-5ctr9" event={"ID":"e36a2df8-01ba-4868-8903-8b753488ea78","Type":"ContainerStarted","Data":"258cea5cfa792915c382b3faa727ea7d7c5cc9aab9fcf2a628fd5613341be154"} Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.275500 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7756b9d78c-5ctr9" Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.282355 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fff75d8-7b4d-47e2-893f-a401f568394e-log-httpd\") pod \"ceilometer-0\" (UID: \"7fff75d8-7b4d-47e2-893f-a401f568394e\") " pod="openstack/ceilometer-0" Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.282740 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fff75d8-7b4d-47e2-893f-a401f568394e-scripts\") pod \"ceilometer-0\" (UID: \"7fff75d8-7b4d-47e2-893f-a401f568394e\") " pod="openstack/ceilometer-0" Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.282782 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fff75d8-7b4d-47e2-893f-a401f568394e-run-httpd\") pod \"ceilometer-0\" (UID: \"7fff75d8-7b4d-47e2-893f-a401f568394e\") " pod="openstack/ceilometer-0" Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.282861 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fff75d8-7b4d-47e2-893f-a401f568394e-config-data\") pod \"ceilometer-0\" (UID: \"7fff75d8-7b4d-47e2-893f-a401f568394e\") " pod="openstack/ceilometer-0" Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.282888 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j977\" (UniqueName: \"kubernetes.io/projected/7fff75d8-7b4d-47e2-893f-a401f568394e-kube-api-access-5j977\") pod \"ceilometer-0\" (UID: \"7fff75d8-7b4d-47e2-893f-a401f568394e\") " pod="openstack/ceilometer-0" Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.282903 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fff75d8-7b4d-47e2-893f-a401f568394e-log-httpd\") pod \"ceilometer-0\" (UID: \"7fff75d8-7b4d-47e2-893f-a401f568394e\") " pod="openstack/ceilometer-0" Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.282962 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7fff75d8-7b4d-47e2-893f-a401f568394e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7fff75d8-7b4d-47e2-893f-a401f568394e\") " pod="openstack/ceilometer-0" Dec 09 12:29:49 crc 
kubenswrapper[4970]: I1209 12:29:49.282986 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fff75d8-7b4d-47e2-893f-a401f568394e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7fff75d8-7b4d-47e2-893f-a401f568394e\") " pod="openstack/ceilometer-0" Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.283209 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fff75d8-7b4d-47e2-893f-a401f568394e-run-httpd\") pod \"ceilometer-0\" (UID: \"7fff75d8-7b4d-47e2-893f-a401f568394e\") " pod="openstack/ceilometer-0" Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.282442 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6996656b77-zt25p" event={"ID":"c25a77e0-9bbd-4ff2-b53c-ecb4712198b1","Type":"ContainerStarted","Data":"3b51f8041984f65141efabaa916a9839e18917405824892f79e4ad213b96ad09"} Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.291087 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8ff4d7fb5-kwj5g" event={"ID":"83510120-26cc-4c80-be70-788fc8d78ba2","Type":"ContainerStarted","Data":"1934901639e76c7b4339d909d5dca9a97f32917cabc57cc24aa40bc4d483a072"} Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.292114 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fff75d8-7b4d-47e2-893f-a401f568394e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7fff75d8-7b4d-47e2-893f-a401f568394e\") " pod="openstack/ceilometer-0" Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.294353 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fff75d8-7b4d-47e2-893f-a401f568394e-scripts\") pod \"ceilometer-0\" (UID: \"7fff75d8-7b4d-47e2-893f-a401f568394e\") " pod="openstack/ceilometer-0" Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.296667 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7756b9d78c-5ctr9" podStartSLOduration=17.296652853 podStartE2EDuration="17.296652853s" podCreationTimestamp="2025-12-09 12:29:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:29:49.293595892 +0000 UTC m=+1401.854076943" watchObservedRunningTime="2025-12-09 12:29:49.296652853 +0000 UTC m=+1401.857133904" Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.297788 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6b9b88c557-ds6f2" event={"ID":"9f9344ba-cd60-48a8-bbe9-304b8fef2cb0","Type":"ContainerStarted","Data":"5752b985f9df2266d8ab3d863f3acca11bb1d4ab866de732658b0fb6062a2547"} Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.297953 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-6b9b88c557-ds6f2" podUID="9f9344ba-cd60-48a8-bbe9-304b8fef2cb0" containerName="heat-api" containerID="cri-o://5752b985f9df2266d8ab3d863f3acca11bb1d4ab866de732658b0fb6062a2547" gracePeriod=60 Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.298046 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6b9b88c557-ds6f2" Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.305981 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7fff75d8-7b4d-47e2-893f-a401f568394e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7fff75d8-7b4d-47e2-893f-a401f568394e\") " pod="openstack/ceilometer-0" Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.306728 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fff75d8-7b4d-47e2-893f-a401f568394e-config-data\") pod \"ceilometer-0\" (UID: \"7fff75d8-7b4d-47e2-893f-a401f568394e\") " pod="openstack/ceilometer-0" Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.309863 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j977\" (UniqueName: \"kubernetes.io/projected/7fff75d8-7b4d-47e2-893f-a401f568394e-kube-api-access-5j977\") pod \"ceilometer-0\" (UID: \"7fff75d8-7b4d-47e2-893f-a401f568394e\") " pod="openstack/ceilometer-0" Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.318950 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5ccf77f85c-b5zqk" event={"ID":"d4e7db93-1016-47dd-a978-bef88b6592ae","Type":"ContainerStarted","Data":"ffa78d3e5b3269980effefd040bb8ce6ec97ff6ab7ae56cb49a69d2e4b43ea63"} Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.319079 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-5ccf77f85c-b5zqk" podUID="d4e7db93-1016-47dd-a978-bef88b6592ae" containerName="heat-cfnapi" containerID="cri-o://ffa78d3e5b3269980effefd040bb8ce6ec97ff6ab7ae56cb49a69d2e4b43ea63" gracePeriod=60 Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.319147 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-5ccf77f85c-b5zqk" Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.344851 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6b9b88c557-ds6f2" podStartSLOduration=3.793327753 podStartE2EDuration="17.344831631s" podCreationTimestamp="2025-12-09 12:29:32 +0000 UTC" firstStartedPulling="2025-12-09 12:29:33.961220549 +0000 UTC m=+1386.521701600" lastFinishedPulling="2025-12-09 12:29:47.512724427 +0000 UTC m=+1400.073205478" observedRunningTime="2025-12-09 12:29:49.33086637 +0000 UTC m=+1401.891347421" watchObservedRunningTime="2025-12-09 12:29:49.344831631 +0000 UTC m=+1401.905312682" Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.350446 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6f965965cd-jt4tk" event={"ID":"5b114785-502f-476f-a0d5-8ba13694acbc","Type":"ContainerStarted","Data":"e66139f0d007870a1e892956d964932399825a30904cbd0be989e1fd0b6e0f4e"} Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.362319 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-657b4c4594-7x65f" event={"ID":"8051115f-1bf5-4043-b6b0-967d469c0d6a","Type":"ContainerStarted","Data":"aec70d874fc7fa9630a82e3841328c47173a640d1d6400e72e9eb2a47fbadd9e"} Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.375584 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:29:49 crc kubenswrapper[4970]: I1209 12:29:49.852449 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2a60c14-d830-48c0-ab9e-d66e1ad768b0" path="/var/lib/kubelet/pods/c2a60c14-d830-48c0-ab9e-d66e1ad768b0/volumes" Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.032513 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-5ccf77f85c-b5zqk" podStartSLOduration=4.472400505 podStartE2EDuration="18.0324713s" podCreationTimestamp="2025-12-09 12:29:32 +0000 UTC" firstStartedPulling="2025-12-09 12:29:34.04723075 +0000 UTC m=+1386.607711801" lastFinishedPulling="2025-12-09 12:29:47.607301545 +0000 UTC m=+1400.167782596" observedRunningTime="2025-12-09 12:29:49.354488427 +0000 UTC m=+1401.914969478" watchObservedRunningTime="2025-12-09 12:29:50.0324713 +0000 UTC m=+1402.592952351" Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.041600 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:29:50 crc kubenswrapper[4970]: W1209 12:29:50.054677 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fff75d8_7b4d_47e2_893f_a401f568394e.slice/crio-a1932d5962877aac65333e021fb7f492794c778ede4dacaefc477cef5b271fd4 WatchSource:0}: Error finding container a1932d5962877aac65333e021fb7f492794c778ede4dacaefc477cef5b271fd4: Status 404 returned error can't find the container with id a1932d5962877aac65333e021fb7f492794c778ede4dacaefc477cef5b271fd4 Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.234661 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5ccf77f85c-b5zqk" Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.362592 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4e7db93-1016-47dd-a978-bef88b6592ae-config-data-custom\") pod \"d4e7db93-1016-47dd-a978-bef88b6592ae\" (UID: \"d4e7db93-1016-47dd-a978-bef88b6592ae\") " Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.362952 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzswx\" (UniqueName: \"kubernetes.io/projected/d4e7db93-1016-47dd-a978-bef88b6592ae-kube-api-access-pzswx\") pod \"d4e7db93-1016-47dd-a978-bef88b6592ae\" (UID: \"d4e7db93-1016-47dd-a978-bef88b6592ae\") " Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.363155 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4e7db93-1016-47dd-a978-bef88b6592ae-config-data\") pod \"d4e7db93-1016-47dd-a978-bef88b6592ae\" (UID: \"d4e7db93-1016-47dd-a978-bef88b6592ae\") " Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.363343 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4e7db93-1016-47dd-a978-bef88b6592ae-combined-ca-bundle\") pod \"d4e7db93-1016-47dd-a978-bef88b6592ae\" (UID: \"d4e7db93-1016-47dd-a978-bef88b6592ae\") " Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.372561 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6b9b88c557-ds6f2" Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.381496 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4e7db93-1016-47dd-a978-bef88b6592ae-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d4e7db93-1016-47dd-a978-bef88b6592ae" (UID: "d4e7db93-1016-47dd-a978-bef88b6592ae"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.381675 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4e7db93-1016-47dd-a978-bef88b6592ae-kube-api-access-pzswx" (OuterVolumeSpecName: "kube-api-access-pzswx") pod "d4e7db93-1016-47dd-a978-bef88b6592ae" (UID: "d4e7db93-1016-47dd-a978-bef88b6592ae"). InnerVolumeSpecName "kube-api-access-pzswx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.417851 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-85bf8b6f7-tq5wr" event={"ID":"ac21407c-a381-4cbb-b26e-9556d92ae621","Type":"ContainerStarted","Data":"e9b27bdcced72cab52ea144a30e9689dca41c031c419b8fa31a99b525ed804ca"} Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.417914 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-85bf8b6f7-tq5wr" Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.417927 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-85bf8b6f7-tq5wr" event={"ID":"ac21407c-a381-4cbb-b26e-9556d92ae621","Type":"ContainerStarted","Data":"459f897b681f69b43c20688215b4c0226949ca0dc9934f1f4131e957cb6957b5"} Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.417939 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-85bf8b6f7-tq5wr" event={"ID":"ac21407c-a381-4cbb-b26e-9556d92ae621","Type":"ContainerStarted","Data":"1937afce8146cbb277d9c2b865431851f353270b5922a21c52a523192dba36df"} Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.417966 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-85bf8b6f7-tq5wr" Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.423601 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7fff75d8-7b4d-47e2-893f-a401f568394e","Type":"ContainerStarted","Data":"a1932d5962877aac65333e021fb7f492794c778ede4dacaefc477cef5b271fd4"} Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.436632 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-8655ffb758-nbr8w" event={"ID":"57effba7-488e-4045-8da6-83bf8e4770d8","Type":"ContainerStarted","Data":"09a263206eb06787743c53007205aded7ccb5f029df44b25050e1d79fba08a9a"} Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.436677 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-8655ffb758-nbr8w" event={"ID":"57effba7-488e-4045-8da6-83bf8e4770d8","Type":"ContainerStarted","Data":"7b69d176d4f659a9e9f34515e3239b94d1a0a544f6a3630d1f924eb38c323cef"} Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.437689 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-8655ffb758-nbr8w" Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.441155 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/d4e7db93-1016-47dd-a978-bef88b6592ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4e7db93-1016-47dd-a978-bef88b6592ae" (UID: "d4e7db93-1016-47dd-a978-bef88b6592ae"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.446179 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6996656b77-zt25p" event={"ID":"c25a77e0-9bbd-4ff2-b53c-ecb4712198b1","Type":"ContainerStarted","Data":"c6254a942ddffd715f5accdd2dd4288fb86a503e34d6829be52ca32d570210e0"} Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.447082 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-6996656b77-zt25p" Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.488389 4970 generic.go:334] "Generic (PLEG): container finished" podID="83510120-26cc-4c80-be70-788fc8d78ba2" containerID="40aa585fd338df96cf85d1982fd28ffda9b4acbd2f72214deaa8c650f483baa6" exitCode=1 Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.488601 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8ff4d7fb5-kwj5g" event={"ID":"83510120-26cc-4c80-be70-788fc8d78ba2","Type":"ContainerDied","Data":"40aa585fd338df96cf85d1982fd28ffda9b4acbd2f72214deaa8c650f483baa6"} Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.489276 4970 scope.go:117] "RemoveContainer" containerID="40aa585fd338df96cf85d1982fd28ffda9b4acbd2f72214deaa8c650f483baa6" Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.501736 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4e7db93-1016-47dd-a978-bef88b6592ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.501772 4970 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4e7db93-1016-47dd-a978-bef88b6592ae-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.501794 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzswx\" (UniqueName: \"kubernetes.io/projected/d4e7db93-1016-47dd-a978-bef88b6592ae-kube-api-access-pzswx\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.529789 4970 generic.go:334] "Generic (PLEG): container finished" podID="9f9344ba-cd60-48a8-bbe9-304b8fef2cb0" containerID="5752b985f9df2266d8ab3d863f3acca11bb1d4ab866de732658b0fb6062a2547" exitCode=0 Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.529883 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6b9b88c557-ds6f2" event={"ID":"9f9344ba-cd60-48a8-bbe9-304b8fef2cb0","Type":"ContainerDied","Data":"5752b985f9df2266d8ab3d863f3acca11bb1d4ab866de732658b0fb6062a2547"} Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.529916 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6b9b88c557-ds6f2" event={"ID":"9f9344ba-cd60-48a8-bbe9-304b8fef2cb0","Type":"ContainerDied","Data":"c3b646cd6c75b7f3ad9128ff1a4f9716ab0b140876eba52b629d75d8fb1e2809"} Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.529935 4970 scope.go:117] "RemoveContainer" containerID="5752b985f9df2266d8ab3d863f3acca11bb1d4ab866de732658b0fb6062a2547" Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.530081 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6b9b88c557-ds6f2" Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.568451 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6f965965cd-jt4tk" event={"ID":"5b114785-502f-476f-a0d5-8ba13694acbc","Type":"ContainerStarted","Data":"69de1ec49ce101756ed8bd37862ea6d1362cac31dfdb8ef4026f642662c65a75"} Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.575310 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6f965965cd-jt4tk" Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.579095 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-85bf8b6f7-tq5wr" podStartSLOduration=14.579069588 podStartE2EDuration="14.579069588s" podCreationTimestamp="2025-12-09 12:29:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:29:50.44532641 +0000 UTC m=+1403.005807451" watchObservedRunningTime="2025-12-09 12:29:50.579069588 +0000 UTC m=+1403.139550669" Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.604307 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bl6gs\" (UniqueName: \"kubernetes.io/projected/9f9344ba-cd60-48a8-bbe9-304b8fef2cb0-kube-api-access-bl6gs\") pod \"9f9344ba-cd60-48a8-bbe9-304b8fef2cb0\" (UID: \"9f9344ba-cd60-48a8-bbe9-304b8fef2cb0\") " Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.604363 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f9344ba-cd60-48a8-bbe9-304b8fef2cb0-combined-ca-bundle\") pod \"9f9344ba-cd60-48a8-bbe9-304b8fef2cb0\" (UID: \"9f9344ba-cd60-48a8-bbe9-304b8fef2cb0\") " Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.604638 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f9344ba-cd60-48a8-bbe9-304b8fef2cb0-config-data\") pod \"9f9344ba-cd60-48a8-bbe9-304b8fef2cb0\" (UID: \"9f9344ba-cd60-48a8-bbe9-304b8fef2cb0\") " Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.604679 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f9344ba-cd60-48a8-bbe9-304b8fef2cb0-config-data-custom\") pod \"9f9344ba-cd60-48a8-bbe9-304b8fef2cb0\" (UID: \"9f9344ba-cd60-48a8-bbe9-304b8fef2cb0\") " Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.610810 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f9344ba-cd60-48a8-bbe9-304b8fef2cb0-kube-api-access-bl6gs" (OuterVolumeSpecName: "kube-api-access-bl6gs") pod "9f9344ba-cd60-48a8-bbe9-304b8fef2cb0" (UID: "9f9344ba-cd60-48a8-bbe9-304b8fef2cb0"). InnerVolumeSpecName "kube-api-access-bl6gs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.614580 4970 generic.go:334] "Generic (PLEG): container finished" podID="d4e7db93-1016-47dd-a978-bef88b6592ae" containerID="ffa78d3e5b3269980effefd040bb8ce6ec97ff6ab7ae56cb49a69d2e4b43ea63" exitCode=0 Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.614634 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5ccf77f85c-b5zqk" event={"ID":"d4e7db93-1016-47dd-a978-bef88b6592ae","Type":"ContainerDied","Data":"ffa78d3e5b3269980effefd040bb8ce6ec97ff6ab7ae56cb49a69d2e4b43ea63"} Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.614700 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5ccf77f85c-b5zqk" event={"ID":"d4e7db93-1016-47dd-a978-bef88b6592ae","Type":"ContainerDied","Data":"6c110e8bd050cbe6f228e583fd6977824f07ab5ad78567d805250d0c29224746"} Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.614756 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5ccf77f85c-b5zqk" Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.641613 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-657b4c4594-7x65f" event={"ID":"8051115f-1bf5-4043-b6b0-967d469c0d6a","Type":"ContainerStarted","Data":"417e30cadfff2d83c16a754675c70658d747d5e9344172742777a8a637cc2010"} Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.641693 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-657b4c4594-7x65f" Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.647027 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f9344ba-cd60-48a8-bbe9-304b8fef2cb0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9f9344ba-cd60-48a8-bbe9-304b8fef2cb0" (UID: "9f9344ba-cd60-48a8-bbe9-304b8fef2cb0"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.672546 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-8655ffb758-nbr8w" podStartSLOduration=11.672523086 podStartE2EDuration="11.672523086s" podCreationTimestamp="2025-12-09 12:29:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:29:50.488700171 +0000 UTC m=+1403.049181222" watchObservedRunningTime="2025-12-09 12:29:50.672523086 +0000 UTC m=+1403.233004147" Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.686682 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-6996656b77-zt25p" podStartSLOduration=11.686653850999999 podStartE2EDuration="11.686653851s" podCreationTimestamp="2025-12-09 12:29:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:29:50.529089522 +0000 UTC m=+1403.089570573" watchObservedRunningTime="2025-12-09 12:29:50.686653851 +0000 UTC m=+1403.247134912" Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.689820 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f9344ba-cd60-48a8-bbe9-304b8fef2cb0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f9344ba-cd60-48a8-bbe9-304b8fef2cb0" (UID: "9f9344ba-cd60-48a8-bbe9-304b8fef2cb0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.709393 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bl6gs\" (UniqueName: \"kubernetes.io/projected/9f9344ba-cd60-48a8-bbe9-304b8fef2cb0-kube-api-access-bl6gs\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.709432 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f9344ba-cd60-48a8-bbe9-304b8fef2cb0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.709446 4970 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f9344ba-cd60-48a8-bbe9-304b8fef2cb0-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.717903 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6f965965cd-jt4tk" podStartSLOduration=6.7178840300000005 podStartE2EDuration="6.71788403s" podCreationTimestamp="2025-12-09 12:29:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:29:50.60327187 +0000 UTC m=+1403.163752921" watchObservedRunningTime="2025-12-09 12:29:50.71788403 +0000 UTC m=+1403.278365081" Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.728059 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4e7db93-1016-47dd-a978-bef88b6592ae-config-data" (OuterVolumeSpecName: "config-data") pod "d4e7db93-1016-47dd-a978-bef88b6592ae" (UID: "d4e7db93-1016-47dd-a978-bef88b6592ae"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.728153 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-657b4c4594-7x65f" podStartSLOduration=6.728129481 podStartE2EDuration="6.728129481s" podCreationTimestamp="2025-12-09 12:29:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:29:50.660705723 +0000 UTC m=+1403.221186784" watchObservedRunningTime="2025-12-09 12:29:50.728129481 +0000 UTC m=+1403.288610532" Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.767223 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f9344ba-cd60-48a8-bbe9-304b8fef2cb0-config-data" (OuterVolumeSpecName: "config-data") pod "9f9344ba-cd60-48a8-bbe9-304b8fef2cb0" (UID: "9f9344ba-cd60-48a8-bbe9-304b8fef2cb0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.811710 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f9344ba-cd60-48a8-bbe9-304b8fef2cb0-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.811754 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4e7db93-1016-47dd-a978-bef88b6592ae-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.882193 4970 scope.go:117] "RemoveContainer" containerID="5752b985f9df2266d8ab3d863f3acca11bb1d4ab866de732658b0fb6062a2547" Dec 09 12:29:50 crc kubenswrapper[4970]: E1209 12:29:50.886417 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5752b985f9df2266d8ab3d863f3acca11bb1d4ab866de732658b0fb6062a2547\": container with ID starting with 5752b985f9df2266d8ab3d863f3acca11bb1d4ab866de732658b0fb6062a2547 not found: ID does not exist" containerID="5752b985f9df2266d8ab3d863f3acca11bb1d4ab866de732658b0fb6062a2547" Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.886472 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5752b985f9df2266d8ab3d863f3acca11bb1d4ab866de732658b0fb6062a2547"} err="failed to get container status \"5752b985f9df2266d8ab3d863f3acca11bb1d4ab866de732658b0fb6062a2547\": rpc error: code = NotFound desc = could not find container \"5752b985f9df2266d8ab3d863f3acca11bb1d4ab866de732658b0fb6062a2547\": container with ID starting with 5752b985f9df2266d8ab3d863f3acca11bb1d4ab866de732658b0fb6062a2547 not found: ID does not exist" Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.886501 4970 scope.go:117] "RemoveContainer" containerID="ffa78d3e5b3269980effefd040bb8ce6ec97ff6ab7ae56cb49a69d2e4b43ea63" Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.956201 4970 scope.go:117] "RemoveContainer" containerID="ffa78d3e5b3269980effefd040bb8ce6ec97ff6ab7ae56cb49a69d2e4b43ea63" Dec 09 12:29:50 crc kubenswrapper[4970]: E1209 12:29:50.956767 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffa78d3e5b3269980effefd040bb8ce6ec97ff6ab7ae56cb49a69d2e4b43ea63\": container with ID starting with ffa78d3e5b3269980effefd040bb8ce6ec97ff6ab7ae56cb49a69d2e4b43ea63 not found: ID does not exist" 
containerID="ffa78d3e5b3269980effefd040bb8ce6ec97ff6ab7ae56cb49a69d2e4b43ea63" Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.956799 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffa78d3e5b3269980effefd040bb8ce6ec97ff6ab7ae56cb49a69d2e4b43ea63"} err="failed to get container status \"ffa78d3e5b3269980effefd040bb8ce6ec97ff6ab7ae56cb49a69d2e4b43ea63\": rpc error: code = NotFound desc = could not find container \"ffa78d3e5b3269980effefd040bb8ce6ec97ff6ab7ae56cb49a69d2e4b43ea63\": container with ID starting with ffa78d3e5b3269980effefd040bb8ce6ec97ff6ab7ae56cb49a69d2e4b43ea63 not found: ID does not exist" Dec 09 12:29:50 crc kubenswrapper[4970]: I1209 12:29:50.963331 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6b9b88c557-ds6f2"] Dec 09 12:29:51 crc kubenswrapper[4970]: I1209 12:29:51.007897 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6b9b88c557-ds6f2"] Dec 09 12:29:51 crc kubenswrapper[4970]: I1209 12:29:51.028193 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5ccf77f85c-b5zqk"] Dec 09 12:29:51 crc kubenswrapper[4970]: I1209 12:29:51.043674 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-5ccf77f85c-b5zqk"] Dec 09 12:29:51 crc kubenswrapper[4970]: I1209 12:29:51.657766 4970 generic.go:334] "Generic (PLEG): container finished" podID="57effba7-488e-4045-8da6-83bf8e4770d8" containerID="09a263206eb06787743c53007205aded7ccb5f029df44b25050e1d79fba08a9a" exitCode=1 Dec 09 12:29:51 crc kubenswrapper[4970]: I1209 12:29:51.658155 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-8655ffb758-nbr8w" event={"ID":"57effba7-488e-4045-8da6-83bf8e4770d8","Type":"ContainerDied","Data":"09a263206eb06787743c53007205aded7ccb5f029df44b25050e1d79fba08a9a"} Dec 09 12:29:51 crc kubenswrapper[4970]: I1209 12:29:51.658964 4970 scope.go:117] "RemoveContainer" containerID="09a263206eb06787743c53007205aded7ccb5f029df44b25050e1d79fba08a9a" Dec 09 12:29:51 crc kubenswrapper[4970]: I1209 12:29:51.666144 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8ff4d7fb5-kwj5g" event={"ID":"83510120-26cc-4c80-be70-788fc8d78ba2","Type":"ContainerStarted","Data":"afb2a4fb1548b87960aa2974a3604e90d300c9f5c9847f33b1389331499c3400"} Dec 09 12:29:51 crc kubenswrapper[4970]: I1209 12:29:51.667064 4970 scope.go:117] "RemoveContainer" containerID="afb2a4fb1548b87960aa2974a3604e90d300c9f5c9847f33b1389331499c3400" Dec 09 12:29:51 crc kubenswrapper[4970]: E1209 12:29:51.667436 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-8ff4d7fb5-kwj5g_openstack(83510120-26cc-4c80-be70-788fc8d78ba2)\"" pod="openstack/heat-api-8ff4d7fb5-kwj5g" podUID="83510120-26cc-4c80-be70-788fc8d78ba2" Dec 09 12:29:51 crc kubenswrapper[4970]: I1209 12:29:51.678593 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7fff75d8-7b4d-47e2-893f-a401f568394e","Type":"ContainerStarted","Data":"2138dfc8702313c04c6c48ac32535d4748eaae83ee8af6235b20cf90ad836978"} Dec 09 12:29:51 crc kubenswrapper[4970]: I1209 12:29:51.832419 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f9344ba-cd60-48a8-bbe9-304b8fef2cb0" path="/var/lib/kubelet/pods/9f9344ba-cd60-48a8-bbe9-304b8fef2cb0/volumes" Dec 09 12:29:51 crc 
kubenswrapper[4970]: I1209 12:29:51.833334 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4e7db93-1016-47dd-a978-bef88b6592ae" path="/var/lib/kubelet/pods/d4e7db93-1016-47dd-a978-bef88b6592ae/volumes" Dec 09 12:29:52 crc kubenswrapper[4970]: I1209 12:29:52.691379 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7fff75d8-7b4d-47e2-893f-a401f568394e","Type":"ContainerStarted","Data":"7a3e5ce444149a467912ef666a2e05f3244e441af3f7870ed56f9b3fbb3811b9"} Dec 09 12:29:52 crc kubenswrapper[4970]: I1209 12:29:52.693762 4970 generic.go:334] "Generic (PLEG): container finished" podID="57effba7-488e-4045-8da6-83bf8e4770d8" containerID="f0b60d4a0ffb89f007bb9256d9a207a727903ac5d509389e94e76d44b48b91f4" exitCode=1 Dec 09 12:29:52 crc kubenswrapper[4970]: I1209 12:29:52.693829 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-8655ffb758-nbr8w" event={"ID":"57effba7-488e-4045-8da6-83bf8e4770d8","Type":"ContainerDied","Data":"f0b60d4a0ffb89f007bb9256d9a207a727903ac5d509389e94e76d44b48b91f4"} Dec 09 12:29:52 crc kubenswrapper[4970]: I1209 12:29:52.693877 4970 scope.go:117] "RemoveContainer" containerID="09a263206eb06787743c53007205aded7ccb5f029df44b25050e1d79fba08a9a" Dec 09 12:29:52 crc kubenswrapper[4970]: I1209 12:29:52.694654 4970 scope.go:117] "RemoveContainer" containerID="f0b60d4a0ffb89f007bb9256d9a207a727903ac5d509389e94e76d44b48b91f4" Dec 09 12:29:52 crc kubenswrapper[4970]: E1209 12:29:52.694982 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-8655ffb758-nbr8w_openstack(57effba7-488e-4045-8da6-83bf8e4770d8)\"" pod="openstack/heat-cfnapi-8655ffb758-nbr8w" podUID="57effba7-488e-4045-8da6-83bf8e4770d8" Dec 09 12:29:52 crc kubenswrapper[4970]: I1209 12:29:52.703841 4970 generic.go:334] "Generic (PLEG): container finished" podID="83510120-26cc-4c80-be70-788fc8d78ba2" containerID="afb2a4fb1548b87960aa2974a3604e90d300c9f5c9847f33b1389331499c3400" exitCode=1 Dec 09 12:29:52 crc kubenswrapper[4970]: I1209 12:29:52.705342 4970 scope.go:117] "RemoveContainer" containerID="afb2a4fb1548b87960aa2974a3604e90d300c9f5c9847f33b1389331499c3400" Dec 09 12:29:52 crc kubenswrapper[4970]: E1209 12:29:52.705591 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-8ff4d7fb5-kwj5g_openstack(83510120-26cc-4c80-be70-788fc8d78ba2)\"" pod="openstack/heat-api-8ff4d7fb5-kwj5g" podUID="83510120-26cc-4c80-be70-788fc8d78ba2" Dec 09 12:29:52 crc kubenswrapper[4970]: I1209 12:29:52.705933 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8ff4d7fb5-kwj5g" event={"ID":"83510120-26cc-4c80-be70-788fc8d78ba2","Type":"ContainerDied","Data":"afb2a4fb1548b87960aa2974a3604e90d300c9f5c9847f33b1389331499c3400"} Dec 09 12:29:52 crc kubenswrapper[4970]: I1209 12:29:52.744827 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-8679c95d76-9dntl" Dec 09 12:29:52 crc kubenswrapper[4970]: I1209 12:29:52.808122 4970 scope.go:117] "RemoveContainer" containerID="40aa585fd338df96cf85d1982fd28ffda9b4acbd2f72214deaa8c650f483baa6" Dec 09 12:29:53 crc kubenswrapper[4970]: I1209 12:29:53.717125 4970 scope.go:117] "RemoveContainer" 
containerID="afb2a4fb1548b87960aa2974a3604e90d300c9f5c9847f33b1389331499c3400" Dec 09 12:29:53 crc kubenswrapper[4970]: E1209 12:29:53.718558 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-8ff4d7fb5-kwj5g_openstack(83510120-26cc-4c80-be70-788fc8d78ba2)\"" pod="openstack/heat-api-8ff4d7fb5-kwj5g" podUID="83510120-26cc-4c80-be70-788fc8d78ba2" Dec 09 12:29:53 crc kubenswrapper[4970]: I1209 12:29:53.718750 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7fff75d8-7b4d-47e2-893f-a401f568394e","Type":"ContainerStarted","Data":"6c53aaafd678120e691690a282db068db212bd78bf4a76e938d9b79d293d9ab2"} Dec 09 12:29:53 crc kubenswrapper[4970]: I1209 12:29:53.721355 4970 scope.go:117] "RemoveContainer" containerID="f0b60d4a0ffb89f007bb9256d9a207a727903ac5d509389e94e76d44b48b91f4" Dec 09 12:29:53 crc kubenswrapper[4970]: E1209 12:29:53.721646 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-8655ffb758-nbr8w_openstack(57effba7-488e-4045-8da6-83bf8e4770d8)\"" pod="openstack/heat-cfnapi-8655ffb758-nbr8w" podUID="57effba7-488e-4045-8da6-83bf8e4770d8" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.054949 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-thg22"] Dec 09 12:29:54 crc kubenswrapper[4970]: E1209 12:29:54.056096 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f9344ba-cd60-48a8-bbe9-304b8fef2cb0" containerName="heat-api" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.056127 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f9344ba-cd60-48a8-bbe9-304b8fef2cb0" containerName="heat-api" Dec 09 12:29:54 crc kubenswrapper[4970]: E1209 12:29:54.056170 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4e7db93-1016-47dd-a978-bef88b6592ae" containerName="heat-cfnapi" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.056179 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4e7db93-1016-47dd-a978-bef88b6592ae" containerName="heat-cfnapi" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.056540 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4e7db93-1016-47dd-a978-bef88b6592ae" containerName="heat-cfnapi" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.056567 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f9344ba-cd60-48a8-bbe9-304b8fef2cb0" containerName="heat-api" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.057640 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-thg22" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.090101 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-thg22"] Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.166105 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-f4mbj"] Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.168075 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-f4mbj" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.190045 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-f4mbj"] Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.205099 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkhvw\" (UniqueName: \"kubernetes.io/projected/277c3243-bbe4-436e-a850-3619bcecc42a-kube-api-access-wkhvw\") pod \"nova-api-db-create-thg22\" (UID: \"277c3243-bbe4-436e-a850-3619bcecc42a\") " pod="openstack/nova-api-db-create-thg22" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.205206 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/277c3243-bbe4-436e-a850-3619bcecc42a-operator-scripts\") pod \"nova-api-db-create-thg22\" (UID: \"277c3243-bbe4-436e-a850-3619bcecc42a\") " pod="openstack/nova-api-db-create-thg22" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.263725 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-c370-account-create-update-kwdtv"] Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.266065 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c370-account-create-update-kwdtv" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.271558 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.276416 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-c370-account-create-update-kwdtv"] Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.306911 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkhvw\" (UniqueName: \"kubernetes.io/projected/277c3243-bbe4-436e-a850-3619bcecc42a-kube-api-access-wkhvw\") pod \"nova-api-db-create-thg22\" (UID: \"277c3243-bbe4-436e-a850-3619bcecc42a\") " pod="openstack/nova-api-db-create-thg22" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.307029 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/277c3243-bbe4-436e-a850-3619bcecc42a-operator-scripts\") pod \"nova-api-db-create-thg22\" (UID: \"277c3243-bbe4-436e-a850-3619bcecc42a\") " pod="openstack/nova-api-db-create-thg22" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.307067 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/213b2a4a-6575-4462-8637-09491c390553-operator-scripts\") pod \"nova-cell0-db-create-f4mbj\" (UID: \"213b2a4a-6575-4462-8637-09491c390553\") " pod="openstack/nova-cell0-db-create-f4mbj" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.307145 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9pv8\" (UniqueName: \"kubernetes.io/projected/213b2a4a-6575-4462-8637-09491c390553-kube-api-access-j9pv8\") pod \"nova-cell0-db-create-f4mbj\" (UID: \"213b2a4a-6575-4462-8637-09491c390553\") " pod="openstack/nova-cell0-db-create-f4mbj" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.307846 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/277c3243-bbe4-436e-a850-3619bcecc42a-operator-scripts\") pod \"nova-api-db-create-thg22\" (UID: \"277c3243-bbe4-436e-a850-3619bcecc42a\") " pod="openstack/nova-api-db-create-thg22" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.352214 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkhvw\" (UniqueName: \"kubernetes.io/projected/277c3243-bbe4-436e-a850-3619bcecc42a-kube-api-access-wkhvw\") pod \"nova-api-db-create-thg22\" (UID: \"277c3243-bbe4-436e-a850-3619bcecc42a\") " pod="openstack/nova-api-db-create-thg22" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.354960 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-z8cw7"] Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.356719 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-z8cw7" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.367890 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-z8cw7"] Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.382149 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-thg22" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.409618 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44bkv\" (UniqueName: \"kubernetes.io/projected/5375400c-350f-41f8-83ff-94071d6cc869-kube-api-access-44bkv\") pod \"nova-api-c370-account-create-update-kwdtv\" (UID: \"5375400c-350f-41f8-83ff-94071d6cc869\") " pod="openstack/nova-api-c370-account-create-update-kwdtv" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.409698 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9pv8\" (UniqueName: \"kubernetes.io/projected/213b2a4a-6575-4462-8637-09491c390553-kube-api-access-j9pv8\") pod \"nova-cell0-db-create-f4mbj\" (UID: \"213b2a4a-6575-4462-8637-09491c390553\") " pod="openstack/nova-cell0-db-create-f4mbj" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.409767 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5375400c-350f-41f8-83ff-94071d6cc869-operator-scripts\") pod \"nova-api-c370-account-create-update-kwdtv\" (UID: \"5375400c-350f-41f8-83ff-94071d6cc869\") " pod="openstack/nova-api-c370-account-create-update-kwdtv" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.409890 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/213b2a4a-6575-4462-8637-09491c390553-operator-scripts\") pod \"nova-cell0-db-create-f4mbj\" (UID: \"213b2a4a-6575-4462-8637-09491c390553\") " pod="openstack/nova-cell0-db-create-f4mbj" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.410548 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/213b2a4a-6575-4462-8637-09491c390553-operator-scripts\") pod \"nova-cell0-db-create-f4mbj\" (UID: \"213b2a4a-6575-4462-8637-09491c390553\") " pod="openstack/nova-cell0-db-create-f4mbj" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.434189 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9pv8\" (UniqueName: 
\"kubernetes.io/projected/213b2a4a-6575-4462-8637-09491c390553-kube-api-access-j9pv8\") pod \"nova-cell0-db-create-f4mbj\" (UID: \"213b2a4a-6575-4462-8637-09491c390553\") " pod="openstack/nova-cell0-db-create-f4mbj" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.471784 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-ce27-account-create-update-5mndj"] Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.487813 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-ce27-account-create-update-5mndj" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.491238 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.492595 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-f4mbj" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.517114 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5375400c-350f-41f8-83ff-94071d6cc869-operator-scripts\") pod \"nova-api-c370-account-create-update-kwdtv\" (UID: \"5375400c-350f-41f8-83ff-94071d6cc869\") " pod="openstack/nova-api-c370-account-create-update-kwdtv" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.519023 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5375400c-350f-41f8-83ff-94071d6cc869-operator-scripts\") pod \"nova-api-c370-account-create-update-kwdtv\" (UID: \"5375400c-350f-41f8-83ff-94071d6cc869\") " pod="openstack/nova-api-c370-account-create-update-kwdtv" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.519380 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44bkv\" (UniqueName: \"kubernetes.io/projected/5375400c-350f-41f8-83ff-94071d6cc869-kube-api-access-44bkv\") pod \"nova-api-c370-account-create-update-kwdtv\" (UID: \"5375400c-350f-41f8-83ff-94071d6cc869\") " pod="openstack/nova-api-c370-account-create-update-kwdtv" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.519441 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-598xk\" (UniqueName: \"kubernetes.io/projected/f6427604-e1a0-4853-bd35-71a69164978f-kube-api-access-598xk\") pod \"nova-cell1-db-create-z8cw7\" (UID: \"f6427604-e1a0-4853-bd35-71a69164978f\") " pod="openstack/nova-cell1-db-create-z8cw7" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.519545 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6427604-e1a0-4853-bd35-71a69164978f-operator-scripts\") pod \"nova-cell1-db-create-z8cw7\" (UID: \"f6427604-e1a0-4853-bd35-71a69164978f\") " pod="openstack/nova-cell1-db-create-z8cw7" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.569320 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-ce27-account-create-update-5mndj"] Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.596309 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44bkv\" (UniqueName: \"kubernetes.io/projected/5375400c-350f-41f8-83ff-94071d6cc869-kube-api-access-44bkv\") pod \"nova-api-c370-account-create-update-kwdtv\" (UID: 
\"5375400c-350f-41f8-83ff-94071d6cc869\") " pod="openstack/nova-api-c370-account-create-update-kwdtv" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.654722 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-598xk\" (UniqueName: \"kubernetes.io/projected/f6427604-e1a0-4853-bd35-71a69164978f-kube-api-access-598xk\") pod \"nova-cell1-db-create-z8cw7\" (UID: \"f6427604-e1a0-4853-bd35-71a69164978f\") " pod="openstack/nova-cell1-db-create-z8cw7" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.654824 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6427604-e1a0-4853-bd35-71a69164978f-operator-scripts\") pod \"nova-cell1-db-create-z8cw7\" (UID: \"f6427604-e1a0-4853-bd35-71a69164978f\") " pod="openstack/nova-cell1-db-create-z8cw7" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.654937 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87f97ec0-b4d2-4a39-9fc2-dbe1a5791c17-operator-scripts\") pod \"nova-cell0-ce27-account-create-update-5mndj\" (UID: \"87f97ec0-b4d2-4a39-9fc2-dbe1a5791c17\") " pod="openstack/nova-cell0-ce27-account-create-update-5mndj" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.654983 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsnsj\" (UniqueName: \"kubernetes.io/projected/87f97ec0-b4d2-4a39-9fc2-dbe1a5791c17-kube-api-access-wsnsj\") pod \"nova-cell0-ce27-account-create-update-5mndj\" (UID: \"87f97ec0-b4d2-4a39-9fc2-dbe1a5791c17\") " pod="openstack/nova-cell0-ce27-account-create-update-5mndj" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.656086 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6427604-e1a0-4853-bd35-71a69164978f-operator-scripts\") pod \"nova-cell1-db-create-z8cw7\" (UID: \"f6427604-e1a0-4853-bd35-71a69164978f\") " pod="openstack/nova-cell1-db-create-z8cw7" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.695370 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-3084-account-create-update-jgbjt"] Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.697263 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3084-account-create-update-jgbjt" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.697290 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-598xk\" (UniqueName: \"kubernetes.io/projected/f6427604-e1a0-4853-bd35-71a69164978f-kube-api-access-598xk\") pod \"nova-cell1-db-create-z8cw7\" (UID: \"f6427604-e1a0-4853-bd35-71a69164978f\") " pod="openstack/nova-cell1-db-create-z8cw7" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.712452 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.749507 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3084-account-create-update-jgbjt"] Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.760835 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87f97ec0-b4d2-4a39-9fc2-dbe1a5791c17-operator-scripts\") pod \"nova-cell0-ce27-account-create-update-5mndj\" (UID: \"87f97ec0-b4d2-4a39-9fc2-dbe1a5791c17\") " pod="openstack/nova-cell0-ce27-account-create-update-5mndj" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.760900 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsnsj\" (UniqueName: \"kubernetes.io/projected/87f97ec0-b4d2-4a39-9fc2-dbe1a5791c17-kube-api-access-wsnsj\") pod \"nova-cell0-ce27-account-create-update-5mndj\" (UID: \"87f97ec0-b4d2-4a39-9fc2-dbe1a5791c17\") " pod="openstack/nova-cell0-ce27-account-create-update-5mndj" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.762749 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87f97ec0-b4d2-4a39-9fc2-dbe1a5791c17-operator-scripts\") pod \"nova-cell0-ce27-account-create-update-5mndj\" (UID: \"87f97ec0-b4d2-4a39-9fc2-dbe1a5791c17\") " pod="openstack/nova-cell0-ce27-account-create-update-5mndj" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.787062 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsnsj\" (UniqueName: \"kubernetes.io/projected/87f97ec0-b4d2-4a39-9fc2-dbe1a5791c17-kube-api-access-wsnsj\") pod \"nova-cell0-ce27-account-create-update-5mndj\" (UID: \"87f97ec0-b4d2-4a39-9fc2-dbe1a5791c17\") " pod="openstack/nova-cell0-ce27-account-create-update-5mndj" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.855069 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-z8cw7" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.867536 4970 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-8ff4d7fb5-kwj5g" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.867928 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-8ff4d7fb5-kwj5g" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.868620 4970 scope.go:117] "RemoveContainer" containerID="afb2a4fb1548b87960aa2974a3604e90d300c9f5c9847f33b1389331499c3400" Dec 09 12:29:54 crc kubenswrapper[4970]: E1209 12:29:54.868914 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-8ff4d7fb5-kwj5g_openstack(83510120-26cc-4c80-be70-788fc8d78ba2)\"" pod="openstack/heat-api-8ff4d7fb5-kwj5g" podUID="83510120-26cc-4c80-be70-788fc8d78ba2" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.868971 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stz8j\" (UniqueName: \"kubernetes.io/projected/b9e3f275-d39e-4777-a04a-ce4b2a642952-kube-api-access-stz8j\") pod \"nova-cell1-3084-account-create-update-jgbjt\" (UID: \"b9e3f275-d39e-4777-a04a-ce4b2a642952\") " pod="openstack/nova-cell1-3084-account-create-update-jgbjt" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.869068 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9e3f275-d39e-4777-a04a-ce4b2a642952-operator-scripts\") pod \"nova-cell1-3084-account-create-update-jgbjt\" (UID: \"b9e3f275-d39e-4777-a04a-ce4b2a642952\") " pod="openstack/nova-cell1-3084-account-create-update-jgbjt" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.873482 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-8655ffb758-nbr8w" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.874346 4970 scope.go:117] "RemoveContainer" containerID="f0b60d4a0ffb89f007bb9256d9a207a727903ac5d509389e94e76d44b48b91f4" Dec 09 12:29:54 crc kubenswrapper[4970]: E1209 12:29:54.874560 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-8655ffb758-nbr8w_openstack(57effba7-488e-4045-8da6-83bf8e4770d8)\"" pod="openstack/heat-cfnapi-8655ffb758-nbr8w" podUID="57effba7-488e-4045-8da6-83bf8e4770d8" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.876050 4970 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-8655ffb758-nbr8w" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.893730 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c370-account-create-update-kwdtv" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.894003 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-ce27-account-create-update-5mndj" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.971891 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stz8j\" (UniqueName: \"kubernetes.io/projected/b9e3f275-d39e-4777-a04a-ce4b2a642952-kube-api-access-stz8j\") pod \"nova-cell1-3084-account-create-update-jgbjt\" (UID: \"b9e3f275-d39e-4777-a04a-ce4b2a642952\") " pod="openstack/nova-cell1-3084-account-create-update-jgbjt" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.972018 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9e3f275-d39e-4777-a04a-ce4b2a642952-operator-scripts\") pod \"nova-cell1-3084-account-create-update-jgbjt\" (UID: \"b9e3f275-d39e-4777-a04a-ce4b2a642952\") " pod="openstack/nova-cell1-3084-account-create-update-jgbjt" Dec 09 12:29:54 crc kubenswrapper[4970]: I1209 12:29:54.978054 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9e3f275-d39e-4777-a04a-ce4b2a642952-operator-scripts\") pod \"nova-cell1-3084-account-create-update-jgbjt\" (UID: \"b9e3f275-d39e-4777-a04a-ce4b2a642952\") " pod="openstack/nova-cell1-3084-account-create-update-jgbjt" Dec 09 12:29:55 crc kubenswrapper[4970]: I1209 12:29:55.022210 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stz8j\" (UniqueName: \"kubernetes.io/projected/b9e3f275-d39e-4777-a04a-ce4b2a642952-kube-api-access-stz8j\") pod \"nova-cell1-3084-account-create-update-jgbjt\" (UID: \"b9e3f275-d39e-4777-a04a-ce4b2a642952\") " pod="openstack/nova-cell1-3084-account-create-update-jgbjt" Dec 09 12:29:55 crc kubenswrapper[4970]: I1209 12:29:55.038093 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3084-account-create-update-jgbjt" Dec 09 12:29:55 crc kubenswrapper[4970]: W1209 12:29:55.141374 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod277c3243_bbe4_436e_a850_3619bcecc42a.slice/crio-b4441a54b50ece18396433fb2e6e5c2feb862a40bc4930089647167022af84df WatchSource:0}: Error finding container b4441a54b50ece18396433fb2e6e5c2feb862a40bc4930089647167022af84df: Status 404 returned error can't find the container with id b4441a54b50ece18396433fb2e6e5c2feb862a40bc4930089647167022af84df Dec 09 12:29:55 crc kubenswrapper[4970]: I1209 12:29:55.147709 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-thg22"] Dec 09 12:29:55 crc kubenswrapper[4970]: I1209 12:29:55.488456 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-f4mbj"] Dec 09 12:29:55 crc kubenswrapper[4970]: I1209 12:29:55.779714 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-thg22" event={"ID":"277c3243-bbe4-436e-a850-3619bcecc42a","Type":"ContainerStarted","Data":"b4441a54b50ece18396433fb2e6e5c2feb862a40bc4930089647167022af84df"} Dec 09 12:29:55 crc kubenswrapper[4970]: I1209 12:29:55.781282 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-f4mbj" event={"ID":"213b2a4a-6575-4462-8637-09491c390553","Type":"ContainerStarted","Data":"94640e9dc825bfa9db37ddff7fc049d305b286967fa83bff389fe73bb893196a"} Dec 09 12:29:55 crc kubenswrapper[4970]: I1209 12:29:55.782349 4970 scope.go:117] "RemoveContainer" containerID="afb2a4fb1548b87960aa2974a3604e90d300c9f5c9847f33b1389331499c3400" Dec 09 12:29:55 crc kubenswrapper[4970]: I1209 12:29:55.782502 4970 scope.go:117] "RemoveContainer" containerID="f0b60d4a0ffb89f007bb9256d9a207a727903ac5d509389e94e76d44b48b91f4" Dec 09 12:29:55 crc kubenswrapper[4970]: E1209 12:29:55.782608 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-8ff4d7fb5-kwj5g_openstack(83510120-26cc-4c80-be70-788fc8d78ba2)\"" pod="openstack/heat-api-8ff4d7fb5-kwj5g" podUID="83510120-26cc-4c80-be70-788fc8d78ba2" Dec 09 12:29:55 crc kubenswrapper[4970]: E1209 12:29:55.782943 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-8655ffb758-nbr8w_openstack(57effba7-488e-4045-8da6-83bf8e4770d8)\"" pod="openstack/heat-cfnapi-8655ffb758-nbr8w" podUID="57effba7-488e-4045-8da6-83bf8e4770d8" Dec 09 12:29:57 crc kubenswrapper[4970]: W1209 12:29:57.053591 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87f97ec0_b4d2_4a39_9fc2_dbe1a5791c17.slice/crio-4b56e22afa5670df6a29f3bfaca780e66412f95a521feb309b86a12e8ecee51a WatchSource:0}: Error finding container 4b56e22afa5670df6a29f3bfaca780e66412f95a521feb309b86a12e8ecee51a: Status 404 returned error can't find the container with id 4b56e22afa5670df6a29f3bfaca780e66412f95a521feb309b86a12e8ecee51a Dec 09 12:29:57 crc kubenswrapper[4970]: I1209 12:29:57.055234 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-ce27-account-create-update-5mndj"] Dec 09 12:29:57 crc kubenswrapper[4970]: I1209 12:29:57.238962 
4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-85bf8b6f7-tq5wr" Dec 09 12:29:57 crc kubenswrapper[4970]: I1209 12:29:57.250810 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-85bf8b6f7-tq5wr" Dec 09 12:29:57 crc kubenswrapper[4970]: I1209 12:29:57.302784 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-z8cw7"] Dec 09 12:29:57 crc kubenswrapper[4970]: I1209 12:29:57.425284 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-3084-account-create-update-jgbjt"] Dec 09 12:29:57 crc kubenswrapper[4970]: I1209 12:29:57.801346 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-c370-account-create-update-kwdtv"] Dec 09 12:29:57 crc kubenswrapper[4970]: I1209 12:29:57.894891 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-thg22" event={"ID":"277c3243-bbe4-436e-a850-3619bcecc42a","Type":"ContainerStarted","Data":"8e7e02f2a80fc8868e889e1a05bfc91d9ff57befb9874c28d33f3be135b24f8d"} Dec 09 12:29:57 crc kubenswrapper[4970]: I1209 12:29:57.900688 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-f4mbj" event={"ID":"213b2a4a-6575-4462-8637-09491c390553","Type":"ContainerStarted","Data":"bf9fbe2859747fd62421d9f31218e353518c50dd55e1fd78f13ec74a0ec726b9"} Dec 09 12:29:57 crc kubenswrapper[4970]: I1209 12:29:57.912137 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7fff75d8-7b4d-47e2-893f-a401f568394e","Type":"ContainerStarted","Data":"f253eaa14920bda3599205efb1e162e90317348a041d45fefe549d348921dbbf"} Dec 09 12:29:57 crc kubenswrapper[4970]: I1209 12:29:57.913703 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 12:29:57 crc kubenswrapper[4970]: I1209 12:29:57.926370 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7756b9d78c-5ctr9" Dec 09 12:29:57 crc kubenswrapper[4970]: I1209 12:29:57.927406 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-ce27-account-create-update-5mndj" event={"ID":"87f97ec0-b4d2-4a39-9fc2-dbe1a5791c17","Type":"ContainerStarted","Data":"f93fccb5021fdf96c86298a57962ad656b1caa09ca0aaa21f850a136d35da43b"} Dec 09 12:29:57 crc kubenswrapper[4970]: I1209 12:29:57.927458 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-ce27-account-create-update-5mndj" event={"ID":"87f97ec0-b4d2-4a39-9fc2-dbe1a5791c17","Type":"ContainerStarted","Data":"4b56e22afa5670df6a29f3bfaca780e66412f95a521feb309b86a12e8ecee51a"} Dec 09 12:29:57 crc kubenswrapper[4970]: I1209 12:29:57.930618 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3084-account-create-update-jgbjt" event={"ID":"b9e3f275-d39e-4777-a04a-ce4b2a642952","Type":"ContainerStarted","Data":"13db9f10db71927c2f73239f5a9e48c45ed4ff5ae8f5b85fdf9812bc39056926"} Dec 09 12:29:57 crc kubenswrapper[4970]: I1209 12:29:57.943827 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-z8cw7" event={"ID":"f6427604-e1a0-4853-bd35-71a69164978f","Type":"ContainerStarted","Data":"e505cd9be1c6020f11f60332a6de7e0bc5ea725a3e7032a1f53b878a280b3782"} Dec 09 12:29:58 crc kubenswrapper[4970]: I1209 12:29:58.107535 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell0-db-create-f4mbj" podStartSLOduration=4.107518051 podStartE2EDuration="4.107518051s" podCreationTimestamp="2025-12-09 12:29:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:29:58.051402852 +0000 UTC m=+1410.611883903" watchObservedRunningTime="2025-12-09 12:29:58.107518051 +0000 UTC m=+1410.667999102" Dec 09 12:29:58 crc kubenswrapper[4970]: I1209 12:29:58.150952 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-thg22" podStartSLOduration=4.150930442 podStartE2EDuration="4.150930442s" podCreationTimestamp="2025-12-09 12:29:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:29:58.097505375 +0000 UTC m=+1410.657986426" watchObservedRunningTime="2025-12-09 12:29:58.150930442 +0000 UTC m=+1410.711411483" Dec 09 12:29:58 crc kubenswrapper[4970]: I1209 12:29:58.249499 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-ce27-account-create-update-5mndj" podStartSLOduration=4.249476316 podStartE2EDuration="4.249476316s" podCreationTimestamp="2025-12-09 12:29:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:29:58.135622296 +0000 UTC m=+1410.696103347" watchObservedRunningTime="2025-12-09 12:29:58.249476316 +0000 UTC m=+1410.809957367" Dec 09 12:29:58 crc kubenswrapper[4970]: I1209 12:29:58.279423 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.996684239 podStartE2EDuration="10.27940466s" podCreationTimestamp="2025-12-09 12:29:48 +0000 UTC" firstStartedPulling="2025-12-09 12:29:50.059364543 +0000 UTC m=+1402.619845594" lastFinishedPulling="2025-12-09 12:29:56.342084964 +0000 UTC m=+1408.902566015" observedRunningTime="2025-12-09 12:29:58.160834175 +0000 UTC m=+1410.721315236" watchObservedRunningTime="2025-12-09 12:29:58.27940466 +0000 UTC m=+1410.839885711" Dec 09 12:29:58 crc kubenswrapper[4970]: I1209 12:29:58.281297 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-7hh4d"] Dec 09 12:29:58 crc kubenswrapper[4970]: I1209 12:29:58.281531 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-7hh4d" podUID="5179fa9e-e5d0-4665-8356-ca6026fe2d64" containerName="dnsmasq-dns" containerID="cri-o://2c1629959448a932d8c5f58b5a90152025aec271a215042fdb9fc1828c3dc759" gracePeriod=10 Dec 09 12:29:58 crc kubenswrapper[4970]: I1209 12:29:58.988929 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c370-account-create-update-kwdtv" event={"ID":"5375400c-350f-41f8-83ff-94071d6cc869","Type":"ContainerStarted","Data":"84fbb57175c0d463209d0a524372b3c1c0af13759723ff20c3f048333747168f"} Dec 09 12:29:58 crc kubenswrapper[4970]: I1209 12:29:58.989681 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c370-account-create-update-kwdtv" event={"ID":"5375400c-350f-41f8-83ff-94071d6cc869","Type":"ContainerStarted","Data":"0ed5ee7cc1313741fc3bf9af2b11091c3e1803e41e97352d1bd1a35cfacb9af0"} Dec 09 12:29:58 crc kubenswrapper[4970]: I1209 12:29:58.992085 4970 generic.go:334] "Generic (PLEG): container finished" podID="5179fa9e-e5d0-4665-8356-ca6026fe2d64" 
containerID="2c1629959448a932d8c5f58b5a90152025aec271a215042fdb9fc1828c3dc759" exitCode=0 Dec 09 12:29:58 crc kubenswrapper[4970]: I1209 12:29:58.992154 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-7hh4d" event={"ID":"5179fa9e-e5d0-4665-8356-ca6026fe2d64","Type":"ContainerDied","Data":"2c1629959448a932d8c5f58b5a90152025aec271a215042fdb9fc1828c3dc759"} Dec 09 12:29:58 crc kubenswrapper[4970]: I1209 12:29:58.997815 4970 generic.go:334] "Generic (PLEG): container finished" podID="87f97ec0-b4d2-4a39-9fc2-dbe1a5791c17" containerID="f93fccb5021fdf96c86298a57962ad656b1caa09ca0aaa21f850a136d35da43b" exitCode=0 Dec 09 12:29:58 crc kubenswrapper[4970]: I1209 12:29:58.998070 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-ce27-account-create-update-5mndj" event={"ID":"87f97ec0-b4d2-4a39-9fc2-dbe1a5791c17","Type":"ContainerDied","Data":"f93fccb5021fdf96c86298a57962ad656b1caa09ca0aaa21f850a136d35da43b"} Dec 09 12:29:59 crc kubenswrapper[4970]: I1209 12:29:59.007504 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3084-account-create-update-jgbjt" event={"ID":"b9e3f275-d39e-4777-a04a-ce4b2a642952","Type":"ContainerStarted","Data":"3d2f114427901867ba62faf64317031624151299301e22f9e5a44dd5c6fc399e"} Dec 09 12:29:59 crc kubenswrapper[4970]: I1209 12:29:59.013917 4970 generic.go:334] "Generic (PLEG): container finished" podID="f6427604-e1a0-4853-bd35-71a69164978f" containerID="d1cd7182f2627bd3a1de015d7f9bba4b971edd55e52e9a52987550cb0669d194" exitCode=0 Dec 09 12:29:59 crc kubenswrapper[4970]: I1209 12:29:59.014812 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-z8cw7" event={"ID":"f6427604-e1a0-4853-bd35-71a69164978f","Type":"ContainerDied","Data":"d1cd7182f2627bd3a1de015d7f9bba4b971edd55e52e9a52987550cb0669d194"} Dec 09 12:29:59 crc kubenswrapper[4970]: I1209 12:29:59.023626 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-c370-account-create-update-kwdtv" podStartSLOduration=5.023609568 podStartE2EDuration="5.023609568s" podCreationTimestamp="2025-12-09 12:29:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:29:59.018733019 +0000 UTC m=+1411.579214070" watchObservedRunningTime="2025-12-09 12:29:59.023609568 +0000 UTC m=+1411.584090619" Dec 09 12:29:59 crc kubenswrapper[4970]: I1209 12:29:59.031538 4970 generic.go:334] "Generic (PLEG): container finished" podID="277c3243-bbe4-436e-a850-3619bcecc42a" containerID="8e7e02f2a80fc8868e889e1a05bfc91d9ff57befb9874c28d33f3be135b24f8d" exitCode=0 Dec 09 12:29:59 crc kubenswrapper[4970]: I1209 12:29:59.031595 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-thg22" event={"ID":"277c3243-bbe4-436e-a850-3619bcecc42a","Type":"ContainerDied","Data":"8e7e02f2a80fc8868e889e1a05bfc91d9ff57befb9874c28d33f3be135b24f8d"} Dec 09 12:29:59 crc kubenswrapper[4970]: I1209 12:29:59.048605 4970 generic.go:334] "Generic (PLEG): container finished" podID="213b2a4a-6575-4462-8637-09491c390553" containerID="bf9fbe2859747fd62421d9f31218e353518c50dd55e1fd78f13ec74a0ec726b9" exitCode=0 Dec 09 12:29:59 crc kubenswrapper[4970]: I1209 12:29:59.054195 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-f4mbj" 
event={"ID":"213b2a4a-6575-4462-8637-09491c390553","Type":"ContainerDied","Data":"bf9fbe2859747fd62421d9f31218e353518c50dd55e1fd78f13ec74a0ec726b9"} Dec 09 12:29:59 crc kubenswrapper[4970]: I1209 12:29:59.319581 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-7hh4d" Dec 09 12:29:59 crc kubenswrapper[4970]: I1209 12:29:59.466628 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5179fa9e-e5d0-4665-8356-ca6026fe2d64-ovsdbserver-sb\") pod \"5179fa9e-e5d0-4665-8356-ca6026fe2d64\" (UID: \"5179fa9e-e5d0-4665-8356-ca6026fe2d64\") " Dec 09 12:29:59 crc kubenswrapper[4970]: I1209 12:29:59.466850 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5179fa9e-e5d0-4665-8356-ca6026fe2d64-dns-svc\") pod \"5179fa9e-e5d0-4665-8356-ca6026fe2d64\" (UID: \"5179fa9e-e5d0-4665-8356-ca6026fe2d64\") " Dec 09 12:29:59 crc kubenswrapper[4970]: I1209 12:29:59.466981 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qxrr\" (UniqueName: \"kubernetes.io/projected/5179fa9e-e5d0-4665-8356-ca6026fe2d64-kube-api-access-2qxrr\") pod \"5179fa9e-e5d0-4665-8356-ca6026fe2d64\" (UID: \"5179fa9e-e5d0-4665-8356-ca6026fe2d64\") " Dec 09 12:29:59 crc kubenswrapper[4970]: I1209 12:29:59.467054 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5179fa9e-e5d0-4665-8356-ca6026fe2d64-ovsdbserver-nb\") pod \"5179fa9e-e5d0-4665-8356-ca6026fe2d64\" (UID: \"5179fa9e-e5d0-4665-8356-ca6026fe2d64\") " Dec 09 12:29:59 crc kubenswrapper[4970]: I1209 12:29:59.467693 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5179fa9e-e5d0-4665-8356-ca6026fe2d64-config\") pod \"5179fa9e-e5d0-4665-8356-ca6026fe2d64\" (UID: \"5179fa9e-e5d0-4665-8356-ca6026fe2d64\") " Dec 09 12:29:59 crc kubenswrapper[4970]: I1209 12:29:59.467763 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5179fa9e-e5d0-4665-8356-ca6026fe2d64-dns-swift-storage-0\") pod \"5179fa9e-e5d0-4665-8356-ca6026fe2d64\" (UID: \"5179fa9e-e5d0-4665-8356-ca6026fe2d64\") " Dec 09 12:29:59 crc kubenswrapper[4970]: I1209 12:29:59.501813 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5179fa9e-e5d0-4665-8356-ca6026fe2d64-kube-api-access-2qxrr" (OuterVolumeSpecName: "kube-api-access-2qxrr") pod "5179fa9e-e5d0-4665-8356-ca6026fe2d64" (UID: "5179fa9e-e5d0-4665-8356-ca6026fe2d64"). InnerVolumeSpecName "kube-api-access-2qxrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:29:59 crc kubenswrapper[4970]: I1209 12:29:59.562222 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5179fa9e-e5d0-4665-8356-ca6026fe2d64-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5179fa9e-e5d0-4665-8356-ca6026fe2d64" (UID: "5179fa9e-e5d0-4665-8356-ca6026fe2d64"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:29:59 crc kubenswrapper[4970]: I1209 12:29:59.571338 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5179fa9e-e5d0-4665-8356-ca6026fe2d64-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5179fa9e-e5d0-4665-8356-ca6026fe2d64" (UID: "5179fa9e-e5d0-4665-8356-ca6026fe2d64"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:29:59 crc kubenswrapper[4970]: I1209 12:29:59.573344 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qxrr\" (UniqueName: \"kubernetes.io/projected/5179fa9e-e5d0-4665-8356-ca6026fe2d64-kube-api-access-2qxrr\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:59 crc kubenswrapper[4970]: I1209 12:29:59.573373 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5179fa9e-e5d0-4665-8356-ca6026fe2d64-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:59 crc kubenswrapper[4970]: I1209 12:29:59.573386 4970 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5179fa9e-e5d0-4665-8356-ca6026fe2d64-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:59 crc kubenswrapper[4970]: I1209 12:29:59.591723 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5179fa9e-e5d0-4665-8356-ca6026fe2d64-config" (OuterVolumeSpecName: "config") pod "5179fa9e-e5d0-4665-8356-ca6026fe2d64" (UID: "5179fa9e-e5d0-4665-8356-ca6026fe2d64"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:29:59 crc kubenswrapper[4970]: I1209 12:29:59.600685 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5179fa9e-e5d0-4665-8356-ca6026fe2d64-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5179fa9e-e5d0-4665-8356-ca6026fe2d64" (UID: "5179fa9e-e5d0-4665-8356-ca6026fe2d64"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:29:59 crc kubenswrapper[4970]: I1209 12:29:59.616933 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5179fa9e-e5d0-4665-8356-ca6026fe2d64-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5179fa9e-e5d0-4665-8356-ca6026fe2d64" (UID: "5179fa9e-e5d0-4665-8356-ca6026fe2d64"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:29:59 crc kubenswrapper[4970]: I1209 12:29:59.665586 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-657b4c4594-7x65f" Dec 09 12:29:59 crc kubenswrapper[4970]: I1209 12:29:59.675140 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5179fa9e-e5d0-4665-8356-ca6026fe2d64-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:59 crc kubenswrapper[4970]: I1209 12:29:59.675178 4970 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5179fa9e-e5d0-4665-8356-ca6026fe2d64-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:59 crc kubenswrapper[4970]: I1209 12:29:59.675186 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5179fa9e-e5d0-4665-8356-ca6026fe2d64-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:29:59 crc kubenswrapper[4970]: I1209 12:29:59.730588 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-8655ffb758-nbr8w"] Dec 09 12:29:59 crc kubenswrapper[4970]: I1209 12:29:59.957459 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-6996656b77-zt25p" Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.051466 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-8679c95d76-9dntl"] Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.051710 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-8679c95d76-9dntl" podUID="dec002e4-c154-4c9b-8e5c-391bd2c2ce8a" containerName="heat-engine" containerID="cri-o://a715dc7aa464f31deadee24e154cd31a9cd1a9d73088b71939fc1e477617749e" gracePeriod=60 Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.078987 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-7hh4d" Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.080054 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-7hh4d" event={"ID":"5179fa9e-e5d0-4665-8356-ca6026fe2d64","Type":"ContainerDied","Data":"e296d3378dece591bdb82103197c2e3648196eeec162abceca0ab3358c7cecd4"} Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.080104 4970 scope.go:117] "RemoveContainer" containerID="2c1629959448a932d8c5f58b5a90152025aec271a215042fdb9fc1828c3dc759" Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.095643 4970 generic.go:334] "Generic (PLEG): container finished" podID="b9e3f275-d39e-4777-a04a-ce4b2a642952" containerID="3d2f114427901867ba62faf64317031624151299301e22f9e5a44dd5c6fc399e" exitCode=0 Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.095961 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3084-account-create-update-jgbjt" event={"ID":"b9e3f275-d39e-4777-a04a-ce4b2a642952","Type":"ContainerDied","Data":"3d2f114427901867ba62faf64317031624151299301e22f9e5a44dd5c6fc399e"} Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.102153 4970 generic.go:334] "Generic (PLEG): container finished" podID="5375400c-350f-41f8-83ff-94071d6cc869" containerID="84fbb57175c0d463209d0a524372b3c1c0af13759723ff20c3f048333747168f" exitCode=0 Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.102978 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c370-account-create-update-kwdtv" event={"ID":"5375400c-350f-41f8-83ff-94071d6cc869","Type":"ContainerDied","Data":"84fbb57175c0d463209d0a524372b3c1c0af13759723ff20c3f048333747168f"} Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.228613 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-8655ffb758-nbr8w" Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.243391 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421390-2f5q2"] Dec 09 12:30:00 crc kubenswrapper[4970]: E1209 12:30:00.244078 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57effba7-488e-4045-8da6-83bf8e4770d8" containerName="heat-cfnapi" Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.244106 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="57effba7-488e-4045-8da6-83bf8e4770d8" containerName="heat-cfnapi" Dec 09 12:30:00 crc kubenswrapper[4970]: E1209 12:30:00.244129 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5179fa9e-e5d0-4665-8356-ca6026fe2d64" containerName="init" Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.244137 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="5179fa9e-e5d0-4665-8356-ca6026fe2d64" containerName="init" Dec 09 12:30:00 crc kubenswrapper[4970]: E1209 12:30:00.244158 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57effba7-488e-4045-8da6-83bf8e4770d8" containerName="heat-cfnapi" Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.244166 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="57effba7-488e-4045-8da6-83bf8e4770d8" containerName="heat-cfnapi" Dec 09 12:30:00 crc kubenswrapper[4970]: E1209 12:30:00.244201 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5179fa9e-e5d0-4665-8356-ca6026fe2d64" containerName="dnsmasq-dns" Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.244208 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="5179fa9e-e5d0-4665-8356-ca6026fe2d64" containerName="dnsmasq-dns" Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.244502 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="57effba7-488e-4045-8da6-83bf8e4770d8" containerName="heat-cfnapi" Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.244523 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="5179fa9e-e5d0-4665-8356-ca6026fe2d64" containerName="dnsmasq-dns" Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.244544 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="57effba7-488e-4045-8da6-83bf8e4770d8" containerName="heat-cfnapi" Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.245628 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421390-2f5q2" Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.251833 4970 scope.go:117] "RemoveContainer" containerID="cab81b9c3a5afd717961bb6a2b5497c5a1a2a053f49609ee339d70a88dbb6657" Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.252687 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.259537 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.283004 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421390-2f5q2"] Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.309322 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-7hh4d"] Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.328018 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-7hh4d"] Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.394529 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dzm5\" (UniqueName: \"kubernetes.io/projected/57effba7-488e-4045-8da6-83bf8e4770d8-kube-api-access-8dzm5\") pod \"57effba7-488e-4045-8da6-83bf8e4770d8\" (UID: \"57effba7-488e-4045-8da6-83bf8e4770d8\") " Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.394599 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57effba7-488e-4045-8da6-83bf8e4770d8-config-data\") pod \"57effba7-488e-4045-8da6-83bf8e4770d8\" (UID: \"57effba7-488e-4045-8da6-83bf8e4770d8\") " Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.394718 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/57effba7-488e-4045-8da6-83bf8e4770d8-config-data-custom\") pod \"57effba7-488e-4045-8da6-83bf8e4770d8\" (UID: \"57effba7-488e-4045-8da6-83bf8e4770d8\") " Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.394996 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57effba7-488e-4045-8da6-83bf8e4770d8-combined-ca-bundle\") pod \"57effba7-488e-4045-8da6-83bf8e4770d8\" (UID: \"57effba7-488e-4045-8da6-83bf8e4770d8\") " Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.395568 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2587s\" (UniqueName: \"kubernetes.io/projected/59978ade-1525-4f29-908f-026970955862-kube-api-access-2587s\") pod \"collect-profiles-29421390-2f5q2\" (UID: \"59978ade-1525-4f29-908f-026970955862\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421390-2f5q2" Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.395679 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/59978ade-1525-4f29-908f-026970955862-secret-volume\") pod \"collect-profiles-29421390-2f5q2\" (UID: \"59978ade-1525-4f29-908f-026970955862\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421390-2f5q2" Dec 09 12:30:00 crc 
kubenswrapper[4970]: I1209 12:30:00.395791 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59978ade-1525-4f29-908f-026970955862-config-volume\") pod \"collect-profiles-29421390-2f5q2\" (UID: \"59978ade-1525-4f29-908f-026970955862\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421390-2f5q2" Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.407647 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57effba7-488e-4045-8da6-83bf8e4770d8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "57effba7-488e-4045-8da6-83bf8e4770d8" (UID: "57effba7-488e-4045-8da6-83bf8e4770d8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.407750 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57effba7-488e-4045-8da6-83bf8e4770d8-kube-api-access-8dzm5" (OuterVolumeSpecName: "kube-api-access-8dzm5") pod "57effba7-488e-4045-8da6-83bf8e4770d8" (UID: "57effba7-488e-4045-8da6-83bf8e4770d8"). InnerVolumeSpecName "kube-api-access-8dzm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.447330 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57effba7-488e-4045-8da6-83bf8e4770d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57effba7-488e-4045-8da6-83bf8e4770d8" (UID: "57effba7-488e-4045-8da6-83bf8e4770d8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.497941 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2587s\" (UniqueName: \"kubernetes.io/projected/59978ade-1525-4f29-908f-026970955862-kube-api-access-2587s\") pod \"collect-profiles-29421390-2f5q2\" (UID: \"59978ade-1525-4f29-908f-026970955862\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421390-2f5q2" Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.498081 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/59978ade-1525-4f29-908f-026970955862-secret-volume\") pod \"collect-profiles-29421390-2f5q2\" (UID: \"59978ade-1525-4f29-908f-026970955862\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421390-2f5q2" Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.498198 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59978ade-1525-4f29-908f-026970955862-config-volume\") pod \"collect-profiles-29421390-2f5q2\" (UID: \"59978ade-1525-4f29-908f-026970955862\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421390-2f5q2" Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.498442 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dzm5\" (UniqueName: \"kubernetes.io/projected/57effba7-488e-4045-8da6-83bf8e4770d8-kube-api-access-8dzm5\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.498460 4970 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/57effba7-488e-4045-8da6-83bf8e4770d8-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.498473 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57effba7-488e-4045-8da6-83bf8e4770d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.499580 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59978ade-1525-4f29-908f-026970955862-config-volume\") pod \"collect-profiles-29421390-2f5q2\" (UID: \"59978ade-1525-4f29-908f-026970955862\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421390-2f5q2" Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.501399 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57effba7-488e-4045-8da6-83bf8e4770d8-config-data" (OuterVolumeSpecName: "config-data") pod "57effba7-488e-4045-8da6-83bf8e4770d8" (UID: "57effba7-488e-4045-8da6-83bf8e4770d8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.515767 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/59978ade-1525-4f29-908f-026970955862-secret-volume\") pod \"collect-profiles-29421390-2f5q2\" (UID: \"59978ade-1525-4f29-908f-026970955862\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421390-2f5q2" Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.554490 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2587s\" (UniqueName: \"kubernetes.io/projected/59978ade-1525-4f29-908f-026970955862-kube-api-access-2587s\") pod \"collect-profiles-29421390-2f5q2\" (UID: \"59978ade-1525-4f29-908f-026970955862\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421390-2f5q2" Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.602831 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421390-2f5q2" Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.606300 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57effba7-488e-4045-8da6-83bf8e4770d8-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:00 crc kubenswrapper[4970]: I1209 12:30:00.854068 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-3084-account-create-update-jgbjt" Dec 09 12:30:01 crc kubenswrapper[4970]: I1209 12:30:01.021073 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9e3f275-d39e-4777-a04a-ce4b2a642952-operator-scripts\") pod \"b9e3f275-d39e-4777-a04a-ce4b2a642952\" (UID: \"b9e3f275-d39e-4777-a04a-ce4b2a642952\") " Dec 09 12:30:01 crc kubenswrapper[4970]: I1209 12:30:01.021206 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stz8j\" (UniqueName: \"kubernetes.io/projected/b9e3f275-d39e-4777-a04a-ce4b2a642952-kube-api-access-stz8j\") pod \"b9e3f275-d39e-4777-a04a-ce4b2a642952\" (UID: \"b9e3f275-d39e-4777-a04a-ce4b2a642952\") " Dec 09 12:30:01 crc kubenswrapper[4970]: I1209 12:30:01.023413 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9e3f275-d39e-4777-a04a-ce4b2a642952-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b9e3f275-d39e-4777-a04a-ce4b2a642952" (UID: "b9e3f275-d39e-4777-a04a-ce4b2a642952"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:30:01 crc kubenswrapper[4970]: I1209 12:30:01.056801 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9e3f275-d39e-4777-a04a-ce4b2a642952-kube-api-access-stz8j" (OuterVolumeSpecName: "kube-api-access-stz8j") pod "b9e3f275-d39e-4777-a04a-ce4b2a642952" (UID: "b9e3f275-d39e-4777-a04a-ce4b2a642952"). InnerVolumeSpecName "kube-api-access-stz8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:30:01 crc kubenswrapper[4970]: I1209 12:30:01.123826 4970 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b9e3f275-d39e-4777-a04a-ce4b2a642952-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:01 crc kubenswrapper[4970]: I1209 12:30:01.123859 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stz8j\" (UniqueName: \"kubernetes.io/projected/b9e3f275-d39e-4777-a04a-ce4b2a642952-kube-api-access-stz8j\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:01 crc kubenswrapper[4970]: I1209 12:30:01.257428 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-3084-account-create-update-jgbjt" event={"ID":"b9e3f275-d39e-4777-a04a-ce4b2a642952","Type":"ContainerDied","Data":"13db9f10db71927c2f73239f5a9e48c45ed4ff5ae8f5b85fdf9812bc39056926"} Dec 09 12:30:01 crc kubenswrapper[4970]: I1209 12:30:01.267553 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13db9f10db71927c2f73239f5a9e48c45ed4ff5ae8f5b85fdf9812bc39056926" Dec 09 12:30:01 crc kubenswrapper[4970]: I1209 12:30:01.267703 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-3084-account-create-update-jgbjt" Dec 09 12:30:01 crc kubenswrapper[4970]: I1209 12:30:01.298640 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-8655ffb758-nbr8w" Dec 09 12:30:01 crc kubenswrapper[4970]: I1209 12:30:01.298732 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-8655ffb758-nbr8w" event={"ID":"57effba7-488e-4045-8da6-83bf8e4770d8","Type":"ContainerDied","Data":"7b69d176d4f659a9e9f34515e3239b94d1a0a544f6a3630d1f924eb38c323cef"} Dec 09 12:30:01 crc kubenswrapper[4970]: I1209 12:30:01.298819 4970 scope.go:117] "RemoveContainer" containerID="f0b60d4a0ffb89f007bb9256d9a207a727903ac5d509389e94e76d44b48b91f4" Dec 09 12:30:01 crc kubenswrapper[4970]: I1209 12:30:01.344653 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-z8cw7" Dec 09 12:30:01 crc kubenswrapper[4970]: I1209 12:30:01.441920 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-thg22" Dec 09 12:30:01 crc kubenswrapper[4970]: I1209 12:30:01.451181 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/277c3243-bbe4-436e-a850-3619bcecc42a-operator-scripts\") pod \"277c3243-bbe4-436e-a850-3619bcecc42a\" (UID: \"277c3243-bbe4-436e-a850-3619bcecc42a\") " Dec 09 12:30:01 crc kubenswrapper[4970]: I1209 12:30:01.451309 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-598xk\" (UniqueName: \"kubernetes.io/projected/f6427604-e1a0-4853-bd35-71a69164978f-kube-api-access-598xk\") pod \"f6427604-e1a0-4853-bd35-71a69164978f\" (UID: \"f6427604-e1a0-4853-bd35-71a69164978f\") " Dec 09 12:30:01 crc kubenswrapper[4970]: I1209 12:30:01.451415 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkhvw\" (UniqueName: \"kubernetes.io/projected/277c3243-bbe4-436e-a850-3619bcecc42a-kube-api-access-wkhvw\") pod \"277c3243-bbe4-436e-a850-3619bcecc42a\" (UID: \"277c3243-bbe4-436e-a850-3619bcecc42a\") " Dec 09 12:30:01 crc kubenswrapper[4970]: I1209 12:30:01.451446 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6427604-e1a0-4853-bd35-71a69164978f-operator-scripts\") pod \"f6427604-e1a0-4853-bd35-71a69164978f\" (UID: \"f6427604-e1a0-4853-bd35-71a69164978f\") " Dec 09 12:30:01 crc kubenswrapper[4970]: I1209 12:30:01.455238 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6427604-e1a0-4853-bd35-71a69164978f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f6427604-e1a0-4853-bd35-71a69164978f" (UID: "f6427604-e1a0-4853-bd35-71a69164978f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:30:01 crc kubenswrapper[4970]: I1209 12:30:01.455316 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/277c3243-bbe4-436e-a850-3619bcecc42a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "277c3243-bbe4-436e-a850-3619bcecc42a" (UID: "277c3243-bbe4-436e-a850-3619bcecc42a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:30:01 crc kubenswrapper[4970]: I1209 12:30:01.455800 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6427604-e1a0-4853-bd35-71a69164978f-kube-api-access-598xk" (OuterVolumeSpecName: "kube-api-access-598xk") pod "f6427604-e1a0-4853-bd35-71a69164978f" (UID: "f6427604-e1a0-4853-bd35-71a69164978f"). InnerVolumeSpecName "kube-api-access-598xk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:30:01 crc kubenswrapper[4970]: I1209 12:30:01.462357 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-8655ffb758-nbr8w"] Dec 09 12:30:01 crc kubenswrapper[4970]: I1209 12:30:01.464812 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/277c3243-bbe4-436e-a850-3619bcecc42a-kube-api-access-wkhvw" (OuterVolumeSpecName: "kube-api-access-wkhvw") pod "277c3243-bbe4-436e-a850-3619bcecc42a" (UID: "277c3243-bbe4-436e-a850-3619bcecc42a"). InnerVolumeSpecName "kube-api-access-wkhvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:30:01 crc kubenswrapper[4970]: I1209 12:30:01.561596 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-8655ffb758-nbr8w"] Dec 09 12:30:01 crc kubenswrapper[4970]: I1209 12:30:01.571375 4970 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/277c3243-bbe4-436e-a850-3619bcecc42a-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:01 crc kubenswrapper[4970]: I1209 12:30:01.571438 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-598xk\" (UniqueName: \"kubernetes.io/projected/f6427604-e1a0-4853-bd35-71a69164978f-kube-api-access-598xk\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:01 crc kubenswrapper[4970]: I1209 12:30:01.571456 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkhvw\" (UniqueName: \"kubernetes.io/projected/277c3243-bbe4-436e-a850-3619bcecc42a-kube-api-access-wkhvw\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:01 crc kubenswrapper[4970]: I1209 12:30:01.571477 4970 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6427604-e1a0-4853-bd35-71a69164978f-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:01 crc kubenswrapper[4970]: I1209 12:30:01.850163 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5179fa9e-e5d0-4665-8356-ca6026fe2d64" path="/var/lib/kubelet/pods/5179fa9e-e5d0-4665-8356-ca6026fe2d64/volumes" Dec 09 12:30:01 crc kubenswrapper[4970]: I1209 12:30:01.851097 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57effba7-488e-4045-8da6-83bf8e4770d8" path="/var/lib/kubelet/pods/57effba7-488e-4045-8da6-83bf8e4770d8/volumes" Dec 09 12:30:01 crc kubenswrapper[4970]: I1209 12:30:01.853486 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-f4mbj" Dec 09 12:30:01 crc kubenswrapper[4970]: I1209 12:30:01.864871 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-ce27-account-create-update-5mndj" Dec 09 12:30:01 crc kubenswrapper[4970]: I1209 12:30:01.982731 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9pv8\" (UniqueName: \"kubernetes.io/projected/213b2a4a-6575-4462-8637-09491c390553-kube-api-access-j9pv8\") pod \"213b2a4a-6575-4462-8637-09491c390553\" (UID: \"213b2a4a-6575-4462-8637-09491c390553\") " Dec 09 12:30:01 crc kubenswrapper[4970]: I1209 12:30:01.983001 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/213b2a4a-6575-4462-8637-09491c390553-operator-scripts\") pod \"213b2a4a-6575-4462-8637-09491c390553\" (UID: \"213b2a4a-6575-4462-8637-09491c390553\") " Dec 09 12:30:01 crc kubenswrapper[4970]: I1209 12:30:01.983084 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87f97ec0-b4d2-4a39-9fc2-dbe1a5791c17-operator-scripts\") pod \"87f97ec0-b4d2-4a39-9fc2-dbe1a5791c17\" (UID: \"87f97ec0-b4d2-4a39-9fc2-dbe1a5791c17\") " Dec 09 12:30:01 crc kubenswrapper[4970]: I1209 12:30:01.983153 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsnsj\" (UniqueName: \"kubernetes.io/projected/87f97ec0-b4d2-4a39-9fc2-dbe1a5791c17-kube-api-access-wsnsj\") pod \"87f97ec0-b4d2-4a39-9fc2-dbe1a5791c17\" (UID: \"87f97ec0-b4d2-4a39-9fc2-dbe1a5791c17\") " Dec 09 12:30:01 crc kubenswrapper[4970]: I1209 12:30:01.984692 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/213b2a4a-6575-4462-8637-09491c390553-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "213b2a4a-6575-4462-8637-09491c390553" (UID: "213b2a4a-6575-4462-8637-09491c390553"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:30:01 crc kubenswrapper[4970]: I1209 12:30:01.985151 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87f97ec0-b4d2-4a39-9fc2-dbe1a5791c17-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "87f97ec0-b4d2-4a39-9fc2-dbe1a5791c17" (UID: "87f97ec0-b4d2-4a39-9fc2-dbe1a5791c17"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:30:01 crc kubenswrapper[4970]: I1209 12:30:01.992108 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/213b2a4a-6575-4462-8637-09491c390553-kube-api-access-j9pv8" (OuterVolumeSpecName: "kube-api-access-j9pv8") pod "213b2a4a-6575-4462-8637-09491c390553" (UID: "213b2a4a-6575-4462-8637-09491c390553"). InnerVolumeSpecName "kube-api-access-j9pv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:30:01 crc kubenswrapper[4970]: I1209 12:30:01.994084 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87f97ec0-b4d2-4a39-9fc2-dbe1a5791c17-kube-api-access-wsnsj" (OuterVolumeSpecName: "kube-api-access-wsnsj") pod "87f97ec0-b4d2-4a39-9fc2-dbe1a5791c17" (UID: "87f97ec0-b4d2-4a39-9fc2-dbe1a5791c17"). InnerVolumeSpecName "kube-api-access-wsnsj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:30:02 crc kubenswrapper[4970]: I1209 12:30:02.091543 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9pv8\" (UniqueName: \"kubernetes.io/projected/213b2a4a-6575-4462-8637-09491c390553-kube-api-access-j9pv8\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:02 crc kubenswrapper[4970]: I1209 12:30:02.091580 4970 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/213b2a4a-6575-4462-8637-09491c390553-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:02 crc kubenswrapper[4970]: I1209 12:30:02.091594 4970 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87f97ec0-b4d2-4a39-9fc2-dbe1a5791c17-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:02 crc kubenswrapper[4970]: I1209 12:30:02.091607 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsnsj\" (UniqueName: \"kubernetes.io/projected/87f97ec0-b4d2-4a39-9fc2-dbe1a5791c17-kube-api-access-wsnsj\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:02 crc kubenswrapper[4970]: I1209 12:30:02.142613 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421390-2f5q2"] Dec 09 12:30:02 crc kubenswrapper[4970]: I1209 12:30:02.199989 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c370-account-create-update-kwdtv" Dec 09 12:30:02 crc kubenswrapper[4970]: I1209 12:30:02.320946 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c370-account-create-update-kwdtv" Dec 09 12:30:02 crc kubenswrapper[4970]: I1209 12:30:02.320956 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c370-account-create-update-kwdtv" event={"ID":"5375400c-350f-41f8-83ff-94071d6cc869","Type":"ContainerDied","Data":"0ed5ee7cc1313741fc3bf9af2b11091c3e1803e41e97352d1bd1a35cfacb9af0"} Dec 09 12:30:02 crc kubenswrapper[4970]: I1209 12:30:02.320987 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ed5ee7cc1313741fc3bf9af2b11091c3e1803e41e97352d1bd1a35cfacb9af0" Dec 09 12:30:02 crc kubenswrapper[4970]: I1209 12:30:02.326713 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"26afaabd-9309-47db-a9fd-282425d0c44e","Type":"ContainerStarted","Data":"e68dde768439ae51008a8bcd613ca1842c06972b434dd4c67a14b8921d8061f4"} Dec 09 12:30:02 crc kubenswrapper[4970]: I1209 12:30:02.334358 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-ce27-account-create-update-5mndj" Dec 09 12:30:02 crc kubenswrapper[4970]: I1209 12:30:02.334381 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-ce27-account-create-update-5mndj" event={"ID":"87f97ec0-b4d2-4a39-9fc2-dbe1a5791c17","Type":"ContainerDied","Data":"4b56e22afa5670df6a29f3bfaca780e66412f95a521feb309b86a12e8ecee51a"} Dec 09 12:30:02 crc kubenswrapper[4970]: I1209 12:30:02.334428 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b56e22afa5670df6a29f3bfaca780e66412f95a521feb309b86a12e8ecee51a" Dec 09 12:30:02 crc kubenswrapper[4970]: I1209 12:30:02.348895 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421390-2f5q2" event={"ID":"59978ade-1525-4f29-908f-026970955862","Type":"ContainerStarted","Data":"a2615271279e1176418d54a1df52e8da9ec7cfa34206f37b82f44d43cfb70ae2"} Dec 09 12:30:02 crc kubenswrapper[4970]: I1209 12:30:02.349804 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.33068979 podStartE2EDuration="35.349785521s" podCreationTimestamp="2025-12-09 12:29:27 +0000 UTC" firstStartedPulling="2025-12-09 12:29:28.499000691 +0000 UTC m=+1381.059481742" lastFinishedPulling="2025-12-09 12:30:01.518096422 +0000 UTC m=+1414.078577473" observedRunningTime="2025-12-09 12:30:02.348668261 +0000 UTC m=+1414.909149322" watchObservedRunningTime="2025-12-09 12:30:02.349785521 +0000 UTC m=+1414.910266572" Dec 09 12:30:02 crc kubenswrapper[4970]: I1209 12:30:02.354324 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-z8cw7" event={"ID":"f6427604-e1a0-4853-bd35-71a69164978f","Type":"ContainerDied","Data":"e505cd9be1c6020f11f60332a6de7e0bc5ea725a3e7032a1f53b878a280b3782"} Dec 09 12:30:02 crc kubenswrapper[4970]: I1209 12:30:02.354357 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e505cd9be1c6020f11f60332a6de7e0bc5ea725a3e7032a1f53b878a280b3782" Dec 09 12:30:02 crc kubenswrapper[4970]: I1209 12:30:02.354430 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-z8cw7" Dec 09 12:30:02 crc kubenswrapper[4970]: I1209 12:30:02.371075 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-thg22" event={"ID":"277c3243-bbe4-436e-a850-3619bcecc42a","Type":"ContainerDied","Data":"b4441a54b50ece18396433fb2e6e5c2feb862a40bc4930089647167022af84df"} Dec 09 12:30:02 crc kubenswrapper[4970]: I1209 12:30:02.371109 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4441a54b50ece18396433fb2e6e5c2feb862a40bc4930089647167022af84df" Dec 09 12:30:02 crc kubenswrapper[4970]: I1209 12:30:02.371180 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-thg22" Dec 09 12:30:02 crc kubenswrapper[4970]: I1209 12:30:02.391940 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-f4mbj" event={"ID":"213b2a4a-6575-4462-8637-09491c390553","Type":"ContainerDied","Data":"94640e9dc825bfa9db37ddff7fc049d305b286967fa83bff389fe73bb893196a"} Dec 09 12:30:02 crc kubenswrapper[4970]: I1209 12:30:02.391979 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94640e9dc825bfa9db37ddff7fc049d305b286967fa83bff389fe73bb893196a" Dec 09 12:30:02 crc kubenswrapper[4970]: I1209 12:30:02.392038 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-f4mbj" Dec 09 12:30:02 crc kubenswrapper[4970]: I1209 12:30:02.398087 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44bkv\" (UniqueName: \"kubernetes.io/projected/5375400c-350f-41f8-83ff-94071d6cc869-kube-api-access-44bkv\") pod \"5375400c-350f-41f8-83ff-94071d6cc869\" (UID: \"5375400c-350f-41f8-83ff-94071d6cc869\") " Dec 09 12:30:02 crc kubenswrapper[4970]: I1209 12:30:02.398131 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5375400c-350f-41f8-83ff-94071d6cc869-operator-scripts\") pod \"5375400c-350f-41f8-83ff-94071d6cc869\" (UID: \"5375400c-350f-41f8-83ff-94071d6cc869\") " Dec 09 12:30:02 crc kubenswrapper[4970]: I1209 12:30:02.404585 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5375400c-350f-41f8-83ff-94071d6cc869-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5375400c-350f-41f8-83ff-94071d6cc869" (UID: "5375400c-350f-41f8-83ff-94071d6cc869"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:30:02 crc kubenswrapper[4970]: I1209 12:30:02.409590 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5375400c-350f-41f8-83ff-94071d6cc869-kube-api-access-44bkv" (OuterVolumeSpecName: "kube-api-access-44bkv") pod "5375400c-350f-41f8-83ff-94071d6cc869" (UID: "5375400c-350f-41f8-83ff-94071d6cc869"). InnerVolumeSpecName "kube-api-access-44bkv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:30:02 crc kubenswrapper[4970]: I1209 12:30:02.500493 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44bkv\" (UniqueName: \"kubernetes.io/projected/5375400c-350f-41f8-83ff-94071d6cc869-kube-api-access-44bkv\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:02 crc kubenswrapper[4970]: I1209 12:30:02.500780 4970 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5375400c-350f-41f8-83ff-94071d6cc869-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:02 crc kubenswrapper[4970]: E1209 12:30:02.698279 4970 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a715dc7aa464f31deadee24e154cd31a9cd1a9d73088b71939fc1e477617749e" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Dec 09 12:30:02 crc kubenswrapper[4970]: E1209 12:30:02.708665 4970 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a715dc7aa464f31deadee24e154cd31a9cd1a9d73088b71939fc1e477617749e" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Dec 09 12:30:02 crc kubenswrapper[4970]: E1209 12:30:02.710667 4970 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a715dc7aa464f31deadee24e154cd31a9cd1a9d73088b71939fc1e477617749e" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Dec 09 12:30:02 crc kubenswrapper[4970]: E1209 12:30:02.710746 4970 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-8679c95d76-9dntl" podUID="dec002e4-c154-4c9b-8e5c-391bd2c2ce8a" containerName="heat-engine" Dec 09 12:30:02 crc kubenswrapper[4970]: I1209 12:30:02.764233 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-6f965965cd-jt4tk" Dec 09 12:30:02 crc kubenswrapper[4970]: I1209 12:30:02.860015 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-8ff4d7fb5-kwj5g"] Dec 09 12:30:03 crc kubenswrapper[4970]: I1209 12:30:03.428046 4970 generic.go:334] "Generic (PLEG): container finished" podID="59978ade-1525-4f29-908f-026970955862" containerID="1a3152fd12a89b1ff62feb46e9c0713323d18ae205387f4679e6d5a4dc1e77cb" exitCode=0 Dec 09 12:30:03 crc kubenswrapper[4970]: I1209 12:30:03.428158 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421390-2f5q2" event={"ID":"59978ade-1525-4f29-908f-026970955862","Type":"ContainerDied","Data":"1a3152fd12a89b1ff62feb46e9c0713323d18ae205387f4679e6d5a4dc1e77cb"} Dec 09 12:30:03 crc kubenswrapper[4970]: I1209 12:30:03.430903 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8ff4d7fb5-kwj5g" event={"ID":"83510120-26cc-4c80-be70-788fc8d78ba2","Type":"ContainerDied","Data":"1934901639e76c7b4339d909d5dca9a97f32917cabc57cc24aa40bc4d483a072"} Dec 09 12:30:03 crc kubenswrapper[4970]: I1209 12:30:03.430938 4970 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="1934901639e76c7b4339d909d5dca9a97f32917cabc57cc24aa40bc4d483a072" Dec 09 12:30:03 crc kubenswrapper[4970]: I1209 12:30:03.463277 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-8ff4d7fb5-kwj5g" Dec 09 12:30:03 crc kubenswrapper[4970]: I1209 12:30:03.469977 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83510120-26cc-4c80-be70-788fc8d78ba2-combined-ca-bundle\") pod \"83510120-26cc-4c80-be70-788fc8d78ba2\" (UID: \"83510120-26cc-4c80-be70-788fc8d78ba2\") " Dec 09 12:30:03 crc kubenswrapper[4970]: I1209 12:30:03.470133 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84qk6\" (UniqueName: \"kubernetes.io/projected/83510120-26cc-4c80-be70-788fc8d78ba2-kube-api-access-84qk6\") pod \"83510120-26cc-4c80-be70-788fc8d78ba2\" (UID: \"83510120-26cc-4c80-be70-788fc8d78ba2\") " Dec 09 12:30:03 crc kubenswrapper[4970]: I1209 12:30:03.470287 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83510120-26cc-4c80-be70-788fc8d78ba2-config-data-custom\") pod \"83510120-26cc-4c80-be70-788fc8d78ba2\" (UID: \"83510120-26cc-4c80-be70-788fc8d78ba2\") " Dec 09 12:30:03 crc kubenswrapper[4970]: I1209 12:30:03.470326 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83510120-26cc-4c80-be70-788fc8d78ba2-config-data\") pod \"83510120-26cc-4c80-be70-788fc8d78ba2\" (UID: \"83510120-26cc-4c80-be70-788fc8d78ba2\") " Dec 09 12:30:03 crc kubenswrapper[4970]: I1209 12:30:03.491029 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83510120-26cc-4c80-be70-788fc8d78ba2-kube-api-access-84qk6" (OuterVolumeSpecName: "kube-api-access-84qk6") pod "83510120-26cc-4c80-be70-788fc8d78ba2" (UID: "83510120-26cc-4c80-be70-788fc8d78ba2"). InnerVolumeSpecName "kube-api-access-84qk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:30:03 crc kubenswrapper[4970]: I1209 12:30:03.497385 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83510120-26cc-4c80-be70-788fc8d78ba2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "83510120-26cc-4c80-be70-788fc8d78ba2" (UID: "83510120-26cc-4c80-be70-788fc8d78ba2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:30:03 crc kubenswrapper[4970]: I1209 12:30:03.563616 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83510120-26cc-4c80-be70-788fc8d78ba2-config-data" (OuterVolumeSpecName: "config-data") pod "83510120-26cc-4c80-be70-788fc8d78ba2" (UID: "83510120-26cc-4c80-be70-788fc8d78ba2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:30:03 crc kubenswrapper[4970]: I1209 12:30:03.572617 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84qk6\" (UniqueName: \"kubernetes.io/projected/83510120-26cc-4c80-be70-788fc8d78ba2-kube-api-access-84qk6\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:03 crc kubenswrapper[4970]: I1209 12:30:03.572655 4970 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83510120-26cc-4c80-be70-788fc8d78ba2-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:03 crc kubenswrapper[4970]: I1209 12:30:03.572667 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83510120-26cc-4c80-be70-788fc8d78ba2-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:03 crc kubenswrapper[4970]: I1209 12:30:03.578616 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83510120-26cc-4c80-be70-788fc8d78ba2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83510120-26cc-4c80-be70-788fc8d78ba2" (UID: "83510120-26cc-4c80-be70-788fc8d78ba2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:30:03 crc kubenswrapper[4970]: I1209 12:30:03.678376 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83510120-26cc-4c80-be70-788fc8d78ba2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:03 crc kubenswrapper[4970]: I1209 12:30:03.831559 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 12:30:03 crc kubenswrapper[4970]: I1209 12:30:03.831812 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0b28cb7f-3918-4f2d-ba79-21503540a126" containerName="glance-log" containerID="cri-o://4219a62cfad34d1ae3a86959f52cac5bb37112c707c4a6863f78678a997f452f" gracePeriod=30 Dec 09 12:30:03 crc kubenswrapper[4970]: I1209 12:30:03.834259 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0b28cb7f-3918-4f2d-ba79-21503540a126" containerName="glance-httpd" containerID="cri-o://7f76c34720835c9cf6966f4890b4c1b12dbdf9fa01d95785d94d81788022bbbc" gracePeriod=30 Dec 09 12:30:04 crc kubenswrapper[4970]: I1209 12:30:04.443336 4970 generic.go:334] "Generic (PLEG): container finished" podID="0b28cb7f-3918-4f2d-ba79-21503540a126" containerID="4219a62cfad34d1ae3a86959f52cac5bb37112c707c4a6863f78678a997f452f" exitCode=143 Dec 09 12:30:04 crc kubenswrapper[4970]: I1209 12:30:04.443414 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0b28cb7f-3918-4f2d-ba79-21503540a126","Type":"ContainerDied","Data":"4219a62cfad34d1ae3a86959f52cac5bb37112c707c4a6863f78678a997f452f"} Dec 09 12:30:04 crc kubenswrapper[4970]: I1209 12:30:04.443434 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-8ff4d7fb5-kwj5g" Dec 09 12:30:04 crc kubenswrapper[4970]: I1209 12:30:04.485535 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-8ff4d7fb5-kwj5g"] Dec 09 12:30:04 crc kubenswrapper[4970]: I1209 12:30:04.499353 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-8ff4d7fb5-kwj5g"] Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.033507 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-m8kxt"] Dec 09 12:30:05 crc kubenswrapper[4970]: E1209 12:30:05.034018 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9e3f275-d39e-4777-a04a-ce4b2a642952" containerName="mariadb-account-create-update" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.034031 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9e3f275-d39e-4777-a04a-ce4b2a642952" containerName="mariadb-account-create-update" Dec 09 12:30:05 crc kubenswrapper[4970]: E1209 12:30:05.034050 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5375400c-350f-41f8-83ff-94071d6cc869" containerName="mariadb-account-create-update" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.034058 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="5375400c-350f-41f8-83ff-94071d6cc869" containerName="mariadb-account-create-update" Dec 09 12:30:05 crc kubenswrapper[4970]: E1209 12:30:05.034078 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f97ec0-b4d2-4a39-9fc2-dbe1a5791c17" containerName="mariadb-account-create-update" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.034086 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f97ec0-b4d2-4a39-9fc2-dbe1a5791c17" containerName="mariadb-account-create-update" Dec 09 12:30:05 crc kubenswrapper[4970]: E1209 12:30:05.034101 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83510120-26cc-4c80-be70-788fc8d78ba2" containerName="heat-api" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.034107 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="83510120-26cc-4c80-be70-788fc8d78ba2" containerName="heat-api" Dec 09 12:30:05 crc kubenswrapper[4970]: E1209 12:30:05.034119 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6427604-e1a0-4853-bd35-71a69164978f" containerName="mariadb-database-create" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.034124 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6427604-e1a0-4853-bd35-71a69164978f" containerName="mariadb-database-create" Dec 09 12:30:05 crc kubenswrapper[4970]: E1209 12:30:05.034134 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="213b2a4a-6575-4462-8637-09491c390553" containerName="mariadb-database-create" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.034140 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="213b2a4a-6575-4462-8637-09491c390553" containerName="mariadb-database-create" Dec 09 12:30:05 crc kubenswrapper[4970]: E1209 12:30:05.034151 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83510120-26cc-4c80-be70-788fc8d78ba2" containerName="heat-api" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.034157 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="83510120-26cc-4c80-be70-788fc8d78ba2" containerName="heat-api" Dec 09 12:30:05 crc kubenswrapper[4970]: E1209 12:30:05.034179 4970 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="277c3243-bbe4-436e-a850-3619bcecc42a" containerName="mariadb-database-create" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.034185 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="277c3243-bbe4-436e-a850-3619bcecc42a" containerName="mariadb-database-create" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.034468 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6427604-e1a0-4853-bd35-71a69164978f" containerName="mariadb-database-create" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.034488 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="5375400c-350f-41f8-83ff-94071d6cc869" containerName="mariadb-account-create-update" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.034500 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f97ec0-b4d2-4a39-9fc2-dbe1a5791c17" containerName="mariadb-account-create-update" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.034513 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9e3f275-d39e-4777-a04a-ce4b2a642952" containerName="mariadb-account-create-update" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.034524 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="83510120-26cc-4c80-be70-788fc8d78ba2" containerName="heat-api" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.034536 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="83510120-26cc-4c80-be70-788fc8d78ba2" containerName="heat-api" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.034548 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="277c3243-bbe4-436e-a850-3619bcecc42a" containerName="mariadb-database-create" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.034557 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="213b2a4a-6575-4462-8637-09491c390553" containerName="mariadb-database-create" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.035302 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-m8kxt" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.049029 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-vrvng" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.049697 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.050197 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.073320 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-m8kxt"] Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.123019 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqzfk\" (UniqueName: \"kubernetes.io/projected/92e5909b-9f51-4a80-824b-b633efbed63b-kube-api-access-zqzfk\") pod \"nova-cell0-conductor-db-sync-m8kxt\" (UID: \"92e5909b-9f51-4a80-824b-b633efbed63b\") " pod="openstack/nova-cell0-conductor-db-sync-m8kxt" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.123105 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92e5909b-9f51-4a80-824b-b633efbed63b-config-data\") pod \"nova-cell0-conductor-db-sync-m8kxt\" (UID: \"92e5909b-9f51-4a80-824b-b633efbed63b\") " pod="openstack/nova-cell0-conductor-db-sync-m8kxt" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.123207 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92e5909b-9f51-4a80-824b-b633efbed63b-scripts\") pod \"nova-cell0-conductor-db-sync-m8kxt\" (UID: \"92e5909b-9f51-4a80-824b-b633efbed63b\") " pod="openstack/nova-cell0-conductor-db-sync-m8kxt" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.123444 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92e5909b-9f51-4a80-824b-b633efbed63b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-m8kxt\" (UID: \"92e5909b-9f51-4a80-824b-b633efbed63b\") " pod="openstack/nova-cell0-conductor-db-sync-m8kxt" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.225954 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92e5909b-9f51-4a80-824b-b633efbed63b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-m8kxt\" (UID: \"92e5909b-9f51-4a80-824b-b633efbed63b\") " pod="openstack/nova-cell0-conductor-db-sync-m8kxt" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.226062 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqzfk\" (UniqueName: \"kubernetes.io/projected/92e5909b-9f51-4a80-824b-b633efbed63b-kube-api-access-zqzfk\") pod \"nova-cell0-conductor-db-sync-m8kxt\" (UID: \"92e5909b-9f51-4a80-824b-b633efbed63b\") " pod="openstack/nova-cell0-conductor-db-sync-m8kxt" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.226145 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92e5909b-9f51-4a80-824b-b633efbed63b-config-data\") pod \"nova-cell0-conductor-db-sync-m8kxt\" 
(UID: \"92e5909b-9f51-4a80-824b-b633efbed63b\") " pod="openstack/nova-cell0-conductor-db-sync-m8kxt" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.226236 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92e5909b-9f51-4a80-824b-b633efbed63b-scripts\") pod \"nova-cell0-conductor-db-sync-m8kxt\" (UID: \"92e5909b-9f51-4a80-824b-b633efbed63b\") " pod="openstack/nova-cell0-conductor-db-sync-m8kxt" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.233304 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92e5909b-9f51-4a80-824b-b633efbed63b-config-data\") pod \"nova-cell0-conductor-db-sync-m8kxt\" (UID: \"92e5909b-9f51-4a80-824b-b633efbed63b\") " pod="openstack/nova-cell0-conductor-db-sync-m8kxt" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.236989 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92e5909b-9f51-4a80-824b-b633efbed63b-scripts\") pod \"nova-cell0-conductor-db-sync-m8kxt\" (UID: \"92e5909b-9f51-4a80-824b-b633efbed63b\") " pod="openstack/nova-cell0-conductor-db-sync-m8kxt" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.237813 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92e5909b-9f51-4a80-824b-b633efbed63b-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-m8kxt\" (UID: \"92e5909b-9f51-4a80-824b-b633efbed63b\") " pod="openstack/nova-cell0-conductor-db-sync-m8kxt" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.249753 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqzfk\" (UniqueName: \"kubernetes.io/projected/92e5909b-9f51-4a80-824b-b633efbed63b-kube-api-access-zqzfk\") pod \"nova-cell0-conductor-db-sync-m8kxt\" (UID: \"92e5909b-9f51-4a80-824b-b633efbed63b\") " pod="openstack/nova-cell0-conductor-db-sync-m8kxt" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.335851 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421390-2f5q2" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.381255 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-m8kxt" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.430881 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2587s\" (UniqueName: \"kubernetes.io/projected/59978ade-1525-4f29-908f-026970955862-kube-api-access-2587s\") pod \"59978ade-1525-4f29-908f-026970955862\" (UID: \"59978ade-1525-4f29-908f-026970955862\") " Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.430967 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/59978ade-1525-4f29-908f-026970955862-secret-volume\") pod \"59978ade-1525-4f29-908f-026970955862\" (UID: \"59978ade-1525-4f29-908f-026970955862\") " Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.431192 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59978ade-1525-4f29-908f-026970955862-config-volume\") pod \"59978ade-1525-4f29-908f-026970955862\" (UID: \"59978ade-1525-4f29-908f-026970955862\") " Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.432635 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59978ade-1525-4f29-908f-026970955862-config-volume" (OuterVolumeSpecName: "config-volume") pod "59978ade-1525-4f29-908f-026970955862" (UID: "59978ade-1525-4f29-908f-026970955862"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.451411 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59978ade-1525-4f29-908f-026970955862-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "59978ade-1525-4f29-908f-026970955862" (UID: "59978ade-1525-4f29-908f-026970955862"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.499917 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59978ade-1525-4f29-908f-026970955862-kube-api-access-2587s" (OuterVolumeSpecName: "kube-api-access-2587s") pod "59978ade-1525-4f29-908f-026970955862" (UID: "59978ade-1525-4f29-908f-026970955862"). InnerVolumeSpecName "kube-api-access-2587s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.509153 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421390-2f5q2" event={"ID":"59978ade-1525-4f29-908f-026970955862","Type":"ContainerDied","Data":"a2615271279e1176418d54a1df52e8da9ec7cfa34206f37b82f44d43cfb70ae2"} Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.509235 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2615271279e1176418d54a1df52e8da9ec7cfa34206f37b82f44d43cfb70ae2" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.509408 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421390-2f5q2" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.549152 4970 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59978ade-1525-4f29-908f-026970955862-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.549187 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2587s\" (UniqueName: \"kubernetes.io/projected/59978ade-1525-4f29-908f-026970955862-kube-api-access-2587s\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.549198 4970 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/59978ade-1525-4f29-908f-026970955862-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:05 crc kubenswrapper[4970]: I1209 12:30:05.847146 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83510120-26cc-4c80-be70-788fc8d78ba2" path="/var/lib/kubelet/pods/83510120-26cc-4c80-be70-788fc8d78ba2/volumes" Dec 09 12:30:06 crc kubenswrapper[4970]: I1209 12:30:06.280084 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 12:30:06 crc kubenswrapper[4970]: I1209 12:30:06.284339 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="35f7dc61-35aa-42a4-adc2-e96f07905cd0" containerName="glance-log" containerID="cri-o://371a1155897e81e46a278b04b0948efbb20ddde1b03332c7f16b608269f9b260" gracePeriod=30 Dec 09 12:30:06 crc kubenswrapper[4970]: I1209 12:30:06.284581 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="35f7dc61-35aa-42a4-adc2-e96f07905cd0" containerName="glance-httpd" containerID="cri-o://654f91d33fe33b1111abc302eac5953705bc2bf0cfdd87a9f0d7cf6a468c17e6" gracePeriod=30 Dec 09 12:30:06 crc kubenswrapper[4970]: I1209 12:30:06.421945 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-m8kxt"] Dec 09 12:30:06 crc kubenswrapper[4970]: I1209 12:30:06.535165 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-m8kxt" event={"ID":"92e5909b-9f51-4a80-824b-b633efbed63b","Type":"ContainerStarted","Data":"6df1286eead60e7e22637f20aa7d0cf97f0039a6a1379ad9ffe89479bcef8e2d"} Dec 09 12:30:06 crc kubenswrapper[4970]: I1209 12:30:06.549933 4970 generic.go:334] "Generic (PLEG): container finished" podID="35f7dc61-35aa-42a4-adc2-e96f07905cd0" containerID="371a1155897e81e46a278b04b0948efbb20ddde1b03332c7f16b608269f9b260" exitCode=143 Dec 09 12:30:06 crc kubenswrapper[4970]: I1209 12:30:06.549983 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"35f7dc61-35aa-42a4-adc2-e96f07905cd0","Type":"ContainerDied","Data":"371a1155897e81e46a278b04b0948efbb20ddde1b03332c7f16b608269f9b260"} Dec 09 12:30:07 crc kubenswrapper[4970]: I1209 12:30:07.562066 4970 generic.go:334] "Generic (PLEG): container finished" podID="0b28cb7f-3918-4f2d-ba79-21503540a126" containerID="7f76c34720835c9cf6966f4890b4c1b12dbdf9fa01d95785d94d81788022bbbc" exitCode=0 Dec 09 12:30:07 crc kubenswrapper[4970]: I1209 12:30:07.562393 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"0b28cb7f-3918-4f2d-ba79-21503540a126","Type":"ContainerDied","Data":"7f76c34720835c9cf6966f4890b4c1b12dbdf9fa01d95785d94d81788022bbbc"} Dec 09 12:30:07 crc kubenswrapper[4970]: I1209 12:30:07.708313 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 12:30:07 crc kubenswrapper[4970]: I1209 12:30:07.831619 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b28cb7f-3918-4f2d-ba79-21503540a126-combined-ca-bundle\") pod \"0b28cb7f-3918-4f2d-ba79-21503540a126\" (UID: \"0b28cb7f-3918-4f2d-ba79-21503540a126\") " Dec 09 12:30:07 crc kubenswrapper[4970]: I1209 12:30:07.831686 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b28cb7f-3918-4f2d-ba79-21503540a126-scripts\") pod \"0b28cb7f-3918-4f2d-ba79-21503540a126\" (UID: \"0b28cb7f-3918-4f2d-ba79-21503540a126\") " Dec 09 12:30:07 crc kubenswrapper[4970]: I1209 12:30:07.831820 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b28cb7f-3918-4f2d-ba79-21503540a126-public-tls-certs\") pod \"0b28cb7f-3918-4f2d-ba79-21503540a126\" (UID: \"0b28cb7f-3918-4f2d-ba79-21503540a126\") " Dec 09 12:30:07 crc kubenswrapper[4970]: I1209 12:30:07.831901 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b28cb7f-3918-4f2d-ba79-21503540a126-logs\") pod \"0b28cb7f-3918-4f2d-ba79-21503540a126\" (UID: \"0b28cb7f-3918-4f2d-ba79-21503540a126\") " Dec 09 12:30:07 crc kubenswrapper[4970]: I1209 12:30:07.831935 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0b28cb7f-3918-4f2d-ba79-21503540a126-httpd-run\") pod \"0b28cb7f-3918-4f2d-ba79-21503540a126\" (UID: \"0b28cb7f-3918-4f2d-ba79-21503540a126\") " Dec 09 12:30:07 crc kubenswrapper[4970]: I1209 12:30:07.831972 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"0b28cb7f-3918-4f2d-ba79-21503540a126\" (UID: \"0b28cb7f-3918-4f2d-ba79-21503540a126\") " Dec 09 12:30:07 crc kubenswrapper[4970]: I1209 12:30:07.832011 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pl24g\" (UniqueName: \"kubernetes.io/projected/0b28cb7f-3918-4f2d-ba79-21503540a126-kube-api-access-pl24g\") pod \"0b28cb7f-3918-4f2d-ba79-21503540a126\" (UID: \"0b28cb7f-3918-4f2d-ba79-21503540a126\") " Dec 09 12:30:07 crc kubenswrapper[4970]: I1209 12:30:07.832073 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b28cb7f-3918-4f2d-ba79-21503540a126-config-data\") pod \"0b28cb7f-3918-4f2d-ba79-21503540a126\" (UID: \"0b28cb7f-3918-4f2d-ba79-21503540a126\") " Dec 09 12:30:07 crc kubenswrapper[4970]: I1209 12:30:07.833906 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b28cb7f-3918-4f2d-ba79-21503540a126-logs" (OuterVolumeSpecName: "logs") pod "0b28cb7f-3918-4f2d-ba79-21503540a126" (UID: "0b28cb7f-3918-4f2d-ba79-21503540a126"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:30:07 crc kubenswrapper[4970]: I1209 12:30:07.837295 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b28cb7f-3918-4f2d-ba79-21503540a126-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0b28cb7f-3918-4f2d-ba79-21503540a126" (UID: "0b28cb7f-3918-4f2d-ba79-21503540a126"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:30:07 crc kubenswrapper[4970]: I1209 12:30:07.846550 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "0b28cb7f-3918-4f2d-ba79-21503540a126" (UID: "0b28cb7f-3918-4f2d-ba79-21503540a126"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 09 12:30:07 crc kubenswrapper[4970]: I1209 12:30:07.846564 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b28cb7f-3918-4f2d-ba79-21503540a126-kube-api-access-pl24g" (OuterVolumeSpecName: "kube-api-access-pl24g") pod "0b28cb7f-3918-4f2d-ba79-21503540a126" (UID: "0b28cb7f-3918-4f2d-ba79-21503540a126"). InnerVolumeSpecName "kube-api-access-pl24g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:30:07 crc kubenswrapper[4970]: I1209 12:30:07.855101 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b28cb7f-3918-4f2d-ba79-21503540a126-scripts" (OuterVolumeSpecName: "scripts") pod "0b28cb7f-3918-4f2d-ba79-21503540a126" (UID: "0b28cb7f-3918-4f2d-ba79-21503540a126"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:30:07 crc kubenswrapper[4970]: I1209 12:30:07.879536 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b28cb7f-3918-4f2d-ba79-21503540a126-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b28cb7f-3918-4f2d-ba79-21503540a126" (UID: "0b28cb7f-3918-4f2d-ba79-21503540a126"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:30:07 crc kubenswrapper[4970]: I1209 12:30:07.924373 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b28cb7f-3918-4f2d-ba79-21503540a126-config-data" (OuterVolumeSpecName: "config-data") pod "0b28cb7f-3918-4f2d-ba79-21503540a126" (UID: "0b28cb7f-3918-4f2d-ba79-21503540a126"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:30:07 crc kubenswrapper[4970]: I1209 12:30:07.937938 4970 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Dec 09 12:30:07 crc kubenswrapper[4970]: I1209 12:30:07.937978 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pl24g\" (UniqueName: \"kubernetes.io/projected/0b28cb7f-3918-4f2d-ba79-21503540a126-kube-api-access-pl24g\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:07 crc kubenswrapper[4970]: I1209 12:30:07.937988 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b28cb7f-3918-4f2d-ba79-21503540a126-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:07 crc kubenswrapper[4970]: I1209 12:30:07.937998 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b28cb7f-3918-4f2d-ba79-21503540a126-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:07 crc kubenswrapper[4970]: I1209 12:30:07.938007 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b28cb7f-3918-4f2d-ba79-21503540a126-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:07 crc kubenswrapper[4970]: I1209 12:30:07.938015 4970 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0b28cb7f-3918-4f2d-ba79-21503540a126-logs\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:07 crc kubenswrapper[4970]: I1209 12:30:07.938023 4970 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0b28cb7f-3918-4f2d-ba79-21503540a126-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.015377 4970 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.037543 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b28cb7f-3918-4f2d-ba79-21503540a126-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0b28cb7f-3918-4f2d-ba79-21503540a126" (UID: "0b28cb7f-3918-4f2d-ba79-21503540a126"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.042932 4970 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b28cb7f-3918-4f2d-ba79-21503540a126-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.042981 4970 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.321337 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.321813 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7fff75d8-7b4d-47e2-893f-a401f568394e" containerName="proxy-httpd" containerID="cri-o://f253eaa14920bda3599205efb1e162e90317348a041d45fefe549d348921dbbf" gracePeriod=30 Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.321873 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7fff75d8-7b4d-47e2-893f-a401f568394e" containerName="sg-core" containerID="cri-o://6c53aaafd678120e691690a282db068db212bd78bf4a76e938d9b79d293d9ab2" gracePeriod=30 Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.321992 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7fff75d8-7b4d-47e2-893f-a401f568394e" containerName="ceilometer-notification-agent" containerID="cri-o://7a3e5ce444149a467912ef666a2e05f3244e441af3f7870ed56f9b3fbb3811b9" gracePeriod=30 Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.322190 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7fff75d8-7b4d-47e2-893f-a401f568394e" containerName="ceilometer-central-agent" containerID="cri-o://2138dfc8702313c04c6c48ac32535d4748eaae83ee8af6235b20cf90ad836978" gracePeriod=30 Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.338502 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="7fff75d8-7b4d-47e2-893f-a401f568394e" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.577494 4970 generic.go:334] "Generic (PLEG): container finished" podID="7fff75d8-7b4d-47e2-893f-a401f568394e" containerID="f253eaa14920bda3599205efb1e162e90317348a041d45fefe549d348921dbbf" exitCode=0 Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.577820 4970 generic.go:334] "Generic (PLEG): container finished" podID="7fff75d8-7b4d-47e2-893f-a401f568394e" containerID="6c53aaafd678120e691690a282db068db212bd78bf4a76e938d9b79d293d9ab2" exitCode=2 Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.577521 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7fff75d8-7b4d-47e2-893f-a401f568394e","Type":"ContainerDied","Data":"f253eaa14920bda3599205efb1e162e90317348a041d45fefe549d348921dbbf"} Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.577881 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7fff75d8-7b4d-47e2-893f-a401f568394e","Type":"ContainerDied","Data":"6c53aaafd678120e691690a282db068db212bd78bf4a76e938d9b79d293d9ab2"} Dec 09 12:30:08 crc kubenswrapper[4970]: 
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.581828 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0b28cb7f-3918-4f2d-ba79-21503540a126","Type":"ContainerDied","Data":"24f18f67cfa6b0514d264bfd61d930272a48f8a3bf25558250f6ec0ecbfa5843"}
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.581892 4970 scope.go:117] "RemoveContainer" containerID="7f76c34720835c9cf6966f4890b4c1b12dbdf9fa01d95785d94d81788022bbbc"
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.582053 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.653393 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.654380 4970 scope.go:117] "RemoveContainer" containerID="4219a62cfad34d1ae3a86959f52cac5bb37112c707c4a6863f78678a997f452f"
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.674085 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.700374 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 09 12:30:08 crc kubenswrapper[4970]: E1209 12:30:08.701630 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59978ade-1525-4f29-908f-026970955862" containerName="collect-profiles"
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.701654 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="59978ade-1525-4f29-908f-026970955862" containerName="collect-profiles"
Dec 09 12:30:08 crc kubenswrapper[4970]: E1209 12:30:08.701789 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b28cb7f-3918-4f2d-ba79-21503540a126" containerName="glance-httpd"
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.701805 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b28cb7f-3918-4f2d-ba79-21503540a126" containerName="glance-httpd"
Dec 09 12:30:08 crc kubenswrapper[4970]: E1209 12:30:08.701876 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b28cb7f-3918-4f2d-ba79-21503540a126" containerName="glance-log"
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.701884 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b28cb7f-3918-4f2d-ba79-21503540a126" containerName="glance-log"
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.703200 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b28cb7f-3918-4f2d-ba79-21503540a126" containerName="glance-httpd"
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.703281 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="59978ade-1525-4f29-908f-026970955862" containerName="collect-profiles"
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.703299 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b28cb7f-3918-4f2d-ba79-21503540a126" containerName="glance-log"
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.705691 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.708024 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.708414 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.711322 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.764556 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/be307043-99ae-477a-8134-c8e971674ff3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"be307043-99ae-477a-8134-c8e971674ff3\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.765613 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"be307043-99ae-477a-8134-c8e971674ff3\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.765798 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be307043-99ae-477a-8134-c8e971674ff3-scripts\") pod \"glance-default-external-api-0\" (UID: \"be307043-99ae-477a-8134-c8e971674ff3\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.765935 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/be307043-99ae-477a-8134-c8e971674ff3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"be307043-99ae-477a-8134-c8e971674ff3\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.766041 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be307043-99ae-477a-8134-c8e971674ff3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"be307043-99ae-477a-8134-c8e971674ff3\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.766166 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be307043-99ae-477a-8134-c8e971674ff3-config-data\") pod \"glance-default-external-api-0\" (UID: \"be307043-99ae-477a-8134-c8e971674ff3\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.766277 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be307043-99ae-477a-8134-c8e971674ff3-logs\") pod \"glance-default-external-api-0\" (UID: \"be307043-99ae-477a-8134-c8e971674ff3\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.766593 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr9mf\" (UniqueName: \"kubernetes.io/projected/be307043-99ae-477a-8134-c8e971674ff3-kube-api-access-fr9mf\") pod \"glance-default-external-api-0\" (UID: \"be307043-99ae-477a-8134-c8e971674ff3\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.870912 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be307043-99ae-477a-8134-c8e971674ff3-logs\") pod \"glance-default-external-api-0\" (UID: \"be307043-99ae-477a-8134-c8e971674ff3\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.871489 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr9mf\" (UniqueName: \"kubernetes.io/projected/be307043-99ae-477a-8134-c8e971674ff3-kube-api-access-fr9mf\") pod \"glance-default-external-api-0\" (UID: \"be307043-99ae-477a-8134-c8e971674ff3\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.871768 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/be307043-99ae-477a-8134-c8e971674ff3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"be307043-99ae-477a-8134-c8e971674ff3\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.871587 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be307043-99ae-477a-8134-c8e971674ff3-logs\") pod \"glance-default-external-api-0\" (UID: \"be307043-99ae-477a-8134-c8e971674ff3\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.871900 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"be307043-99ae-477a-8134-c8e971674ff3\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.873148 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"be307043-99ae-477a-8134-c8e971674ff3\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0"
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.889623 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be307043-99ae-477a-8134-c8e971674ff3-scripts\") pod \"glance-default-external-api-0\" (UID: \"be307043-99ae-477a-8134-c8e971674ff3\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.889887 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/be307043-99ae-477a-8134-c8e971674ff3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"be307043-99ae-477a-8134-c8e971674ff3\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.890068 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be307043-99ae-477a-8134-c8e971674ff3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"be307043-99ae-477a-8134-c8e971674ff3\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.890278 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be307043-99ae-477a-8134-c8e971674ff3-config-data\") pod \"glance-default-external-api-0\" (UID: \"be307043-99ae-477a-8134-c8e971674ff3\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.891321 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/be307043-99ae-477a-8134-c8e971674ff3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"be307043-99ae-477a-8134-c8e971674ff3\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.894690 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/be307043-99ae-477a-8134-c8e971674ff3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"be307043-99ae-477a-8134-c8e971674ff3\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.895274 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be307043-99ae-477a-8134-c8e971674ff3-scripts\") pod \"glance-default-external-api-0\" (UID: \"be307043-99ae-477a-8134-c8e971674ff3\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.896715 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be307043-99ae-477a-8134-c8e971674ff3-config-data\") pod \"glance-default-external-api-0\" (UID: \"be307043-99ae-477a-8134-c8e971674ff3\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.900210 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be307043-99ae-477a-8134-c8e971674ff3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"be307043-99ae-477a-8134-c8e971674ff3\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.901672 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr9mf\" (UniqueName: \"kubernetes.io/projected/be307043-99ae-477a-8134-c8e971674ff3-kube-api-access-fr9mf\") pod \"glance-default-external-api-0\" (UID: \"be307043-99ae-477a-8134-c8e971674ff3\") " pod="openstack/glance-default-external-api-0"
Dec 09 12:30:08 crc kubenswrapper[4970]: I1209 12:30:08.928486 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"be307043-99ae-477a-8134-c8e971674ff3\") " pod="openstack/glance-default-external-api-0"
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 12:30:09 crc kubenswrapper[4970]: I1209 12:30:09.595128 4970 generic.go:334] "Generic (PLEG): container finished" podID="35f7dc61-35aa-42a4-adc2-e96f07905cd0" containerID="654f91d33fe33b1111abc302eac5953705bc2bf0cfdd87a9f0d7cf6a468c17e6" exitCode=0 Dec 09 12:30:09 crc kubenswrapper[4970]: I1209 12:30:09.595206 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"35f7dc61-35aa-42a4-adc2-e96f07905cd0","Type":"ContainerDied","Data":"654f91d33fe33b1111abc302eac5953705bc2bf0cfdd87a9f0d7cf6a468c17e6"} Dec 09 12:30:09 crc kubenswrapper[4970]: I1209 12:30:09.599759 4970 generic.go:334] "Generic (PLEG): container finished" podID="7fff75d8-7b4d-47e2-893f-a401f568394e" containerID="2138dfc8702313c04c6c48ac32535d4748eaae83ee8af6235b20cf90ad836978" exitCode=0 Dec 09 12:30:09 crc kubenswrapper[4970]: I1209 12:30:09.599799 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7fff75d8-7b4d-47e2-893f-a401f568394e","Type":"ContainerDied","Data":"2138dfc8702313c04c6c48ac32535d4748eaae83ee8af6235b20cf90ad836978"} Dec 09 12:30:09 crc kubenswrapper[4970]: I1209 12:30:09.834867 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b28cb7f-3918-4f2d-ba79-21503540a126" path="/var/lib/kubelet/pods/0b28cb7f-3918-4f2d-ba79-21503540a126/volumes" Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.141219 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.249817 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35f7dc61-35aa-42a4-adc2-e96f07905cd0-httpd-run\") pod \"35f7dc61-35aa-42a4-adc2-e96f07905cd0\" (UID: \"35f7dc61-35aa-42a4-adc2-e96f07905cd0\") " Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.249894 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35f7dc61-35aa-42a4-adc2-e96f07905cd0-combined-ca-bundle\") pod \"35f7dc61-35aa-42a4-adc2-e96f07905cd0\" (UID: \"35f7dc61-35aa-42a4-adc2-e96f07905cd0\") " Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.249941 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35f7dc61-35aa-42a4-adc2-e96f07905cd0-config-data\") pod \"35f7dc61-35aa-42a4-adc2-e96f07905cd0\" (UID: \"35f7dc61-35aa-42a4-adc2-e96f07905cd0\") " Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.250015 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35f7dc61-35aa-42a4-adc2-e96f07905cd0-scripts\") pod \"35f7dc61-35aa-42a4-adc2-e96f07905cd0\" (UID: \"35f7dc61-35aa-42a4-adc2-e96f07905cd0\") " Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.250080 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35f7dc61-35aa-42a4-adc2-e96f07905cd0-internal-tls-certs\") pod \"35f7dc61-35aa-42a4-adc2-e96f07905cd0\" (UID: \"35f7dc61-35aa-42a4-adc2-e96f07905cd0\") " Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.250112 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"35f7dc61-35aa-42a4-adc2-e96f07905cd0\" (UID: \"35f7dc61-35aa-42a4-adc2-e96f07905cd0\") " Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.250169 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35f7dc61-35aa-42a4-adc2-e96f07905cd0-logs\") pod \"35f7dc61-35aa-42a4-adc2-e96f07905cd0\" (UID: \"35f7dc61-35aa-42a4-adc2-e96f07905cd0\") " Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.250198 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5n8pj\" (UniqueName: \"kubernetes.io/projected/35f7dc61-35aa-42a4-adc2-e96f07905cd0-kube-api-access-5n8pj\") pod \"35f7dc61-35aa-42a4-adc2-e96f07905cd0\" (UID: \"35f7dc61-35aa-42a4-adc2-e96f07905cd0\") " Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.250705 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35f7dc61-35aa-42a4-adc2-e96f07905cd0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "35f7dc61-35aa-42a4-adc2-e96f07905cd0" (UID: "35f7dc61-35aa-42a4-adc2-e96f07905cd0"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.251484 4970 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35f7dc61-35aa-42a4-adc2-e96f07905cd0-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.251718 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35f7dc61-35aa-42a4-adc2-e96f07905cd0-logs" (OuterVolumeSpecName: "logs") pod "35f7dc61-35aa-42a4-adc2-e96f07905cd0" (UID: "35f7dc61-35aa-42a4-adc2-e96f07905cd0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.275787 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35f7dc61-35aa-42a4-adc2-e96f07905cd0-scripts" (OuterVolumeSpecName: "scripts") pod "35f7dc61-35aa-42a4-adc2-e96f07905cd0" (UID: "35f7dc61-35aa-42a4-adc2-e96f07905cd0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.284394 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "35f7dc61-35aa-42a4-adc2-e96f07905cd0" (UID: "35f7dc61-35aa-42a4-adc2-e96f07905cd0"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.331768 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35f7dc61-35aa-42a4-adc2-e96f07905cd0-kube-api-access-5n8pj" (OuterVolumeSpecName: "kube-api-access-5n8pj") pod "35f7dc61-35aa-42a4-adc2-e96f07905cd0" (UID: "35f7dc61-35aa-42a4-adc2-e96f07905cd0"). InnerVolumeSpecName "kube-api-access-5n8pj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.339060 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.355570 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35f7dc61-35aa-42a4-adc2-e96f07905cd0-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.355635 4970 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.355647 4970 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35f7dc61-35aa-42a4-adc2-e96f07905cd0-logs\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.355656 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5n8pj\" (UniqueName: \"kubernetes.io/projected/35f7dc61-35aa-42a4-adc2-e96f07905cd0-kube-api-access-5n8pj\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.378910 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35f7dc61-35aa-42a4-adc2-e96f07905cd0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35f7dc61-35aa-42a4-adc2-e96f07905cd0" (UID: "35f7dc61-35aa-42a4-adc2-e96f07905cd0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.392493 4970 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.394884 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35f7dc61-35aa-42a4-adc2-e96f07905cd0-config-data" (OuterVolumeSpecName: "config-data") pod "35f7dc61-35aa-42a4-adc2-e96f07905cd0" (UID: "35f7dc61-35aa-42a4-adc2-e96f07905cd0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.431158 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35f7dc61-35aa-42a4-adc2-e96f07905cd0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "35f7dc61-35aa-42a4-adc2-e96f07905cd0" (UID: "35f7dc61-35aa-42a4-adc2-e96f07905cd0"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.459113 4970 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/35f7dc61-35aa-42a4-adc2-e96f07905cd0-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.459157 4970 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.459170 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35f7dc61-35aa-42a4-adc2-e96f07905cd0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.459182 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35f7dc61-35aa-42a4-adc2-e96f07905cd0-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.673863 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"35f7dc61-35aa-42a4-adc2-e96f07905cd0","Type":"ContainerDied","Data":"9bbe05a3e2c97f157eb0f70367e20eed30cd807d622ddddbf99bf93c99503e72"} Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.674951 4970 scope.go:117] "RemoveContainer" containerID="654f91d33fe33b1111abc302eac5953705bc2bf0cfdd87a9f0d7cf6a468c17e6" Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.674036 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.680950 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"be307043-99ae-477a-8134-c8e971674ff3","Type":"ContainerStarted","Data":"92c172aae6433599f8de2d90bbe3982c6ebcd76586fb72df104832050289ae22"} Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.786076 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.806866 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.836303 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 12:30:10 crc kubenswrapper[4970]: E1209 12:30:10.836821 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f7dc61-35aa-42a4-adc2-e96f07905cd0" containerName="glance-httpd" Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.836833 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f7dc61-35aa-42a4-adc2-e96f07905cd0" containerName="glance-httpd" Dec 09 12:30:10 crc kubenswrapper[4970]: E1209 12:30:10.836846 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f7dc61-35aa-42a4-adc2-e96f07905cd0" containerName="glance-log" Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.836852 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f7dc61-35aa-42a4-adc2-e96f07905cd0" containerName="glance-log" Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.837082 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="35f7dc61-35aa-42a4-adc2-e96f07905cd0" 
containerName="glance-log" Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.837113 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="35f7dc61-35aa-42a4-adc2-e96f07905cd0" containerName="glance-httpd" Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.838228 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.841437 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.841641 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.855334 4970 scope.go:117] "RemoveContainer" containerID="371a1155897e81e46a278b04b0948efbb20ddde1b03332c7f16b608269f9b260" Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.857008 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.971351 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c97aa48e-1a86-4909-8ec0-62d5599c18ed-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c97aa48e-1a86-4909-8ec0-62d5599c18ed\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.971495 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c97aa48e-1a86-4909-8ec0-62d5599c18ed-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c97aa48e-1a86-4909-8ec0-62d5599c18ed\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.971547 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c97aa48e-1a86-4909-8ec0-62d5599c18ed-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c97aa48e-1a86-4909-8ec0-62d5599c18ed\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.971569 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c97aa48e-1a86-4909-8ec0-62d5599c18ed-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c97aa48e-1a86-4909-8ec0-62d5599c18ed\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.971599 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c97aa48e-1a86-4909-8ec0-62d5599c18ed-logs\") pod \"glance-default-internal-api-0\" (UID: \"c97aa48e-1a86-4909-8ec0-62d5599c18ed\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.971674 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvjs5\" (UniqueName: \"kubernetes.io/projected/c97aa48e-1a86-4909-8ec0-62d5599c18ed-kube-api-access-jvjs5\") pod \"glance-default-internal-api-0\" (UID: \"c97aa48e-1a86-4909-8ec0-62d5599c18ed\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:30:10 
crc kubenswrapper[4970]: I1209 12:30:10.971735 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"c97aa48e-1a86-4909-8ec0-62d5599c18ed\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:30:10 crc kubenswrapper[4970]: I1209 12:30:10.971775 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c97aa48e-1a86-4909-8ec0-62d5599c18ed-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c97aa48e-1a86-4909-8ec0-62d5599c18ed\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.073977 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c97aa48e-1a86-4909-8ec0-62d5599c18ed-logs\") pod \"glance-default-internal-api-0\" (UID: \"c97aa48e-1a86-4909-8ec0-62d5599c18ed\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.074076 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvjs5\" (UniqueName: \"kubernetes.io/projected/c97aa48e-1a86-4909-8ec0-62d5599c18ed-kube-api-access-jvjs5\") pod \"glance-default-internal-api-0\" (UID: \"c97aa48e-1a86-4909-8ec0-62d5599c18ed\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.074128 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"c97aa48e-1a86-4909-8ec0-62d5599c18ed\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.074160 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c97aa48e-1a86-4909-8ec0-62d5599c18ed-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c97aa48e-1a86-4909-8ec0-62d5599c18ed\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.074274 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c97aa48e-1a86-4909-8ec0-62d5599c18ed-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c97aa48e-1a86-4909-8ec0-62d5599c18ed\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.074424 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c97aa48e-1a86-4909-8ec0-62d5599c18ed-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c97aa48e-1a86-4909-8ec0-62d5599c18ed\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.074504 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c97aa48e-1a86-4909-8ec0-62d5599c18ed-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c97aa48e-1a86-4909-8ec0-62d5599c18ed\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.074542 4970 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c97aa48e-1a86-4909-8ec0-62d5599c18ed-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c97aa48e-1a86-4909-8ec0-62d5599c18ed\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.085881 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"c97aa48e-1a86-4909-8ec0-62d5599c18ed\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-internal-api-0" Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.086185 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c97aa48e-1a86-4909-8ec0-62d5599c18ed-logs\") pod \"glance-default-internal-api-0\" (UID: \"c97aa48e-1a86-4909-8ec0-62d5599c18ed\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.086930 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c97aa48e-1a86-4909-8ec0-62d5599c18ed-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c97aa48e-1a86-4909-8ec0-62d5599c18ed\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.095163 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c97aa48e-1a86-4909-8ec0-62d5599c18ed-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c97aa48e-1a86-4909-8ec0-62d5599c18ed\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.114309 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c97aa48e-1a86-4909-8ec0-62d5599c18ed-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c97aa48e-1a86-4909-8ec0-62d5599c18ed\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.115144 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c97aa48e-1a86-4909-8ec0-62d5599c18ed-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c97aa48e-1a86-4909-8ec0-62d5599c18ed\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.123119 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvjs5\" (UniqueName: \"kubernetes.io/projected/c97aa48e-1a86-4909-8ec0-62d5599c18ed-kube-api-access-jvjs5\") pod \"glance-default-internal-api-0\" (UID: \"c97aa48e-1a86-4909-8ec0-62d5599c18ed\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.141280 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c97aa48e-1a86-4909-8ec0-62d5599c18ed-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c97aa48e-1a86-4909-8ec0-62d5599c18ed\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.245947 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-internal-api-0\" (UID: \"c97aa48e-1a86-4909-8ec0-62d5599c18ed\") " pod="openstack/glance-default-internal-api-0" Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.386540 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.485107 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j977\" (UniqueName: \"kubernetes.io/projected/7fff75d8-7b4d-47e2-893f-a401f568394e-kube-api-access-5j977\") pod \"7fff75d8-7b4d-47e2-893f-a401f568394e\" (UID: \"7fff75d8-7b4d-47e2-893f-a401f568394e\") " Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.485162 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fff75d8-7b4d-47e2-893f-a401f568394e-config-data\") pod \"7fff75d8-7b4d-47e2-893f-a401f568394e\" (UID: \"7fff75d8-7b4d-47e2-893f-a401f568394e\") " Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.485189 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fff75d8-7b4d-47e2-893f-a401f568394e-scripts\") pod \"7fff75d8-7b4d-47e2-893f-a401f568394e\" (UID: \"7fff75d8-7b4d-47e2-893f-a401f568394e\") " Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.485239 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fff75d8-7b4d-47e2-893f-a401f568394e-run-httpd\") pod \"7fff75d8-7b4d-47e2-893f-a401f568394e\" (UID: \"7fff75d8-7b4d-47e2-893f-a401f568394e\") " Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.485301 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7fff75d8-7b4d-47e2-893f-a401f568394e-sg-core-conf-yaml\") pod \"7fff75d8-7b4d-47e2-893f-a401f568394e\" (UID: \"7fff75d8-7b4d-47e2-893f-a401f568394e\") " Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.485441 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fff75d8-7b4d-47e2-893f-a401f568394e-log-httpd\") pod \"7fff75d8-7b4d-47e2-893f-a401f568394e\" (UID: \"7fff75d8-7b4d-47e2-893f-a401f568394e\") " Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.485510 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fff75d8-7b4d-47e2-893f-a401f568394e-combined-ca-bundle\") pod \"7fff75d8-7b4d-47e2-893f-a401f568394e\" (UID: \"7fff75d8-7b4d-47e2-893f-a401f568394e\") " Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.486782 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.490267 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fff75d8-7b4d-47e2-893f-a401f568394e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7fff75d8-7b4d-47e2-893f-a401f568394e" (UID: "7fff75d8-7b4d-47e2-893f-a401f568394e"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.493452 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fff75d8-7b4d-47e2-893f-a401f568394e-scripts" (OuterVolumeSpecName: "scripts") pod "7fff75d8-7b4d-47e2-893f-a401f568394e" (UID: "7fff75d8-7b4d-47e2-893f-a401f568394e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.493678 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fff75d8-7b4d-47e2-893f-a401f568394e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7fff75d8-7b4d-47e2-893f-a401f568394e" (UID: "7fff75d8-7b4d-47e2-893f-a401f568394e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.498468 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fff75d8-7b4d-47e2-893f-a401f568394e-kube-api-access-5j977" (OuterVolumeSpecName: "kube-api-access-5j977") pod "7fff75d8-7b4d-47e2-893f-a401f568394e" (UID: "7fff75d8-7b4d-47e2-893f-a401f568394e"). InnerVolumeSpecName "kube-api-access-5j977". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.612822 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j977\" (UniqueName: \"kubernetes.io/projected/7fff75d8-7b4d-47e2-893f-a401f568394e-kube-api-access-5j977\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.612857 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fff75d8-7b4d-47e2-893f-a401f568394e-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.612874 4970 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fff75d8-7b4d-47e2-893f-a401f568394e-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.612883 4970 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fff75d8-7b4d-47e2-893f-a401f568394e-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.723462 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fff75d8-7b4d-47e2-893f-a401f568394e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7fff75d8-7b4d-47e2-893f-a401f568394e" (UID: "7fff75d8-7b4d-47e2-893f-a401f568394e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.767596 4970 generic.go:334] "Generic (PLEG): container finished" podID="7fff75d8-7b4d-47e2-893f-a401f568394e" containerID="7a3e5ce444149a467912ef666a2e05f3244e441af3f7870ed56f9b3fbb3811b9" exitCode=0 Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.767812 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.767827 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7fff75d8-7b4d-47e2-893f-a401f568394e","Type":"ContainerDied","Data":"7a3e5ce444149a467912ef666a2e05f3244e441af3f7870ed56f9b3fbb3811b9"} Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.768836 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7fff75d8-7b4d-47e2-893f-a401f568394e","Type":"ContainerDied","Data":"a1932d5962877aac65333e021fb7f492794c778ede4dacaefc477cef5b271fd4"} Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.768939 4970 scope.go:117] "RemoveContainer" containerID="f253eaa14920bda3599205efb1e162e90317348a041d45fefe549d348921dbbf" Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.776426 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"be307043-99ae-477a-8134-c8e971674ff3","Type":"ContainerStarted","Data":"3deda0bf5a2f5b67b097f480a85d79b2fce7690efe4911f167d03bbefa9a495e"} Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.825466 4970 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7fff75d8-7b4d-47e2-893f-a401f568394e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.841205 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fff75d8-7b4d-47e2-893f-a401f568394e-config-data" (OuterVolumeSpecName: "config-data") pod "7fff75d8-7b4d-47e2-893f-a401f568394e" (UID: "7fff75d8-7b4d-47e2-893f-a401f568394e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.854946 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35f7dc61-35aa-42a4-adc2-e96f07905cd0" path="/var/lib/kubelet/pods/35f7dc61-35aa-42a4-adc2-e96f07905cd0/volumes" Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.862948 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fff75d8-7b4d-47e2-893f-a401f568394e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7fff75d8-7b4d-47e2-893f-a401f568394e" (UID: "7fff75d8-7b4d-47e2-893f-a401f568394e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.932360 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fff75d8-7b4d-47e2-893f-a401f568394e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:11 crc kubenswrapper[4970]: I1209 12:30:11.932415 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fff75d8-7b4d-47e2-893f-a401f568394e-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.080030 4970 scope.go:117] "RemoveContainer" containerID="6c53aaafd678120e691690a282db068db212bd78bf4a76e938d9b79d293d9ab2" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.156283 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.173227 4970 scope.go:117] "RemoveContainer" containerID="7a3e5ce444149a467912ef666a2e05f3244e441af3f7870ed56f9b3fbb3811b9" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.194366 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.210191 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:30:12 crc kubenswrapper[4970]: E1209 12:30:12.210719 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fff75d8-7b4d-47e2-893f-a401f568394e" containerName="proxy-httpd" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.210745 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fff75d8-7b4d-47e2-893f-a401f568394e" containerName="proxy-httpd" Dec 09 12:30:12 crc kubenswrapper[4970]: E1209 12:30:12.210765 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fff75d8-7b4d-47e2-893f-a401f568394e" containerName="ceilometer-central-agent" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.210772 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fff75d8-7b4d-47e2-893f-a401f568394e" containerName="ceilometer-central-agent" Dec 09 12:30:12 crc kubenswrapper[4970]: E1209 12:30:12.210791 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fff75d8-7b4d-47e2-893f-a401f568394e" containerName="sg-core" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.210796 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fff75d8-7b4d-47e2-893f-a401f568394e" containerName="sg-core" Dec 09 12:30:12 crc kubenswrapper[4970]: E1209 12:30:12.210813 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fff75d8-7b4d-47e2-893f-a401f568394e" containerName="ceilometer-notification-agent" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.210819 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fff75d8-7b4d-47e2-893f-a401f568394e" containerName="ceilometer-notification-agent" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.211013 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fff75d8-7b4d-47e2-893f-a401f568394e" containerName="ceilometer-central-agent" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.211034 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fff75d8-7b4d-47e2-893f-a401f568394e" containerName="proxy-httpd" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.211044 4970 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7fff75d8-7b4d-47e2-893f-a401f568394e" containerName="ceilometer-notification-agent" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.211072 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fff75d8-7b4d-47e2-893f-a401f568394e" containerName="sg-core" Dec 09 12:30:12 crc kubenswrapper[4970]: E1209 12:30:12.213779 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fff75d8_7b4d_47e2_893f_a401f568394e.slice\": RecentStats: unable to find data in memory cache]" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.213854 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.218401 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.222026 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.234460 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.238918 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdgg8\" (UniqueName: \"kubernetes.io/projected/67a1a774-21ff-40c6-abe9-07a54c4b3380-kube-api-access-jdgg8\") pod \"ceilometer-0\" (UID: \"67a1a774-21ff-40c6-abe9-07a54c4b3380\") " pod="openstack/ceilometer-0" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.238959 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67a1a774-21ff-40c6-abe9-07a54c4b3380-scripts\") pod \"ceilometer-0\" (UID: \"67a1a774-21ff-40c6-abe9-07a54c4b3380\") " pod="openstack/ceilometer-0" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.238991 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/67a1a774-21ff-40c6-abe9-07a54c4b3380-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"67a1a774-21ff-40c6-abe9-07a54c4b3380\") " pod="openstack/ceilometer-0" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.241312 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67a1a774-21ff-40c6-abe9-07a54c4b3380-config-data\") pod \"ceilometer-0\" (UID: \"67a1a774-21ff-40c6-abe9-07a54c4b3380\") " pod="openstack/ceilometer-0" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.241541 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67a1a774-21ff-40c6-abe9-07a54c4b3380-run-httpd\") pod \"ceilometer-0\" (UID: \"67a1a774-21ff-40c6-abe9-07a54c4b3380\") " pod="openstack/ceilometer-0" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.241795 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67a1a774-21ff-40c6-abe9-07a54c4b3380-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"67a1a774-21ff-40c6-abe9-07a54c4b3380\") " pod="openstack/ceilometer-0" Dec 09 12:30:12 crc kubenswrapper[4970]: 
I1209 12:30:12.241838 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67a1a774-21ff-40c6-abe9-07a54c4b3380-log-httpd\") pod \"ceilometer-0\" (UID: \"67a1a774-21ff-40c6-abe9-07a54c4b3380\") " pod="openstack/ceilometer-0" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.266854 4970 scope.go:117] "RemoveContainer" containerID="2138dfc8702313c04c6c48ac32535d4748eaae83ee8af6235b20cf90ad836978" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.303888 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.333317 4970 scope.go:117] "RemoveContainer" containerID="f253eaa14920bda3599205efb1e162e90317348a041d45fefe549d348921dbbf" Dec 09 12:30:12 crc kubenswrapper[4970]: W1209 12:30:12.333552 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc97aa48e_1a86_4909_8ec0_62d5599c18ed.slice/crio-0b034697f983c38b0dd985892d185814e227c5f43a3a244e7a0f6bd3d482cb33 WatchSource:0}: Error finding container 0b034697f983c38b0dd985892d185814e227c5f43a3a244e7a0f6bd3d482cb33: Status 404 returned error can't find the container with id 0b034697f983c38b0dd985892d185814e227c5f43a3a244e7a0f6bd3d482cb33 Dec 09 12:30:12 crc kubenswrapper[4970]: E1209 12:30:12.334048 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f253eaa14920bda3599205efb1e162e90317348a041d45fefe549d348921dbbf\": container with ID starting with f253eaa14920bda3599205efb1e162e90317348a041d45fefe549d348921dbbf not found: ID does not exist" containerID="f253eaa14920bda3599205efb1e162e90317348a041d45fefe549d348921dbbf" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.334092 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f253eaa14920bda3599205efb1e162e90317348a041d45fefe549d348921dbbf"} err="failed to get container status \"f253eaa14920bda3599205efb1e162e90317348a041d45fefe549d348921dbbf\": rpc error: code = NotFound desc = could not find container \"f253eaa14920bda3599205efb1e162e90317348a041d45fefe549d348921dbbf\": container with ID starting with f253eaa14920bda3599205efb1e162e90317348a041d45fefe549d348921dbbf not found: ID does not exist" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.334116 4970 scope.go:117] "RemoveContainer" containerID="6c53aaafd678120e691690a282db068db212bd78bf4a76e938d9b79d293d9ab2" Dec 09 12:30:12 crc kubenswrapper[4970]: E1209 12:30:12.338146 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c53aaafd678120e691690a282db068db212bd78bf4a76e938d9b79d293d9ab2\": container with ID starting with 6c53aaafd678120e691690a282db068db212bd78bf4a76e938d9b79d293d9ab2 not found: ID does not exist" containerID="6c53aaafd678120e691690a282db068db212bd78bf4a76e938d9b79d293d9ab2" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.338183 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c53aaafd678120e691690a282db068db212bd78bf4a76e938d9b79d293d9ab2"} err="failed to get container status \"6c53aaafd678120e691690a282db068db212bd78bf4a76e938d9b79d293d9ab2\": rpc error: code = NotFound desc = could not find container 
\"6c53aaafd678120e691690a282db068db212bd78bf4a76e938d9b79d293d9ab2\": container with ID starting with 6c53aaafd678120e691690a282db068db212bd78bf4a76e938d9b79d293d9ab2 not found: ID does not exist" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.338208 4970 scope.go:117] "RemoveContainer" containerID="7a3e5ce444149a467912ef666a2e05f3244e441af3f7870ed56f9b3fbb3811b9" Dec 09 12:30:12 crc kubenswrapper[4970]: E1209 12:30:12.341729 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a3e5ce444149a467912ef666a2e05f3244e441af3f7870ed56f9b3fbb3811b9\": container with ID starting with 7a3e5ce444149a467912ef666a2e05f3244e441af3f7870ed56f9b3fbb3811b9 not found: ID does not exist" containerID="7a3e5ce444149a467912ef666a2e05f3244e441af3f7870ed56f9b3fbb3811b9" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.341758 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a3e5ce444149a467912ef666a2e05f3244e441af3f7870ed56f9b3fbb3811b9"} err="failed to get container status \"7a3e5ce444149a467912ef666a2e05f3244e441af3f7870ed56f9b3fbb3811b9\": rpc error: code = NotFound desc = could not find container \"7a3e5ce444149a467912ef666a2e05f3244e441af3f7870ed56f9b3fbb3811b9\": container with ID starting with 7a3e5ce444149a467912ef666a2e05f3244e441af3f7870ed56f9b3fbb3811b9 not found: ID does not exist" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.341778 4970 scope.go:117] "RemoveContainer" containerID="2138dfc8702313c04c6c48ac32535d4748eaae83ee8af6235b20cf90ad836978" Dec 09 12:30:12 crc kubenswrapper[4970]: E1209 12:30:12.342108 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2138dfc8702313c04c6c48ac32535d4748eaae83ee8af6235b20cf90ad836978\": container with ID starting with 2138dfc8702313c04c6c48ac32535d4748eaae83ee8af6235b20cf90ad836978 not found: ID does not exist" containerID="2138dfc8702313c04c6c48ac32535d4748eaae83ee8af6235b20cf90ad836978" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.342131 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2138dfc8702313c04c6c48ac32535d4748eaae83ee8af6235b20cf90ad836978"} err="failed to get container status \"2138dfc8702313c04c6c48ac32535d4748eaae83ee8af6235b20cf90ad836978\": rpc error: code = NotFound desc = could not find container \"2138dfc8702313c04c6c48ac32535d4748eaae83ee8af6235b20cf90ad836978\": container with ID starting with 2138dfc8702313c04c6c48ac32535d4748eaae83ee8af6235b20cf90ad836978 not found: ID does not exist" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.346548 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67a1a774-21ff-40c6-abe9-07a54c4b3380-config-data\") pod \"ceilometer-0\" (UID: \"67a1a774-21ff-40c6-abe9-07a54c4b3380\") " pod="openstack/ceilometer-0" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.346770 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67a1a774-21ff-40c6-abe9-07a54c4b3380-run-httpd\") pod \"ceilometer-0\" (UID: \"67a1a774-21ff-40c6-abe9-07a54c4b3380\") " pod="openstack/ceilometer-0" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.346859 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/67a1a774-21ff-40c6-abe9-07a54c4b3380-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"67a1a774-21ff-40c6-abe9-07a54c4b3380\") " pod="openstack/ceilometer-0" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.346912 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67a1a774-21ff-40c6-abe9-07a54c4b3380-log-httpd\") pod \"ceilometer-0\" (UID: \"67a1a774-21ff-40c6-abe9-07a54c4b3380\") " pod="openstack/ceilometer-0" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.346988 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdgg8\" (UniqueName: \"kubernetes.io/projected/67a1a774-21ff-40c6-abe9-07a54c4b3380-kube-api-access-jdgg8\") pod \"ceilometer-0\" (UID: \"67a1a774-21ff-40c6-abe9-07a54c4b3380\") " pod="openstack/ceilometer-0" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.347014 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67a1a774-21ff-40c6-abe9-07a54c4b3380-scripts\") pod \"ceilometer-0\" (UID: \"67a1a774-21ff-40c6-abe9-07a54c4b3380\") " pod="openstack/ceilometer-0" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.347039 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/67a1a774-21ff-40c6-abe9-07a54c4b3380-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"67a1a774-21ff-40c6-abe9-07a54c4b3380\") " pod="openstack/ceilometer-0" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.354485 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67a1a774-21ff-40c6-abe9-07a54c4b3380-log-httpd\") pod \"ceilometer-0\" (UID: \"67a1a774-21ff-40c6-abe9-07a54c4b3380\") " pod="openstack/ceilometer-0" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.357397 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67a1a774-21ff-40c6-abe9-07a54c4b3380-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"67a1a774-21ff-40c6-abe9-07a54c4b3380\") " pod="openstack/ceilometer-0" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.357729 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67a1a774-21ff-40c6-abe9-07a54c4b3380-run-httpd\") pod \"ceilometer-0\" (UID: \"67a1a774-21ff-40c6-abe9-07a54c4b3380\") " pod="openstack/ceilometer-0" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.359533 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67a1a774-21ff-40c6-abe9-07a54c4b3380-scripts\") pod \"ceilometer-0\" (UID: \"67a1a774-21ff-40c6-abe9-07a54c4b3380\") " pod="openstack/ceilometer-0" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.368469 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/67a1a774-21ff-40c6-abe9-07a54c4b3380-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"67a1a774-21ff-40c6-abe9-07a54c4b3380\") " pod="openstack/ceilometer-0" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.373415 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/67a1a774-21ff-40c6-abe9-07a54c4b3380-config-data\") pod \"ceilometer-0\" (UID: \"67a1a774-21ff-40c6-abe9-07a54c4b3380\") " pod="openstack/ceilometer-0" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.383736 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdgg8\" (UniqueName: \"kubernetes.io/projected/67a1a774-21ff-40c6-abe9-07a54c4b3380-kube-api-access-jdgg8\") pod \"ceilometer-0\" (UID: \"67a1a774-21ff-40c6-abe9-07a54c4b3380\") " pod="openstack/ceilometer-0" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.584711 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:30:12 crc kubenswrapper[4970]: E1209 12:30:12.695878 4970 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a715dc7aa464f31deadee24e154cd31a9cd1a9d73088b71939fc1e477617749e" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Dec 09 12:30:12 crc kubenswrapper[4970]: E1209 12:30:12.705950 4970 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a715dc7aa464f31deadee24e154cd31a9cd1a9d73088b71939fc1e477617749e" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Dec 09 12:30:12 crc kubenswrapper[4970]: E1209 12:30:12.716308 4970 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a715dc7aa464f31deadee24e154cd31a9cd1a9d73088b71939fc1e477617749e" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Dec 09 12:30:12 crc kubenswrapper[4970]: E1209 12:30:12.716371 4970 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-8679c95d76-9dntl" podUID="dec002e4-c154-4c9b-8e5c-391bd2c2ce8a" containerName="heat-engine" Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.815740 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"be307043-99ae-477a-8134-c8e971674ff3","Type":"ContainerStarted","Data":"448aff3bf20d674d3d1a55f8b2bd29261c6a88e9eacb876add4a8d86e9ad7cf7"} Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.824503 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c97aa48e-1a86-4909-8ec0-62d5599c18ed","Type":"ContainerStarted","Data":"0b034697f983c38b0dd985892d185814e227c5f43a3a244e7a0f6bd3d482cb33"} Dec 09 12:30:12 crc kubenswrapper[4970]: I1209 12:30:12.867167 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.867144262 podStartE2EDuration="4.867144262s" podCreationTimestamp="2025-12-09 12:30:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:30:12.854765234 +0000 UTC m=+1425.415246285" watchObservedRunningTime="2025-12-09 12:30:12.867144262 +0000 UTC m=+1425.427625313" Dec 09 12:30:13 crc kubenswrapper[4970]: I1209 12:30:13.250485 4970 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:30:13 crc kubenswrapper[4970]: W1209 12:30:13.252852 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67a1a774_21ff_40c6_abe9_07a54c4b3380.slice/crio-512d41e04e2768697b9127085e9ae06759b28e7334b8e8fb11c7efed2c21a4e4 WatchSource:0}: Error finding container 512d41e04e2768697b9127085e9ae06759b28e7334b8e8fb11c7efed2c21a4e4: Status 404 returned error can't find the container with id 512d41e04e2768697b9127085e9ae06759b28e7334b8e8fb11c7efed2c21a4e4 Dec 09 12:30:13 crc kubenswrapper[4970]: I1209 12:30:13.429907 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-8679c95d76-9dntl" Dec 09 12:30:13 crc kubenswrapper[4970]: I1209 12:30:13.492906 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2wfk\" (UniqueName: \"kubernetes.io/projected/dec002e4-c154-4c9b-8e5c-391bd2c2ce8a-kube-api-access-t2wfk\") pod \"dec002e4-c154-4c9b-8e5c-391bd2c2ce8a\" (UID: \"dec002e4-c154-4c9b-8e5c-391bd2c2ce8a\") " Dec 09 12:30:13 crc kubenswrapper[4970]: I1209 12:30:13.492980 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dec002e4-c154-4c9b-8e5c-391bd2c2ce8a-config-data\") pod \"dec002e4-c154-4c9b-8e5c-391bd2c2ce8a\" (UID: \"dec002e4-c154-4c9b-8e5c-391bd2c2ce8a\") " Dec 09 12:30:13 crc kubenswrapper[4970]: I1209 12:30:13.493082 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dec002e4-c154-4c9b-8e5c-391bd2c2ce8a-config-data-custom\") pod \"dec002e4-c154-4c9b-8e5c-391bd2c2ce8a\" (UID: \"dec002e4-c154-4c9b-8e5c-391bd2c2ce8a\") " Dec 09 12:30:13 crc kubenswrapper[4970]: I1209 12:30:13.493148 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dec002e4-c154-4c9b-8e5c-391bd2c2ce8a-combined-ca-bundle\") pod \"dec002e4-c154-4c9b-8e5c-391bd2c2ce8a\" (UID: \"dec002e4-c154-4c9b-8e5c-391bd2c2ce8a\") " Dec 09 12:30:13 crc kubenswrapper[4970]: I1209 12:30:13.499623 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dec002e4-c154-4c9b-8e5c-391bd2c2ce8a-kube-api-access-t2wfk" (OuterVolumeSpecName: "kube-api-access-t2wfk") pod "dec002e4-c154-4c9b-8e5c-391bd2c2ce8a" (UID: "dec002e4-c154-4c9b-8e5c-391bd2c2ce8a"). InnerVolumeSpecName "kube-api-access-t2wfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:30:13 crc kubenswrapper[4970]: I1209 12:30:13.505351 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dec002e4-c154-4c9b-8e5c-391bd2c2ce8a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "dec002e4-c154-4c9b-8e5c-391bd2c2ce8a" (UID: "dec002e4-c154-4c9b-8e5c-391bd2c2ce8a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:30:13 crc kubenswrapper[4970]: I1209 12:30:13.554613 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dec002e4-c154-4c9b-8e5c-391bd2c2ce8a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dec002e4-c154-4c9b-8e5c-391bd2c2ce8a" (UID: "dec002e4-c154-4c9b-8e5c-391bd2c2ce8a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:30:13 crc kubenswrapper[4970]: I1209 12:30:13.598671 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2wfk\" (UniqueName: \"kubernetes.io/projected/dec002e4-c154-4c9b-8e5c-391bd2c2ce8a-kube-api-access-t2wfk\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:13 crc kubenswrapper[4970]: I1209 12:30:13.598713 4970 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dec002e4-c154-4c9b-8e5c-391bd2c2ce8a-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:13 crc kubenswrapper[4970]: I1209 12:30:13.598728 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dec002e4-c154-4c9b-8e5c-391bd2c2ce8a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:13 crc kubenswrapper[4970]: I1209 12:30:13.602539 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dec002e4-c154-4c9b-8e5c-391bd2c2ce8a-config-data" (OuterVolumeSpecName: "config-data") pod "dec002e4-c154-4c9b-8e5c-391bd2c2ce8a" (UID: "dec002e4-c154-4c9b-8e5c-391bd2c2ce8a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:30:13 crc kubenswrapper[4970]: I1209 12:30:13.709449 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dec002e4-c154-4c9b-8e5c-391bd2c2ce8a-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:13 crc kubenswrapper[4970]: I1209 12:30:13.830975 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fff75d8-7b4d-47e2-893f-a401f568394e" path="/var/lib/kubelet/pods/7fff75d8-7b4d-47e2-893f-a401f568394e/volumes" Dec 09 12:30:13 crc kubenswrapper[4970]: I1209 12:30:13.874585 4970 generic.go:334] "Generic (PLEG): container finished" podID="dec002e4-c154-4c9b-8e5c-391bd2c2ce8a" containerID="a715dc7aa464f31deadee24e154cd31a9cd1a9d73088b71939fc1e477617749e" exitCode=0 Dec 09 12:30:13 crc kubenswrapper[4970]: I1209 12:30:13.874675 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-8679c95d76-9dntl" event={"ID":"dec002e4-c154-4c9b-8e5c-391bd2c2ce8a","Type":"ContainerDied","Data":"a715dc7aa464f31deadee24e154cd31a9cd1a9d73088b71939fc1e477617749e"} Dec 09 12:30:13 crc kubenswrapper[4970]: I1209 12:30:13.874716 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-8679c95d76-9dntl" event={"ID":"dec002e4-c154-4c9b-8e5c-391bd2c2ce8a","Type":"ContainerDied","Data":"7b67f42603dde57367769224eb4aca8194f0caa1dbac63a2d27b6b62851d199c"} Dec 09 12:30:13 crc kubenswrapper[4970]: I1209 12:30:13.874743 4970 scope.go:117] "RemoveContainer" containerID="a715dc7aa464f31deadee24e154cd31a9cd1a9d73088b71939fc1e477617749e" Dec 09 12:30:13 crc kubenswrapper[4970]: I1209 12:30:13.874935 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-8679c95d76-9dntl" Dec 09 12:30:13 crc kubenswrapper[4970]: I1209 12:30:13.884808 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c97aa48e-1a86-4909-8ec0-62d5599c18ed","Type":"ContainerStarted","Data":"ee40685d477f33024157d8d99f044a2f801db526f7cf2f6cb392ab66af4c3d32"} Dec 09 12:30:13 crc kubenswrapper[4970]: I1209 12:30:13.887578 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67a1a774-21ff-40c6-abe9-07a54c4b3380","Type":"ContainerStarted","Data":"512d41e04e2768697b9127085e9ae06759b28e7334b8e8fb11c7efed2c21a4e4"} Dec 09 12:30:13 crc kubenswrapper[4970]: I1209 12:30:13.922917 4970 scope.go:117] "RemoveContainer" containerID="a715dc7aa464f31deadee24e154cd31a9cd1a9d73088b71939fc1e477617749e" Dec 09 12:30:13 crc kubenswrapper[4970]: E1209 12:30:13.924114 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a715dc7aa464f31deadee24e154cd31a9cd1a9d73088b71939fc1e477617749e\": container with ID starting with a715dc7aa464f31deadee24e154cd31a9cd1a9d73088b71939fc1e477617749e not found: ID does not exist" containerID="a715dc7aa464f31deadee24e154cd31a9cd1a9d73088b71939fc1e477617749e" Dec 09 12:30:13 crc kubenswrapper[4970]: I1209 12:30:13.924170 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a715dc7aa464f31deadee24e154cd31a9cd1a9d73088b71939fc1e477617749e"} err="failed to get container status \"a715dc7aa464f31deadee24e154cd31a9cd1a9d73088b71939fc1e477617749e\": rpc error: code = NotFound desc = could not find container \"a715dc7aa464f31deadee24e154cd31a9cd1a9d73088b71939fc1e477617749e\": container with ID starting with a715dc7aa464f31deadee24e154cd31a9cd1a9d73088b71939fc1e477617749e not found: ID does not exist" Dec 09 12:30:13 crc kubenswrapper[4970]: I1209 12:30:13.929124 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-8679c95d76-9dntl"] Dec 09 12:30:13 crc kubenswrapper[4970]: I1209 12:30:13.942565 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-8679c95d76-9dntl"] Dec 09 12:30:14 crc kubenswrapper[4970]: I1209 12:30:14.902140 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67a1a774-21ff-40c6-abe9-07a54c4b3380","Type":"ContainerStarted","Data":"536942f21382b461538aaf2c7951d070847c4c2a39c868397d6038d7fb469888"} Dec 09 12:30:14 crc kubenswrapper[4970]: I1209 12:30:14.906538 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c97aa48e-1a86-4909-8ec0-62d5599c18ed","Type":"ContainerStarted","Data":"b29a105575394131bab648684871cc380062d6c651778ad4acd083214a013a77"} Dec 09 12:30:14 crc kubenswrapper[4970]: I1209 12:30:14.928531 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.928516077 podStartE2EDuration="4.928516077s" podCreationTimestamp="2025-12-09 12:30:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:30:14.925380174 +0000 UTC m=+1427.485861245" watchObservedRunningTime="2025-12-09 12:30:14.928516077 +0000 UTC m=+1427.488997128" Dec 09 12:30:15 crc kubenswrapper[4970]: I1209 12:30:15.837519 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="dec002e4-c154-4c9b-8e5c-391bd2c2ce8a" path="/var/lib/kubelet/pods/dec002e4-c154-4c9b-8e5c-391bd2c2ce8a/volumes" Dec 09 12:30:17 crc kubenswrapper[4970]: I1209 12:30:17.097174 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:30:19 crc kubenswrapper[4970]: I1209 12:30:19.028343 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 09 12:30:19 crc kubenswrapper[4970]: I1209 12:30:19.028926 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 09 12:30:19 crc kubenswrapper[4970]: I1209 12:30:19.082684 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 09 12:30:19 crc kubenswrapper[4970]: I1209 12:30:19.092822 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 09 12:30:19 crc kubenswrapper[4970]: I1209 12:30:19.973811 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 09 12:30:19 crc kubenswrapper[4970]: I1209 12:30:19.973857 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 09 12:30:20 crc kubenswrapper[4970]: I1209 12:30:20.987379 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-m8kxt" event={"ID":"92e5909b-9f51-4a80-824b-b633efbed63b","Type":"ContainerStarted","Data":"2f7ff6aa9f0e389598fdafe9691fe6bdab04c3382bccad80ae1b2124230428d5"} Dec 09 12:30:21 crc kubenswrapper[4970]: I1209 12:30:21.014094 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-m8kxt" podStartSLOduration=2.828820369 podStartE2EDuration="17.014076266s" podCreationTimestamp="2025-12-09 12:30:04 +0000 UTC" firstStartedPulling="2025-12-09 12:30:06.442421763 +0000 UTC m=+1419.002902814" lastFinishedPulling="2025-12-09 12:30:20.62767766 +0000 UTC m=+1433.188158711" observedRunningTime="2025-12-09 12:30:21.005907949 +0000 UTC m=+1433.566389020" watchObservedRunningTime="2025-12-09 12:30:21.014076266 +0000 UTC m=+1433.574557317" Dec 09 12:30:21 crc kubenswrapper[4970]: I1209 12:30:21.488924 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 09 12:30:21 crc kubenswrapper[4970]: I1209 12:30:21.489233 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 09 12:30:21 crc kubenswrapper[4970]: I1209 12:30:21.528006 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 09 12:30:21 crc kubenswrapper[4970]: I1209 12:30:21.540839 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 09 12:30:22 crc kubenswrapper[4970]: I1209 12:30:22.015367 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67a1a774-21ff-40c6-abe9-07a54c4b3380","Type":"ContainerStarted","Data":"2c89b881ea482b9fb051f81454579c0248a7fb935bbb4171a1e5cd971fa97d51"} Dec 09 12:30:22 crc kubenswrapper[4970]: I1209 12:30:22.015778 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"67a1a774-21ff-40c6-abe9-07a54c4b3380","Type":"ContainerStarted","Data":"cca4d2d176b2246bcfa423ef6fab3fac40f63a41751b5598c23c40efdf105eef"} Dec 09 12:30:22 crc kubenswrapper[4970]: I1209 12:30:22.015797 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 09 12:30:22 crc kubenswrapper[4970]: I1209 12:30:22.015853 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 09 12:30:22 crc kubenswrapper[4970]: I1209 12:30:22.015390 4970 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 12:30:22 crc kubenswrapper[4970]: I1209 12:30:22.015879 4970 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 12:30:24 crc kubenswrapper[4970]: I1209 12:30:24.027845 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 09 12:30:24 crc kubenswrapper[4970]: I1209 12:30:24.028521 4970 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 12:30:24 crc kubenswrapper[4970]: I1209 12:30:24.108284 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 09 12:30:25 crc kubenswrapper[4970]: I1209 12:30:25.047833 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67a1a774-21ff-40c6-abe9-07a54c4b3380","Type":"ContainerStarted","Data":"64118f2e3cce3902d00a64d3a2d45489f25bf4a709cab01ba76a7d490ca50a72"} Dec 09 12:30:25 crc kubenswrapper[4970]: I1209 12:30:25.048052 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="67a1a774-21ff-40c6-abe9-07a54c4b3380" containerName="ceilometer-central-agent" containerID="cri-o://536942f21382b461538aaf2c7951d070847c4c2a39c868397d6038d7fb469888" gracePeriod=30 Dec 09 12:30:25 crc kubenswrapper[4970]: I1209 12:30:25.048082 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="67a1a774-21ff-40c6-abe9-07a54c4b3380" containerName="sg-core" containerID="cri-o://cca4d2d176b2246bcfa423ef6fab3fac40f63a41751b5598c23c40efdf105eef" gracePeriod=30 Dec 09 12:30:25 crc kubenswrapper[4970]: I1209 12:30:25.048074 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="67a1a774-21ff-40c6-abe9-07a54c4b3380" containerName="proxy-httpd" containerID="cri-o://64118f2e3cce3902d00a64d3a2d45489f25bf4a709cab01ba76a7d490ca50a72" gracePeriod=30 Dec 09 12:30:25 crc kubenswrapper[4970]: I1209 12:30:25.048151 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="67a1a774-21ff-40c6-abe9-07a54c4b3380" containerName="ceilometer-notification-agent" containerID="cri-o://2c89b881ea482b9fb051f81454579c0248a7fb935bbb4171a1e5cd971fa97d51" gracePeriod=30 Dec 09 12:30:25 crc kubenswrapper[4970]: I1209 12:30:25.074214 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.3647025729999998 podStartE2EDuration="13.074195879s" podCreationTimestamp="2025-12-09 12:30:12 +0000 UTC" firstStartedPulling="2025-12-09 12:30:13.266130344 +0000 UTC m=+1425.826611385" lastFinishedPulling="2025-12-09 12:30:23.97562364 +0000 UTC m=+1436.536104691" observedRunningTime="2025-12-09 12:30:25.070025328 +0000 UTC m=+1437.630506379" 
watchObservedRunningTime="2025-12-09 12:30:25.074195879 +0000 UTC m=+1437.634676920" Dec 09 12:30:25 crc kubenswrapper[4970]: I1209 12:30:25.208530 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 09 12:30:25 crc kubenswrapper[4970]: I1209 12:30:25.208629 4970 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 12:30:25 crc kubenswrapper[4970]: I1209 12:30:25.411009 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 09 12:30:26 crc kubenswrapper[4970]: I1209 12:30:26.065490 4970 generic.go:334] "Generic (PLEG): container finished" podID="67a1a774-21ff-40c6-abe9-07a54c4b3380" containerID="64118f2e3cce3902d00a64d3a2d45489f25bf4a709cab01ba76a7d490ca50a72" exitCode=0 Dec 09 12:30:26 crc kubenswrapper[4970]: I1209 12:30:26.066009 4970 generic.go:334] "Generic (PLEG): container finished" podID="67a1a774-21ff-40c6-abe9-07a54c4b3380" containerID="cca4d2d176b2246bcfa423ef6fab3fac40f63a41751b5598c23c40efdf105eef" exitCode=2 Dec 09 12:30:26 crc kubenswrapper[4970]: I1209 12:30:26.066018 4970 generic.go:334] "Generic (PLEG): container finished" podID="67a1a774-21ff-40c6-abe9-07a54c4b3380" containerID="2c89b881ea482b9fb051f81454579c0248a7fb935bbb4171a1e5cd971fa97d51" exitCode=0 Dec 09 12:30:26 crc kubenswrapper[4970]: I1209 12:30:26.066026 4970 generic.go:334] "Generic (PLEG): container finished" podID="67a1a774-21ff-40c6-abe9-07a54c4b3380" containerID="536942f21382b461538aaf2c7951d070847c4c2a39c868397d6038d7fb469888" exitCode=0 Dec 09 12:30:26 crc kubenswrapper[4970]: I1209 12:30:26.067088 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67a1a774-21ff-40c6-abe9-07a54c4b3380","Type":"ContainerDied","Data":"64118f2e3cce3902d00a64d3a2d45489f25bf4a709cab01ba76a7d490ca50a72"} Dec 09 12:30:26 crc kubenswrapper[4970]: I1209 12:30:26.067115 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67a1a774-21ff-40c6-abe9-07a54c4b3380","Type":"ContainerDied","Data":"cca4d2d176b2246bcfa423ef6fab3fac40f63a41751b5598c23c40efdf105eef"} Dec 09 12:30:26 crc kubenswrapper[4970]: I1209 12:30:26.067126 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67a1a774-21ff-40c6-abe9-07a54c4b3380","Type":"ContainerDied","Data":"2c89b881ea482b9fb051f81454579c0248a7fb935bbb4171a1e5cd971fa97d51"} Dec 09 12:30:26 crc kubenswrapper[4970]: I1209 12:30:26.067134 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67a1a774-21ff-40c6-abe9-07a54c4b3380","Type":"ContainerDied","Data":"536942f21382b461538aaf2c7951d070847c4c2a39c868397d6038d7fb469888"} Dec 09 12:30:26 crc kubenswrapper[4970]: I1209 12:30:26.067143 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67a1a774-21ff-40c6-abe9-07a54c4b3380","Type":"ContainerDied","Data":"512d41e04e2768697b9127085e9ae06759b28e7334b8e8fb11c7efed2c21a4e4"} Dec 09 12:30:26 crc kubenswrapper[4970]: I1209 12:30:26.067151 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="512d41e04e2768697b9127085e9ae06759b28e7334b8e8fb11c7efed2c21a4e4" Dec 09 12:30:26 crc kubenswrapper[4970]: I1209 12:30:26.112157 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:30:26 crc kubenswrapper[4970]: I1209 12:30:26.258186 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67a1a774-21ff-40c6-abe9-07a54c4b3380-log-httpd\") pod \"67a1a774-21ff-40c6-abe9-07a54c4b3380\" (UID: \"67a1a774-21ff-40c6-abe9-07a54c4b3380\") " Dec 09 12:30:26 crc kubenswrapper[4970]: I1209 12:30:26.258420 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67a1a774-21ff-40c6-abe9-07a54c4b3380-run-httpd\") pod \"67a1a774-21ff-40c6-abe9-07a54c4b3380\" (UID: \"67a1a774-21ff-40c6-abe9-07a54c4b3380\") " Dec 09 12:30:26 crc kubenswrapper[4970]: I1209 12:30:26.258597 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67a1a774-21ff-40c6-abe9-07a54c4b3380-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "67a1a774-21ff-40c6-abe9-07a54c4b3380" (UID: "67a1a774-21ff-40c6-abe9-07a54c4b3380"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:30:26 crc kubenswrapper[4970]: I1209 12:30:26.258602 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdgg8\" (UniqueName: \"kubernetes.io/projected/67a1a774-21ff-40c6-abe9-07a54c4b3380-kube-api-access-jdgg8\") pod \"67a1a774-21ff-40c6-abe9-07a54c4b3380\" (UID: \"67a1a774-21ff-40c6-abe9-07a54c4b3380\") " Dec 09 12:30:26 crc kubenswrapper[4970]: I1209 12:30:26.258656 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/67a1a774-21ff-40c6-abe9-07a54c4b3380-sg-core-conf-yaml\") pod \"67a1a774-21ff-40c6-abe9-07a54c4b3380\" (UID: \"67a1a774-21ff-40c6-abe9-07a54c4b3380\") " Dec 09 12:30:26 crc kubenswrapper[4970]: I1209 12:30:26.258723 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67a1a774-21ff-40c6-abe9-07a54c4b3380-combined-ca-bundle\") pod \"67a1a774-21ff-40c6-abe9-07a54c4b3380\" (UID: \"67a1a774-21ff-40c6-abe9-07a54c4b3380\") " Dec 09 12:30:26 crc kubenswrapper[4970]: I1209 12:30:26.258771 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67a1a774-21ff-40c6-abe9-07a54c4b3380-scripts\") pod \"67a1a774-21ff-40c6-abe9-07a54c4b3380\" (UID: \"67a1a774-21ff-40c6-abe9-07a54c4b3380\") " Dec 09 12:30:26 crc kubenswrapper[4970]: I1209 12:30:26.258799 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67a1a774-21ff-40c6-abe9-07a54c4b3380-config-data\") pod \"67a1a774-21ff-40c6-abe9-07a54c4b3380\" (UID: \"67a1a774-21ff-40c6-abe9-07a54c4b3380\") " Dec 09 12:30:26 crc kubenswrapper[4970]: I1209 12:30:26.259507 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67a1a774-21ff-40c6-abe9-07a54c4b3380-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "67a1a774-21ff-40c6-abe9-07a54c4b3380" (UID: "67a1a774-21ff-40c6-abe9-07a54c4b3380"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:30:26 crc kubenswrapper[4970]: I1209 12:30:26.260073 4970 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67a1a774-21ff-40c6-abe9-07a54c4b3380-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:26 crc kubenswrapper[4970]: I1209 12:30:26.260093 4970 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67a1a774-21ff-40c6-abe9-07a54c4b3380-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:26 crc kubenswrapper[4970]: I1209 12:30:26.264100 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67a1a774-21ff-40c6-abe9-07a54c4b3380-kube-api-access-jdgg8" (OuterVolumeSpecName: "kube-api-access-jdgg8") pod "67a1a774-21ff-40c6-abe9-07a54c4b3380" (UID: "67a1a774-21ff-40c6-abe9-07a54c4b3380"). InnerVolumeSpecName "kube-api-access-jdgg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:30:26 crc kubenswrapper[4970]: I1209 12:30:26.264541 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67a1a774-21ff-40c6-abe9-07a54c4b3380-scripts" (OuterVolumeSpecName: "scripts") pod "67a1a774-21ff-40c6-abe9-07a54c4b3380" (UID: "67a1a774-21ff-40c6-abe9-07a54c4b3380"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:30:26 crc kubenswrapper[4970]: I1209 12:30:26.312868 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67a1a774-21ff-40c6-abe9-07a54c4b3380-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "67a1a774-21ff-40c6-abe9-07a54c4b3380" (UID: "67a1a774-21ff-40c6-abe9-07a54c4b3380"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:30:26 crc kubenswrapper[4970]: I1209 12:30:26.363706 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdgg8\" (UniqueName: \"kubernetes.io/projected/67a1a774-21ff-40c6-abe9-07a54c4b3380-kube-api-access-jdgg8\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:26 crc kubenswrapper[4970]: I1209 12:30:26.363931 4970 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/67a1a774-21ff-40c6-abe9-07a54c4b3380-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:26 crc kubenswrapper[4970]: I1209 12:30:26.363950 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67a1a774-21ff-40c6-abe9-07a54c4b3380-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:26 crc kubenswrapper[4970]: I1209 12:30:26.368072 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67a1a774-21ff-40c6-abe9-07a54c4b3380-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67a1a774-21ff-40c6-abe9-07a54c4b3380" (UID: "67a1a774-21ff-40c6-abe9-07a54c4b3380"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:30:26 crc kubenswrapper[4970]: I1209 12:30:26.434597 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67a1a774-21ff-40c6-abe9-07a54c4b3380-config-data" (OuterVolumeSpecName: "config-data") pod "67a1a774-21ff-40c6-abe9-07a54c4b3380" (UID: "67a1a774-21ff-40c6-abe9-07a54c4b3380"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:30:26 crc kubenswrapper[4970]: I1209 12:30:26.466279 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67a1a774-21ff-40c6-abe9-07a54c4b3380-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:26 crc kubenswrapper[4970]: I1209 12:30:26.466311 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67a1a774-21ff-40c6-abe9-07a54c4b3380-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.078205 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.127604 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.137776 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.175079 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:30:27 crc kubenswrapper[4970]: E1209 12:30:27.175531 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dec002e4-c154-4c9b-8e5c-391bd2c2ce8a" containerName="heat-engine" Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.175548 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="dec002e4-c154-4c9b-8e5c-391bd2c2ce8a" containerName="heat-engine" Dec 09 12:30:27 crc kubenswrapper[4970]: E1209 12:30:27.175572 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67a1a774-21ff-40c6-abe9-07a54c4b3380" containerName="sg-core" Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.175578 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="67a1a774-21ff-40c6-abe9-07a54c4b3380" containerName="sg-core" Dec 09 12:30:27 crc kubenswrapper[4970]: E1209 12:30:27.175610 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67a1a774-21ff-40c6-abe9-07a54c4b3380" containerName="proxy-httpd" Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.175616 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="67a1a774-21ff-40c6-abe9-07a54c4b3380" containerName="proxy-httpd" Dec 09 12:30:27 crc kubenswrapper[4970]: E1209 12:30:27.175627 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67a1a774-21ff-40c6-abe9-07a54c4b3380" containerName="ceilometer-central-agent" Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.175633 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="67a1a774-21ff-40c6-abe9-07a54c4b3380" containerName="ceilometer-central-agent" Dec 09 12:30:27 crc kubenswrapper[4970]: E1209 12:30:27.175645 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67a1a774-21ff-40c6-abe9-07a54c4b3380" containerName="ceilometer-notification-agent" Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.175651 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="67a1a774-21ff-40c6-abe9-07a54c4b3380" containerName="ceilometer-notification-agent" Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.175848 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="67a1a774-21ff-40c6-abe9-07a54c4b3380" containerName="proxy-httpd" Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.175862 4970 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="67a1a774-21ff-40c6-abe9-07a54c4b3380" containerName="sg-core" Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.175878 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="67a1a774-21ff-40c6-abe9-07a54c4b3380" containerName="ceilometer-central-agent" Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.175890 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="dec002e4-c154-4c9b-8e5c-391bd2c2ce8a" containerName="heat-engine" Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.175904 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="67a1a774-21ff-40c6-abe9-07a54c4b3380" containerName="ceilometer-notification-agent" Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.177806 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.184861 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.185224 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.198689 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.283817 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de98ff62-a0ea-4874-a72f-caca52fcdb4e-scripts\") pod \"ceilometer-0\" (UID: \"de98ff62-a0ea-4874-a72f-caca52fcdb4e\") " pod="openstack/ceilometer-0" Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.283957 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de98ff62-a0ea-4874-a72f-caca52fcdb4e-run-httpd\") pod \"ceilometer-0\" (UID: \"de98ff62-a0ea-4874-a72f-caca52fcdb4e\") " pod="openstack/ceilometer-0" Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.284004 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de98ff62-a0ea-4874-a72f-caca52fcdb4e-config-data\") pod \"ceilometer-0\" (UID: \"de98ff62-a0ea-4874-a72f-caca52fcdb4e\") " pod="openstack/ceilometer-0" Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.284100 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de98ff62-a0ea-4874-a72f-caca52fcdb4e-log-httpd\") pod \"ceilometer-0\" (UID: \"de98ff62-a0ea-4874-a72f-caca52fcdb4e\") " pod="openstack/ceilometer-0" Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.284195 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de98ff62-a0ea-4874-a72f-caca52fcdb4e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"de98ff62-a0ea-4874-a72f-caca52fcdb4e\") " pod="openstack/ceilometer-0" Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.284303 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sg4z\" (UniqueName: \"kubernetes.io/projected/de98ff62-a0ea-4874-a72f-caca52fcdb4e-kube-api-access-6sg4z\") pod \"ceilometer-0\" (UID: \"de98ff62-a0ea-4874-a72f-caca52fcdb4e\") " 
pod="openstack/ceilometer-0" Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.284343 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/de98ff62-a0ea-4874-a72f-caca52fcdb4e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"de98ff62-a0ea-4874-a72f-caca52fcdb4e\") " pod="openstack/ceilometer-0" Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.386018 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de98ff62-a0ea-4874-a72f-caca52fcdb4e-scripts\") pod \"ceilometer-0\" (UID: \"de98ff62-a0ea-4874-a72f-caca52fcdb4e\") " pod="openstack/ceilometer-0" Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.386136 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de98ff62-a0ea-4874-a72f-caca52fcdb4e-run-httpd\") pod \"ceilometer-0\" (UID: \"de98ff62-a0ea-4874-a72f-caca52fcdb4e\") " pod="openstack/ceilometer-0" Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.386178 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de98ff62-a0ea-4874-a72f-caca52fcdb4e-config-data\") pod \"ceilometer-0\" (UID: \"de98ff62-a0ea-4874-a72f-caca52fcdb4e\") " pod="openstack/ceilometer-0" Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.386373 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de98ff62-a0ea-4874-a72f-caca52fcdb4e-log-httpd\") pod \"ceilometer-0\" (UID: \"de98ff62-a0ea-4874-a72f-caca52fcdb4e\") " pod="openstack/ceilometer-0" Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.386463 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de98ff62-a0ea-4874-a72f-caca52fcdb4e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"de98ff62-a0ea-4874-a72f-caca52fcdb4e\") " pod="openstack/ceilometer-0" Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.386544 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sg4z\" (UniqueName: \"kubernetes.io/projected/de98ff62-a0ea-4874-a72f-caca52fcdb4e-kube-api-access-6sg4z\") pod \"ceilometer-0\" (UID: \"de98ff62-a0ea-4874-a72f-caca52fcdb4e\") " pod="openstack/ceilometer-0" Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.386584 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/de98ff62-a0ea-4874-a72f-caca52fcdb4e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"de98ff62-a0ea-4874-a72f-caca52fcdb4e\") " pod="openstack/ceilometer-0" Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.386891 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de98ff62-a0ea-4874-a72f-caca52fcdb4e-run-httpd\") pod \"ceilometer-0\" (UID: \"de98ff62-a0ea-4874-a72f-caca52fcdb4e\") " pod="openstack/ceilometer-0" Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.387150 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de98ff62-a0ea-4874-a72f-caca52fcdb4e-log-httpd\") pod \"ceilometer-0\" (UID: \"de98ff62-a0ea-4874-a72f-caca52fcdb4e\") " 
pod="openstack/ceilometer-0" Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.403178 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/de98ff62-a0ea-4874-a72f-caca52fcdb4e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"de98ff62-a0ea-4874-a72f-caca52fcdb4e\") " pod="openstack/ceilometer-0" Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.405927 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de98ff62-a0ea-4874-a72f-caca52fcdb4e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"de98ff62-a0ea-4874-a72f-caca52fcdb4e\") " pod="openstack/ceilometer-0" Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.406117 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de98ff62-a0ea-4874-a72f-caca52fcdb4e-config-data\") pod \"ceilometer-0\" (UID: \"de98ff62-a0ea-4874-a72f-caca52fcdb4e\") " pod="openstack/ceilometer-0" Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.406703 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de98ff62-a0ea-4874-a72f-caca52fcdb4e-scripts\") pod \"ceilometer-0\" (UID: \"de98ff62-a0ea-4874-a72f-caca52fcdb4e\") " pod="openstack/ceilometer-0" Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.413056 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sg4z\" (UniqueName: \"kubernetes.io/projected/de98ff62-a0ea-4874-a72f-caca52fcdb4e-kube-api-access-6sg4z\") pod \"ceilometer-0\" (UID: \"de98ff62-a0ea-4874-a72f-caca52fcdb4e\") " pod="openstack/ceilometer-0" Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.498216 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.858182 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67a1a774-21ff-40c6-abe9-07a54c4b3380" path="/var/lib/kubelet/pods/67a1a774-21ff-40c6-abe9-07a54c4b3380/volumes" Dec 09 12:30:27 crc kubenswrapper[4970]: W1209 12:30:27.990978 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde98ff62_a0ea_4874_a72f_caca52fcdb4e.slice/crio-2019f6a6a1bb740bb097e37931bff0a96a6cf400af9e190552e645049c8a22ea WatchSource:0}: Error finding container 2019f6a6a1bb740bb097e37931bff0a96a6cf400af9e190552e645049c8a22ea: Status 404 returned error can't find the container with id 2019f6a6a1bb740bb097e37931bff0a96a6cf400af9e190552e645049c8a22ea Dec 09 12:30:27 crc kubenswrapper[4970]: I1209 12:30:27.991113 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:30:28 crc kubenswrapper[4970]: I1209 12:30:28.089508 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de98ff62-a0ea-4874-a72f-caca52fcdb4e","Type":"ContainerStarted","Data":"2019f6a6a1bb740bb097e37931bff0a96a6cf400af9e190552e645049c8a22ea"} Dec 09 12:30:29 crc kubenswrapper[4970]: I1209 12:30:29.231585 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:30:30 crc kubenswrapper[4970]: I1209 12:30:30.111324 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de98ff62-a0ea-4874-a72f-caca52fcdb4e","Type":"ContainerStarted","Data":"4a5027dd7bbe6ac31445a2eb2e1f75a0267c9e67220d196bbba174304dec6924"} Dec 09 12:30:31 crc kubenswrapper[4970]: I1209 12:30:31.128694 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de98ff62-a0ea-4874-a72f-caca52fcdb4e","Type":"ContainerStarted","Data":"f1022e8d844a4906df659e038e7848c964f275bd4896c9af501fd58ab3624913"} Dec 09 12:30:32 crc kubenswrapper[4970]: I1209 12:30:32.139801 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de98ff62-a0ea-4874-a72f-caca52fcdb4e","Type":"ContainerStarted","Data":"bd4e31033db666b1e8d967b25db610ed34008ed6946bd3168e0db51e7a891b6b"} Dec 09 12:30:33 crc kubenswrapper[4970]: I1209 12:30:33.153106 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de98ff62-a0ea-4874-a72f-caca52fcdb4e","Type":"ContainerStarted","Data":"9be666b0031d9bae492c28b1a958a4a12b777cccbb6a8681e24ebedda2c240f9"} Dec 09 12:30:33 crc kubenswrapper[4970]: I1209 12:30:33.153650 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 12:30:33 crc kubenswrapper[4970]: I1209 12:30:33.153317 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="de98ff62-a0ea-4874-a72f-caca52fcdb4e" containerName="proxy-httpd" containerID="cri-o://9be666b0031d9bae492c28b1a958a4a12b777cccbb6a8681e24ebedda2c240f9" gracePeriod=30 Dec 09 12:30:33 crc kubenswrapper[4970]: I1209 12:30:33.153261 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="de98ff62-a0ea-4874-a72f-caca52fcdb4e" containerName="ceilometer-central-agent" containerID="cri-o://4a5027dd7bbe6ac31445a2eb2e1f75a0267c9e67220d196bbba174304dec6924" gracePeriod=30 Dec 09 12:30:33 crc kubenswrapper[4970]: I1209 
12:30:33.153386 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="de98ff62-a0ea-4874-a72f-caca52fcdb4e" containerName="ceilometer-notification-agent" containerID="cri-o://f1022e8d844a4906df659e038e7848c964f275bd4896c9af501fd58ab3624913" gracePeriod=30 Dec 09 12:30:33 crc kubenswrapper[4970]: I1209 12:30:33.153328 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="de98ff62-a0ea-4874-a72f-caca52fcdb4e" containerName="sg-core" containerID="cri-o://bd4e31033db666b1e8d967b25db610ed34008ed6946bd3168e0db51e7a891b6b" gracePeriod=30 Dec 09 12:30:33 crc kubenswrapper[4970]: I1209 12:30:33.190606 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.70772704 podStartE2EDuration="6.190571915s" podCreationTimestamp="2025-12-09 12:30:27 +0000 UTC" firstStartedPulling="2025-12-09 12:30:27.993128813 +0000 UTC m=+1440.553609884" lastFinishedPulling="2025-12-09 12:30:32.475973708 +0000 UTC m=+1445.036454759" observedRunningTime="2025-12-09 12:30:33.177609321 +0000 UTC m=+1445.738090392" watchObservedRunningTime="2025-12-09 12:30:33.190571915 +0000 UTC m=+1445.751052966" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.120641 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.176102 4970 generic.go:334] "Generic (PLEG): container finished" podID="de98ff62-a0ea-4874-a72f-caca52fcdb4e" containerID="9be666b0031d9bae492c28b1a958a4a12b777cccbb6a8681e24ebedda2c240f9" exitCode=0 Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.176131 4970 generic.go:334] "Generic (PLEG): container finished" podID="de98ff62-a0ea-4874-a72f-caca52fcdb4e" containerID="bd4e31033db666b1e8d967b25db610ed34008ed6946bd3168e0db51e7a891b6b" exitCode=2 Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.176142 4970 generic.go:334] "Generic (PLEG): container finished" podID="de98ff62-a0ea-4874-a72f-caca52fcdb4e" containerID="f1022e8d844a4906df659e038e7848c964f275bd4896c9af501fd58ab3624913" exitCode=0 Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.176150 4970 generic.go:334] "Generic (PLEG): container finished" podID="de98ff62-a0ea-4874-a72f-caca52fcdb4e" containerID="4a5027dd7bbe6ac31445a2eb2e1f75a0267c9e67220d196bbba174304dec6924" exitCode=0 Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.176169 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de98ff62-a0ea-4874-a72f-caca52fcdb4e","Type":"ContainerDied","Data":"9be666b0031d9bae492c28b1a958a4a12b777cccbb6a8681e24ebedda2c240f9"} Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.176195 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de98ff62-a0ea-4874-a72f-caca52fcdb4e","Type":"ContainerDied","Data":"bd4e31033db666b1e8d967b25db610ed34008ed6946bd3168e0db51e7a891b6b"} Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.176205 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de98ff62-a0ea-4874-a72f-caca52fcdb4e","Type":"ContainerDied","Data":"f1022e8d844a4906df659e038e7848c964f275bd4896c9af501fd58ab3624913"} Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.176214 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"de98ff62-a0ea-4874-a72f-caca52fcdb4e","Type":"ContainerDied","Data":"4a5027dd7bbe6ac31445a2eb2e1f75a0267c9e67220d196bbba174304dec6924"} Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.176225 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"de98ff62-a0ea-4874-a72f-caca52fcdb4e","Type":"ContainerDied","Data":"2019f6a6a1bb740bb097e37931bff0a96a6cf400af9e190552e645049c8a22ea"} Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.176256 4970 scope.go:117] "RemoveContainer" containerID="9be666b0031d9bae492c28b1a958a4a12b777cccbb6a8681e24ebedda2c240f9" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.176456 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.221241 4970 scope.go:117] "RemoveContainer" containerID="bd4e31033db666b1e8d967b25db610ed34008ed6946bd3168e0db51e7a891b6b" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.243822 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de98ff62-a0ea-4874-a72f-caca52fcdb4e-log-httpd\") pod \"de98ff62-a0ea-4874-a72f-caca52fcdb4e\" (UID: \"de98ff62-a0ea-4874-a72f-caca52fcdb4e\") " Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.243882 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/de98ff62-a0ea-4874-a72f-caca52fcdb4e-sg-core-conf-yaml\") pod \"de98ff62-a0ea-4874-a72f-caca52fcdb4e\" (UID: \"de98ff62-a0ea-4874-a72f-caca52fcdb4e\") " Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.243959 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de98ff62-a0ea-4874-a72f-caca52fcdb4e-combined-ca-bundle\") pod \"de98ff62-a0ea-4874-a72f-caca52fcdb4e\" (UID: \"de98ff62-a0ea-4874-a72f-caca52fcdb4e\") " Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.243987 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sg4z\" (UniqueName: \"kubernetes.io/projected/de98ff62-a0ea-4874-a72f-caca52fcdb4e-kube-api-access-6sg4z\") pod \"de98ff62-a0ea-4874-a72f-caca52fcdb4e\" (UID: \"de98ff62-a0ea-4874-a72f-caca52fcdb4e\") " Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.244034 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de98ff62-a0ea-4874-a72f-caca52fcdb4e-run-httpd\") pod \"de98ff62-a0ea-4874-a72f-caca52fcdb4e\" (UID: \"de98ff62-a0ea-4874-a72f-caca52fcdb4e\") " Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.244198 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de98ff62-a0ea-4874-a72f-caca52fcdb4e-config-data\") pod \"de98ff62-a0ea-4874-a72f-caca52fcdb4e\" (UID: \"de98ff62-a0ea-4874-a72f-caca52fcdb4e\") " Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.244226 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de98ff62-a0ea-4874-a72f-caca52fcdb4e-scripts\") pod \"de98ff62-a0ea-4874-a72f-caca52fcdb4e\" (UID: \"de98ff62-a0ea-4874-a72f-caca52fcdb4e\") " Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.245487 4970 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/de98ff62-a0ea-4874-a72f-caca52fcdb4e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "de98ff62-a0ea-4874-a72f-caca52fcdb4e" (UID: "de98ff62-a0ea-4874-a72f-caca52fcdb4e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.245968 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de98ff62-a0ea-4874-a72f-caca52fcdb4e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "de98ff62-a0ea-4874-a72f-caca52fcdb4e" (UID: "de98ff62-a0ea-4874-a72f-caca52fcdb4e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.248466 4970 scope.go:117] "RemoveContainer" containerID="f1022e8d844a4906df659e038e7848c964f275bd4896c9af501fd58ab3624913" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.264031 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de98ff62-a0ea-4874-a72f-caca52fcdb4e-kube-api-access-6sg4z" (OuterVolumeSpecName: "kube-api-access-6sg4z") pod "de98ff62-a0ea-4874-a72f-caca52fcdb4e" (UID: "de98ff62-a0ea-4874-a72f-caca52fcdb4e"). InnerVolumeSpecName "kube-api-access-6sg4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.272335 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de98ff62-a0ea-4874-a72f-caca52fcdb4e-scripts" (OuterVolumeSpecName: "scripts") pod "de98ff62-a0ea-4874-a72f-caca52fcdb4e" (UID: "de98ff62-a0ea-4874-a72f-caca52fcdb4e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.282398 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de98ff62-a0ea-4874-a72f-caca52fcdb4e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "de98ff62-a0ea-4874-a72f-caca52fcdb4e" (UID: "de98ff62-a0ea-4874-a72f-caca52fcdb4e"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.319845 4970 scope.go:117] "RemoveContainer" containerID="4a5027dd7bbe6ac31445a2eb2e1f75a0267c9e67220d196bbba174304dec6924" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.331935 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de98ff62-a0ea-4874-a72f-caca52fcdb4e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de98ff62-a0ea-4874-a72f-caca52fcdb4e" (UID: "de98ff62-a0ea-4874-a72f-caca52fcdb4e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.346875 4970 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de98ff62-a0ea-4874-a72f-caca52fcdb4e-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.346905 4970 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/de98ff62-a0ea-4874-a72f-caca52fcdb4e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.346915 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de98ff62-a0ea-4874-a72f-caca52fcdb4e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.346925 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sg4z\" (UniqueName: \"kubernetes.io/projected/de98ff62-a0ea-4874-a72f-caca52fcdb4e-kube-api-access-6sg4z\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.346940 4970 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/de98ff62-a0ea-4874-a72f-caca52fcdb4e-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.346948 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/de98ff62-a0ea-4874-a72f-caca52fcdb4e-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.347116 4970 scope.go:117] "RemoveContainer" containerID="9be666b0031d9bae492c28b1a958a4a12b777cccbb6a8681e24ebedda2c240f9" Dec 09 12:30:34 crc kubenswrapper[4970]: E1209 12:30:34.347515 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9be666b0031d9bae492c28b1a958a4a12b777cccbb6a8681e24ebedda2c240f9\": container with ID starting with 9be666b0031d9bae492c28b1a958a4a12b777cccbb6a8681e24ebedda2c240f9 not found: ID does not exist" containerID="9be666b0031d9bae492c28b1a958a4a12b777cccbb6a8681e24ebedda2c240f9" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.347547 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9be666b0031d9bae492c28b1a958a4a12b777cccbb6a8681e24ebedda2c240f9"} err="failed to get container status \"9be666b0031d9bae492c28b1a958a4a12b777cccbb6a8681e24ebedda2c240f9\": rpc error: code = NotFound desc = could not find container \"9be666b0031d9bae492c28b1a958a4a12b777cccbb6a8681e24ebedda2c240f9\": container with ID starting with 9be666b0031d9bae492c28b1a958a4a12b777cccbb6a8681e24ebedda2c240f9 not found: ID does not exist" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.347567 4970 scope.go:117] "RemoveContainer" containerID="bd4e31033db666b1e8d967b25db610ed34008ed6946bd3168e0db51e7a891b6b" Dec 09 12:30:34 crc kubenswrapper[4970]: E1209 12:30:34.348032 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd4e31033db666b1e8d967b25db610ed34008ed6946bd3168e0db51e7a891b6b\": container with ID starting with bd4e31033db666b1e8d967b25db610ed34008ed6946bd3168e0db51e7a891b6b not found: ID does not exist" containerID="bd4e31033db666b1e8d967b25db610ed34008ed6946bd3168e0db51e7a891b6b" Dec 09 12:30:34 crc 
kubenswrapper[4970]: I1209 12:30:34.348054 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd4e31033db666b1e8d967b25db610ed34008ed6946bd3168e0db51e7a891b6b"} err="failed to get container status \"bd4e31033db666b1e8d967b25db610ed34008ed6946bd3168e0db51e7a891b6b\": rpc error: code = NotFound desc = could not find container \"bd4e31033db666b1e8d967b25db610ed34008ed6946bd3168e0db51e7a891b6b\": container with ID starting with bd4e31033db666b1e8d967b25db610ed34008ed6946bd3168e0db51e7a891b6b not found: ID does not exist" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.348069 4970 scope.go:117] "RemoveContainer" containerID="f1022e8d844a4906df659e038e7848c964f275bd4896c9af501fd58ab3624913" Dec 09 12:30:34 crc kubenswrapper[4970]: E1209 12:30:34.349014 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1022e8d844a4906df659e038e7848c964f275bd4896c9af501fd58ab3624913\": container with ID starting with f1022e8d844a4906df659e038e7848c964f275bd4896c9af501fd58ab3624913 not found: ID does not exist" containerID="f1022e8d844a4906df659e038e7848c964f275bd4896c9af501fd58ab3624913" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.349045 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1022e8d844a4906df659e038e7848c964f275bd4896c9af501fd58ab3624913"} err="failed to get container status \"f1022e8d844a4906df659e038e7848c964f275bd4896c9af501fd58ab3624913\": rpc error: code = NotFound desc = could not find container \"f1022e8d844a4906df659e038e7848c964f275bd4896c9af501fd58ab3624913\": container with ID starting with f1022e8d844a4906df659e038e7848c964f275bd4896c9af501fd58ab3624913 not found: ID does not exist" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.349062 4970 scope.go:117] "RemoveContainer" containerID="4a5027dd7bbe6ac31445a2eb2e1f75a0267c9e67220d196bbba174304dec6924" Dec 09 12:30:34 crc kubenswrapper[4970]: E1209 12:30:34.349312 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a5027dd7bbe6ac31445a2eb2e1f75a0267c9e67220d196bbba174304dec6924\": container with ID starting with 4a5027dd7bbe6ac31445a2eb2e1f75a0267c9e67220d196bbba174304dec6924 not found: ID does not exist" containerID="4a5027dd7bbe6ac31445a2eb2e1f75a0267c9e67220d196bbba174304dec6924" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.349331 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a5027dd7bbe6ac31445a2eb2e1f75a0267c9e67220d196bbba174304dec6924"} err="failed to get container status \"4a5027dd7bbe6ac31445a2eb2e1f75a0267c9e67220d196bbba174304dec6924\": rpc error: code = NotFound desc = could not find container \"4a5027dd7bbe6ac31445a2eb2e1f75a0267c9e67220d196bbba174304dec6924\": container with ID starting with 4a5027dd7bbe6ac31445a2eb2e1f75a0267c9e67220d196bbba174304dec6924 not found: ID does not exist" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.349344 4970 scope.go:117] "RemoveContainer" containerID="9be666b0031d9bae492c28b1a958a4a12b777cccbb6a8681e24ebedda2c240f9" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.349629 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9be666b0031d9bae492c28b1a958a4a12b777cccbb6a8681e24ebedda2c240f9"} err="failed to get container status 
\"9be666b0031d9bae492c28b1a958a4a12b777cccbb6a8681e24ebedda2c240f9\": rpc error: code = NotFound desc = could not find container \"9be666b0031d9bae492c28b1a958a4a12b777cccbb6a8681e24ebedda2c240f9\": container with ID starting with 9be666b0031d9bae492c28b1a958a4a12b777cccbb6a8681e24ebedda2c240f9 not found: ID does not exist" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.349653 4970 scope.go:117] "RemoveContainer" containerID="bd4e31033db666b1e8d967b25db610ed34008ed6946bd3168e0db51e7a891b6b" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.349918 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd4e31033db666b1e8d967b25db610ed34008ed6946bd3168e0db51e7a891b6b"} err="failed to get container status \"bd4e31033db666b1e8d967b25db610ed34008ed6946bd3168e0db51e7a891b6b\": rpc error: code = NotFound desc = could not find container \"bd4e31033db666b1e8d967b25db610ed34008ed6946bd3168e0db51e7a891b6b\": container with ID starting with bd4e31033db666b1e8d967b25db610ed34008ed6946bd3168e0db51e7a891b6b not found: ID does not exist" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.349944 4970 scope.go:117] "RemoveContainer" containerID="f1022e8d844a4906df659e038e7848c964f275bd4896c9af501fd58ab3624913" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.350202 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1022e8d844a4906df659e038e7848c964f275bd4896c9af501fd58ab3624913"} err="failed to get container status \"f1022e8d844a4906df659e038e7848c964f275bd4896c9af501fd58ab3624913\": rpc error: code = NotFound desc = could not find container \"f1022e8d844a4906df659e038e7848c964f275bd4896c9af501fd58ab3624913\": container with ID starting with f1022e8d844a4906df659e038e7848c964f275bd4896c9af501fd58ab3624913 not found: ID does not exist" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.350225 4970 scope.go:117] "RemoveContainer" containerID="4a5027dd7bbe6ac31445a2eb2e1f75a0267c9e67220d196bbba174304dec6924" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.350503 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a5027dd7bbe6ac31445a2eb2e1f75a0267c9e67220d196bbba174304dec6924"} err="failed to get container status \"4a5027dd7bbe6ac31445a2eb2e1f75a0267c9e67220d196bbba174304dec6924\": rpc error: code = NotFound desc = could not find container \"4a5027dd7bbe6ac31445a2eb2e1f75a0267c9e67220d196bbba174304dec6924\": container with ID starting with 4a5027dd7bbe6ac31445a2eb2e1f75a0267c9e67220d196bbba174304dec6924 not found: ID does not exist" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.350524 4970 scope.go:117] "RemoveContainer" containerID="9be666b0031d9bae492c28b1a958a4a12b777cccbb6a8681e24ebedda2c240f9" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.350743 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9be666b0031d9bae492c28b1a958a4a12b777cccbb6a8681e24ebedda2c240f9"} err="failed to get container status \"9be666b0031d9bae492c28b1a958a4a12b777cccbb6a8681e24ebedda2c240f9\": rpc error: code = NotFound desc = could not find container \"9be666b0031d9bae492c28b1a958a4a12b777cccbb6a8681e24ebedda2c240f9\": container with ID starting with 9be666b0031d9bae492c28b1a958a4a12b777cccbb6a8681e24ebedda2c240f9 not found: ID does not exist" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.350768 4970 scope.go:117] "RemoveContainer" 
containerID="bd4e31033db666b1e8d967b25db610ed34008ed6946bd3168e0db51e7a891b6b" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.351027 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd4e31033db666b1e8d967b25db610ed34008ed6946bd3168e0db51e7a891b6b"} err="failed to get container status \"bd4e31033db666b1e8d967b25db610ed34008ed6946bd3168e0db51e7a891b6b\": rpc error: code = NotFound desc = could not find container \"bd4e31033db666b1e8d967b25db610ed34008ed6946bd3168e0db51e7a891b6b\": container with ID starting with bd4e31033db666b1e8d967b25db610ed34008ed6946bd3168e0db51e7a891b6b not found: ID does not exist" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.351046 4970 scope.go:117] "RemoveContainer" containerID="f1022e8d844a4906df659e038e7848c964f275bd4896c9af501fd58ab3624913" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.351262 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1022e8d844a4906df659e038e7848c964f275bd4896c9af501fd58ab3624913"} err="failed to get container status \"f1022e8d844a4906df659e038e7848c964f275bd4896c9af501fd58ab3624913\": rpc error: code = NotFound desc = could not find container \"f1022e8d844a4906df659e038e7848c964f275bd4896c9af501fd58ab3624913\": container with ID starting with f1022e8d844a4906df659e038e7848c964f275bd4896c9af501fd58ab3624913 not found: ID does not exist" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.351284 4970 scope.go:117] "RemoveContainer" containerID="4a5027dd7bbe6ac31445a2eb2e1f75a0267c9e67220d196bbba174304dec6924" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.351476 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a5027dd7bbe6ac31445a2eb2e1f75a0267c9e67220d196bbba174304dec6924"} err="failed to get container status \"4a5027dd7bbe6ac31445a2eb2e1f75a0267c9e67220d196bbba174304dec6924\": rpc error: code = NotFound desc = could not find container \"4a5027dd7bbe6ac31445a2eb2e1f75a0267c9e67220d196bbba174304dec6924\": container with ID starting with 4a5027dd7bbe6ac31445a2eb2e1f75a0267c9e67220d196bbba174304dec6924 not found: ID does not exist" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.351495 4970 scope.go:117] "RemoveContainer" containerID="9be666b0031d9bae492c28b1a958a4a12b777cccbb6a8681e24ebedda2c240f9" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.351679 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9be666b0031d9bae492c28b1a958a4a12b777cccbb6a8681e24ebedda2c240f9"} err="failed to get container status \"9be666b0031d9bae492c28b1a958a4a12b777cccbb6a8681e24ebedda2c240f9\": rpc error: code = NotFound desc = could not find container \"9be666b0031d9bae492c28b1a958a4a12b777cccbb6a8681e24ebedda2c240f9\": container with ID starting with 9be666b0031d9bae492c28b1a958a4a12b777cccbb6a8681e24ebedda2c240f9 not found: ID does not exist" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.351695 4970 scope.go:117] "RemoveContainer" containerID="bd4e31033db666b1e8d967b25db610ed34008ed6946bd3168e0db51e7a891b6b" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.351883 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd4e31033db666b1e8d967b25db610ed34008ed6946bd3168e0db51e7a891b6b"} err="failed to get container status \"bd4e31033db666b1e8d967b25db610ed34008ed6946bd3168e0db51e7a891b6b\": rpc error: code = NotFound desc = could not find 
container \"bd4e31033db666b1e8d967b25db610ed34008ed6946bd3168e0db51e7a891b6b\": container with ID starting with bd4e31033db666b1e8d967b25db610ed34008ed6946bd3168e0db51e7a891b6b not found: ID does not exist" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.351900 4970 scope.go:117] "RemoveContainer" containerID="f1022e8d844a4906df659e038e7848c964f275bd4896c9af501fd58ab3624913" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.352085 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1022e8d844a4906df659e038e7848c964f275bd4896c9af501fd58ab3624913"} err="failed to get container status \"f1022e8d844a4906df659e038e7848c964f275bd4896c9af501fd58ab3624913\": rpc error: code = NotFound desc = could not find container \"f1022e8d844a4906df659e038e7848c964f275bd4896c9af501fd58ab3624913\": container with ID starting with f1022e8d844a4906df659e038e7848c964f275bd4896c9af501fd58ab3624913 not found: ID does not exist" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.352100 4970 scope.go:117] "RemoveContainer" containerID="4a5027dd7bbe6ac31445a2eb2e1f75a0267c9e67220d196bbba174304dec6924" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.352456 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a5027dd7bbe6ac31445a2eb2e1f75a0267c9e67220d196bbba174304dec6924"} err="failed to get container status \"4a5027dd7bbe6ac31445a2eb2e1f75a0267c9e67220d196bbba174304dec6924\": rpc error: code = NotFound desc = could not find container \"4a5027dd7bbe6ac31445a2eb2e1f75a0267c9e67220d196bbba174304dec6924\": container with ID starting with 4a5027dd7bbe6ac31445a2eb2e1f75a0267c9e67220d196bbba174304dec6924 not found: ID does not exist" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.365301 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de98ff62-a0ea-4874-a72f-caca52fcdb4e-config-data" (OuterVolumeSpecName: "config-data") pod "de98ff62-a0ea-4874-a72f-caca52fcdb4e" (UID: "de98ff62-a0ea-4874-a72f-caca52fcdb4e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.449120 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de98ff62-a0ea-4874-a72f-caca52fcdb4e-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.518905 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.535471 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.547638 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:30:34 crc kubenswrapper[4970]: E1209 12:30:34.548198 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de98ff62-a0ea-4874-a72f-caca52fcdb4e" containerName="sg-core" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.548217 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="de98ff62-a0ea-4874-a72f-caca52fcdb4e" containerName="sg-core" Dec 09 12:30:34 crc kubenswrapper[4970]: E1209 12:30:34.548257 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de98ff62-a0ea-4874-a72f-caca52fcdb4e" containerName="ceilometer-notification-agent" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.548265 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="de98ff62-a0ea-4874-a72f-caca52fcdb4e" containerName="ceilometer-notification-agent" Dec 09 12:30:34 crc kubenswrapper[4970]: E1209 12:30:34.548303 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de98ff62-a0ea-4874-a72f-caca52fcdb4e" containerName="ceilometer-central-agent" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.548315 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="de98ff62-a0ea-4874-a72f-caca52fcdb4e" containerName="ceilometer-central-agent" Dec 09 12:30:34 crc kubenswrapper[4970]: E1209 12:30:34.548329 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de98ff62-a0ea-4874-a72f-caca52fcdb4e" containerName="proxy-httpd" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.548335 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="de98ff62-a0ea-4874-a72f-caca52fcdb4e" containerName="proxy-httpd" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.548534 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="de98ff62-a0ea-4874-a72f-caca52fcdb4e" containerName="ceilometer-central-agent" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.548548 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="de98ff62-a0ea-4874-a72f-caca52fcdb4e" containerName="sg-core" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.548562 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="de98ff62-a0ea-4874-a72f-caca52fcdb4e" containerName="proxy-httpd" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.548573 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="de98ff62-a0ea-4874-a72f-caca52fcdb4e" containerName="ceilometer-notification-agent" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.550722 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.556917 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.557062 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.558556 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.652504 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88c6c852-5b17-4dac-b7de-22891a28d17a-scripts\") pod \"ceilometer-0\" (UID: \"88c6c852-5b17-4dac-b7de-22891a28d17a\") " pod="openstack/ceilometer-0" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.652577 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88c6c852-5b17-4dac-b7de-22891a28d17a-log-httpd\") pod \"ceilometer-0\" (UID: \"88c6c852-5b17-4dac-b7de-22891a28d17a\") " pod="openstack/ceilometer-0" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.652630 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88c6c852-5b17-4dac-b7de-22891a28d17a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"88c6c852-5b17-4dac-b7de-22891a28d17a\") " pod="openstack/ceilometer-0" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.652669 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88c6c852-5b17-4dac-b7de-22891a28d17a-config-data\") pod \"ceilometer-0\" (UID: \"88c6c852-5b17-4dac-b7de-22891a28d17a\") " pod="openstack/ceilometer-0" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.652687 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88c6c852-5b17-4dac-b7de-22891a28d17a-run-httpd\") pod \"ceilometer-0\" (UID: \"88c6c852-5b17-4dac-b7de-22891a28d17a\") " pod="openstack/ceilometer-0" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.652763 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88c6c852-5b17-4dac-b7de-22891a28d17a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"88c6c852-5b17-4dac-b7de-22891a28d17a\") " pod="openstack/ceilometer-0" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.652781 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57ln4\" (UniqueName: \"kubernetes.io/projected/88c6c852-5b17-4dac-b7de-22891a28d17a-kube-api-access-57ln4\") pod \"ceilometer-0\" (UID: \"88c6c852-5b17-4dac-b7de-22891a28d17a\") " pod="openstack/ceilometer-0" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.754669 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88c6c852-5b17-4dac-b7de-22891a28d17a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"88c6c852-5b17-4dac-b7de-22891a28d17a\") " pod="openstack/ceilometer-0" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 
12:30:34.754765 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88c6c852-5b17-4dac-b7de-22891a28d17a-config-data\") pod \"ceilometer-0\" (UID: \"88c6c852-5b17-4dac-b7de-22891a28d17a\") " pod="openstack/ceilometer-0" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.754803 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88c6c852-5b17-4dac-b7de-22891a28d17a-run-httpd\") pod \"ceilometer-0\" (UID: \"88c6c852-5b17-4dac-b7de-22891a28d17a\") " pod="openstack/ceilometer-0" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.754923 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88c6c852-5b17-4dac-b7de-22891a28d17a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"88c6c852-5b17-4dac-b7de-22891a28d17a\") " pod="openstack/ceilometer-0" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.754945 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57ln4\" (UniqueName: \"kubernetes.io/projected/88c6c852-5b17-4dac-b7de-22891a28d17a-kube-api-access-57ln4\") pod \"ceilometer-0\" (UID: \"88c6c852-5b17-4dac-b7de-22891a28d17a\") " pod="openstack/ceilometer-0" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.755003 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88c6c852-5b17-4dac-b7de-22891a28d17a-scripts\") pod \"ceilometer-0\" (UID: \"88c6c852-5b17-4dac-b7de-22891a28d17a\") " pod="openstack/ceilometer-0" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.755069 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88c6c852-5b17-4dac-b7de-22891a28d17a-log-httpd\") pod \"ceilometer-0\" (UID: \"88c6c852-5b17-4dac-b7de-22891a28d17a\") " pod="openstack/ceilometer-0" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.755897 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88c6c852-5b17-4dac-b7de-22891a28d17a-log-httpd\") pod \"ceilometer-0\" (UID: \"88c6c852-5b17-4dac-b7de-22891a28d17a\") " pod="openstack/ceilometer-0" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.756577 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88c6c852-5b17-4dac-b7de-22891a28d17a-run-httpd\") pod \"ceilometer-0\" (UID: \"88c6c852-5b17-4dac-b7de-22891a28d17a\") " pod="openstack/ceilometer-0" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.762920 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88c6c852-5b17-4dac-b7de-22891a28d17a-scripts\") pod \"ceilometer-0\" (UID: \"88c6c852-5b17-4dac-b7de-22891a28d17a\") " pod="openstack/ceilometer-0" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.763177 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88c6c852-5b17-4dac-b7de-22891a28d17a-config-data\") pod \"ceilometer-0\" (UID: \"88c6c852-5b17-4dac-b7de-22891a28d17a\") " pod="openstack/ceilometer-0" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.763686 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88c6c852-5b17-4dac-b7de-22891a28d17a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"88c6c852-5b17-4dac-b7de-22891a28d17a\") " pod="openstack/ceilometer-0" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.763872 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88c6c852-5b17-4dac-b7de-22891a28d17a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"88c6c852-5b17-4dac-b7de-22891a28d17a\") " pod="openstack/ceilometer-0" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.780183 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57ln4\" (UniqueName: \"kubernetes.io/projected/88c6c852-5b17-4dac-b7de-22891a28d17a-kube-api-access-57ln4\") pod \"ceilometer-0\" (UID: \"88c6c852-5b17-4dac-b7de-22891a28d17a\") " pod="openstack/ceilometer-0" Dec 09 12:30:34 crc kubenswrapper[4970]: I1209 12:30:34.959352 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:30:35 crc kubenswrapper[4970]: I1209 12:30:35.202188 4970 generic.go:334] "Generic (PLEG): container finished" podID="92e5909b-9f51-4a80-824b-b633efbed63b" containerID="2f7ff6aa9f0e389598fdafe9691fe6bdab04c3382bccad80ae1b2124230428d5" exitCode=0 Dec 09 12:30:35 crc kubenswrapper[4970]: I1209 12:30:35.202578 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-m8kxt" event={"ID":"92e5909b-9f51-4a80-824b-b633efbed63b","Type":"ContainerDied","Data":"2f7ff6aa9f0e389598fdafe9691fe6bdab04c3382bccad80ae1b2124230428d5"} Dec 09 12:30:35 crc kubenswrapper[4970]: I1209 12:30:35.588390 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:30:35 crc kubenswrapper[4970]: I1209 12:30:35.825745 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de98ff62-a0ea-4874-a72f-caca52fcdb4e" path="/var/lib/kubelet/pods/de98ff62-a0ea-4874-a72f-caca52fcdb4e/volumes" Dec 09 12:30:36 crc kubenswrapper[4970]: I1209 12:30:36.217566 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88c6c852-5b17-4dac-b7de-22891a28d17a","Type":"ContainerStarted","Data":"dd21f3671bbd7361d2d79fdd850f5f5726b25d5a40ade7fbda729c8fc4ebbb0e"} Dec 09 12:30:36 crc kubenswrapper[4970]: I1209 12:30:36.583838 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-m8kxt" Dec 09 12:30:36 crc kubenswrapper[4970]: I1209 12:30:36.708883 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92e5909b-9f51-4a80-824b-b633efbed63b-combined-ca-bundle\") pod \"92e5909b-9f51-4a80-824b-b633efbed63b\" (UID: \"92e5909b-9f51-4a80-824b-b633efbed63b\") " Dec 09 12:30:36 crc kubenswrapper[4970]: I1209 12:30:36.708944 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqzfk\" (UniqueName: \"kubernetes.io/projected/92e5909b-9f51-4a80-824b-b633efbed63b-kube-api-access-zqzfk\") pod \"92e5909b-9f51-4a80-824b-b633efbed63b\" (UID: \"92e5909b-9f51-4a80-824b-b633efbed63b\") " Dec 09 12:30:36 crc kubenswrapper[4970]: I1209 12:30:36.709054 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92e5909b-9f51-4a80-824b-b633efbed63b-scripts\") pod \"92e5909b-9f51-4a80-824b-b633efbed63b\" (UID: \"92e5909b-9f51-4a80-824b-b633efbed63b\") " Dec 09 12:30:36 crc kubenswrapper[4970]: I1209 12:30:36.709090 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92e5909b-9f51-4a80-824b-b633efbed63b-config-data\") pod \"92e5909b-9f51-4a80-824b-b633efbed63b\" (UID: \"92e5909b-9f51-4a80-824b-b633efbed63b\") " Dec 09 12:30:36 crc kubenswrapper[4970]: I1209 12:30:36.738438 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92e5909b-9f51-4a80-824b-b633efbed63b-scripts" (OuterVolumeSpecName: "scripts") pod "92e5909b-9f51-4a80-824b-b633efbed63b" (UID: "92e5909b-9f51-4a80-824b-b633efbed63b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:30:36 crc kubenswrapper[4970]: I1209 12:30:36.742841 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92e5909b-9f51-4a80-824b-b633efbed63b-kube-api-access-zqzfk" (OuterVolumeSpecName: "kube-api-access-zqzfk") pod "92e5909b-9f51-4a80-824b-b633efbed63b" (UID: "92e5909b-9f51-4a80-824b-b633efbed63b"). InnerVolumeSpecName "kube-api-access-zqzfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:30:36 crc kubenswrapper[4970]: I1209 12:30:36.764600 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92e5909b-9f51-4a80-824b-b633efbed63b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92e5909b-9f51-4a80-824b-b633efbed63b" (UID: "92e5909b-9f51-4a80-824b-b633efbed63b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:30:36 crc kubenswrapper[4970]: I1209 12:30:36.812443 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqzfk\" (UniqueName: \"kubernetes.io/projected/92e5909b-9f51-4a80-824b-b633efbed63b-kube-api-access-zqzfk\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:36 crc kubenswrapper[4970]: I1209 12:30:36.812477 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/92e5909b-9f51-4a80-824b-b633efbed63b-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:36 crc kubenswrapper[4970]: I1209 12:30:36.812489 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92e5909b-9f51-4a80-824b-b633efbed63b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:36 crc kubenswrapper[4970]: I1209 12:30:36.817379 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92e5909b-9f51-4a80-824b-b633efbed63b-config-data" (OuterVolumeSpecName: "config-data") pod "92e5909b-9f51-4a80-824b-b633efbed63b" (UID: "92e5909b-9f51-4a80-824b-b633efbed63b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:30:36 crc kubenswrapper[4970]: I1209 12:30:36.914562 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92e5909b-9f51-4a80-824b-b633efbed63b-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:37 crc kubenswrapper[4970]: I1209 12:30:37.229483 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88c6c852-5b17-4dac-b7de-22891a28d17a","Type":"ContainerStarted","Data":"4114a93abdb98271dc7c6575c3585cf3d9609a45fe03068ffb227034368c9745"} Dec 09 12:30:37 crc kubenswrapper[4970]: I1209 12:30:37.229533 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88c6c852-5b17-4dac-b7de-22891a28d17a","Type":"ContainerStarted","Data":"a875bbea325c61ccc9feba304fe4fc902a8c4782c7715a8fe97cee1972d56da9"} Dec 09 12:30:37 crc kubenswrapper[4970]: I1209 12:30:37.232594 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-m8kxt" event={"ID":"92e5909b-9f51-4a80-824b-b633efbed63b","Type":"ContainerDied","Data":"6df1286eead60e7e22637f20aa7d0cf97f0039a6a1379ad9ffe89479bcef8e2d"} Dec 09 12:30:37 crc kubenswrapper[4970]: I1209 12:30:37.232636 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6df1286eead60e7e22637f20aa7d0cf97f0039a6a1379ad9ffe89479bcef8e2d" Dec 09 12:30:37 crc kubenswrapper[4970]: I1209 12:30:37.232674 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-m8kxt" Dec 09 12:30:37 crc kubenswrapper[4970]: I1209 12:30:37.344734 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 09 12:30:37 crc kubenswrapper[4970]: E1209 12:30:37.345422 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92e5909b-9f51-4a80-824b-b633efbed63b" containerName="nova-cell0-conductor-db-sync" Dec 09 12:30:37 crc kubenswrapper[4970]: I1209 12:30:37.345495 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="92e5909b-9f51-4a80-824b-b633efbed63b" containerName="nova-cell0-conductor-db-sync" Dec 09 12:30:37 crc kubenswrapper[4970]: I1209 12:30:37.345748 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="92e5909b-9f51-4a80-824b-b633efbed63b" containerName="nova-cell0-conductor-db-sync" Dec 09 12:30:37 crc kubenswrapper[4970]: I1209 12:30:37.346568 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 09 12:30:37 crc kubenswrapper[4970]: I1209 12:30:37.351000 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-vrvng" Dec 09 12:30:37 crc kubenswrapper[4970]: I1209 12:30:37.351454 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 09 12:30:37 crc kubenswrapper[4970]: I1209 12:30:37.363859 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 09 12:30:37 crc kubenswrapper[4970]: I1209 12:30:37.428918 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/424dcec5-0989-4b2e-8435-a2767dba2505-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"424dcec5-0989-4b2e-8435-a2767dba2505\") " pod="openstack/nova-cell0-conductor-0" Dec 09 12:30:37 crc kubenswrapper[4970]: I1209 12:30:37.429490 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/424dcec5-0989-4b2e-8435-a2767dba2505-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"424dcec5-0989-4b2e-8435-a2767dba2505\") " pod="openstack/nova-cell0-conductor-0" Dec 09 12:30:37 crc kubenswrapper[4970]: I1209 12:30:37.429573 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqrxx\" (UniqueName: \"kubernetes.io/projected/424dcec5-0989-4b2e-8435-a2767dba2505-kube-api-access-xqrxx\") pod \"nova-cell0-conductor-0\" (UID: \"424dcec5-0989-4b2e-8435-a2767dba2505\") " pod="openstack/nova-cell0-conductor-0" Dec 09 12:30:37 crc kubenswrapper[4970]: I1209 12:30:37.531560 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqrxx\" (UniqueName: \"kubernetes.io/projected/424dcec5-0989-4b2e-8435-a2767dba2505-kube-api-access-xqrxx\") pod \"nova-cell0-conductor-0\" (UID: \"424dcec5-0989-4b2e-8435-a2767dba2505\") " pod="openstack/nova-cell0-conductor-0" Dec 09 12:30:37 crc kubenswrapper[4970]: I1209 12:30:37.531774 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/424dcec5-0989-4b2e-8435-a2767dba2505-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"424dcec5-0989-4b2e-8435-a2767dba2505\") " pod="openstack/nova-cell0-conductor-0" Dec 09 12:30:37 crc 
kubenswrapper[4970]: I1209 12:30:37.531802 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/424dcec5-0989-4b2e-8435-a2767dba2505-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"424dcec5-0989-4b2e-8435-a2767dba2505\") " pod="openstack/nova-cell0-conductor-0" Dec 09 12:30:37 crc kubenswrapper[4970]: I1209 12:30:37.535691 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/424dcec5-0989-4b2e-8435-a2767dba2505-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"424dcec5-0989-4b2e-8435-a2767dba2505\") " pod="openstack/nova-cell0-conductor-0" Dec 09 12:30:37 crc kubenswrapper[4970]: I1209 12:30:37.535759 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/424dcec5-0989-4b2e-8435-a2767dba2505-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"424dcec5-0989-4b2e-8435-a2767dba2505\") " pod="openstack/nova-cell0-conductor-0" Dec 09 12:30:37 crc kubenswrapper[4970]: I1209 12:30:37.556103 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqrxx\" (UniqueName: \"kubernetes.io/projected/424dcec5-0989-4b2e-8435-a2767dba2505-kube-api-access-xqrxx\") pod \"nova-cell0-conductor-0\" (UID: \"424dcec5-0989-4b2e-8435-a2767dba2505\") " pod="openstack/nova-cell0-conductor-0" Dec 09 12:30:37 crc kubenswrapper[4970]: I1209 12:30:37.564795 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:30:37 crc kubenswrapper[4970]: I1209 12:30:37.667713 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 09 12:30:38 crc kubenswrapper[4970]: I1209 12:30:38.140597 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 09 12:30:38 crc kubenswrapper[4970]: W1209 12:30:38.144241 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod424dcec5_0989_4b2e_8435_a2767dba2505.slice/crio-4594671bde9e265297a784b89d6dc4ae101cfa52378e4e96906a0c1d7756859f WatchSource:0}: Error finding container 4594671bde9e265297a784b89d6dc4ae101cfa52378e4e96906a0c1d7756859f: Status 404 returned error can't find the container with id 4594671bde9e265297a784b89d6dc4ae101cfa52378e4e96906a0c1d7756859f Dec 09 12:30:38 crc kubenswrapper[4970]: I1209 12:30:38.251283 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"424dcec5-0989-4b2e-8435-a2767dba2505","Type":"ContainerStarted","Data":"4594671bde9e265297a784b89d6dc4ae101cfa52378e4e96906a0c1d7756859f"} Dec 09 12:30:38 crc kubenswrapper[4970]: I1209 12:30:38.258549 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88c6c852-5b17-4dac-b7de-22891a28d17a","Type":"ContainerStarted","Data":"c44b71ac79cef6e55b4376f1bdb9ff80f7343b8b19c3e811ce02e8a9d6efeb57"} Dec 09 12:30:39 crc kubenswrapper[4970]: I1209 12:30:39.269990 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"424dcec5-0989-4b2e-8435-a2767dba2505","Type":"ContainerStarted","Data":"ba0bb33145d134c29720a98c496780c2d886774435a72e2cb58127156554f4d8"} Dec 09 12:30:39 crc kubenswrapper[4970]: I1209 12:30:39.270387 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-cell0-conductor-0" Dec 09 12:30:39 crc kubenswrapper[4970]: I1209 12:30:39.274112 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88c6c852-5b17-4dac-b7de-22891a28d17a","Type":"ContainerStarted","Data":"910f3836c18a07d09f534826b9c39c293a5998a2cb77d6598528f417de697f87"} Dec 09 12:30:39 crc kubenswrapper[4970]: I1209 12:30:39.274417 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="88c6c852-5b17-4dac-b7de-22891a28d17a" containerName="ceilometer-central-agent" containerID="cri-o://a875bbea325c61ccc9feba304fe4fc902a8c4782c7715a8fe97cee1972d56da9" gracePeriod=30 Dec 09 12:30:39 crc kubenswrapper[4970]: I1209 12:30:39.274554 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 12:30:39 crc kubenswrapper[4970]: I1209 12:30:39.274620 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="88c6c852-5b17-4dac-b7de-22891a28d17a" containerName="proxy-httpd" containerID="cri-o://910f3836c18a07d09f534826b9c39c293a5998a2cb77d6598528f417de697f87" gracePeriod=30 Dec 09 12:30:39 crc kubenswrapper[4970]: I1209 12:30:39.274704 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="88c6c852-5b17-4dac-b7de-22891a28d17a" containerName="sg-core" containerID="cri-o://c44b71ac79cef6e55b4376f1bdb9ff80f7343b8b19c3e811ce02e8a9d6efeb57" gracePeriod=30 Dec 09 12:30:39 crc kubenswrapper[4970]: I1209 12:30:39.274723 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="88c6c852-5b17-4dac-b7de-22891a28d17a" containerName="ceilometer-notification-agent" containerID="cri-o://4114a93abdb98271dc7c6575c3585cf3d9609a45fe03068ffb227034368c9745" gracePeriod=30 Dec 09 12:30:39 crc kubenswrapper[4970]: I1209 12:30:39.305870 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.305828377 podStartE2EDuration="2.305828377s" podCreationTimestamp="2025-12-09 12:30:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:30:39.284221633 +0000 UTC m=+1451.844702684" watchObservedRunningTime="2025-12-09 12:30:39.305828377 +0000 UTC m=+1451.866309428" Dec 09 12:30:39 crc kubenswrapper[4970]: I1209 12:30:39.330503 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.02285999 podStartE2EDuration="5.330483871s" podCreationTimestamp="2025-12-09 12:30:34 +0000 UTC" firstStartedPulling="2025-12-09 12:30:35.609527579 +0000 UTC m=+1448.170008650" lastFinishedPulling="2025-12-09 12:30:38.91715148 +0000 UTC m=+1451.477632531" observedRunningTime="2025-12-09 12:30:39.319593962 +0000 UTC m=+1451.880075023" watchObservedRunningTime="2025-12-09 12:30:39.330483871 +0000 UTC m=+1451.890964922" Dec 09 12:30:40 crc kubenswrapper[4970]: I1209 12:30:40.288078 4970 generic.go:334] "Generic (PLEG): container finished" podID="88c6c852-5b17-4dac-b7de-22891a28d17a" containerID="c44b71ac79cef6e55b4376f1bdb9ff80f7343b8b19c3e811ce02e8a9d6efeb57" exitCode=2 Dec 09 12:30:40 crc kubenswrapper[4970]: I1209 12:30:40.288407 4970 generic.go:334] "Generic (PLEG): container finished" podID="88c6c852-5b17-4dac-b7de-22891a28d17a" 
containerID="4114a93abdb98271dc7c6575c3585cf3d9609a45fe03068ffb227034368c9745" exitCode=0 Dec 09 12:30:40 crc kubenswrapper[4970]: I1209 12:30:40.288146 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88c6c852-5b17-4dac-b7de-22891a28d17a","Type":"ContainerDied","Data":"c44b71ac79cef6e55b4376f1bdb9ff80f7343b8b19c3e811ce02e8a9d6efeb57"} Dec 09 12:30:40 crc kubenswrapper[4970]: I1209 12:30:40.288457 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88c6c852-5b17-4dac-b7de-22891a28d17a","Type":"ContainerDied","Data":"4114a93abdb98271dc7c6575c3585cf3d9609a45fe03068ffb227034368c9745"} Dec 09 12:30:46 crc kubenswrapper[4970]: I1209 12:30:46.366017 4970 generic.go:334] "Generic (PLEG): container finished" podID="88c6c852-5b17-4dac-b7de-22891a28d17a" containerID="a875bbea325c61ccc9feba304fe4fc902a8c4782c7715a8fe97cee1972d56da9" exitCode=0 Dec 09 12:30:46 crc kubenswrapper[4970]: I1209 12:30:46.366108 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88c6c852-5b17-4dac-b7de-22891a28d17a","Type":"ContainerDied","Data":"a875bbea325c61ccc9feba304fe4fc902a8c4782c7715a8fe97cee1972d56da9"} Dec 09 12:30:47 crc kubenswrapper[4970]: I1209 12:30:47.699300 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.179943 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-xx4x2"] Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.185593 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xx4x2" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.188579 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.188824 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.198429 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-xx4x2"] Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.337408 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz6rq\" (UniqueName: \"kubernetes.io/projected/8ed63438-7cd6-4d2f-b591-2e5a7c74a94b-kube-api-access-sz6rq\") pod \"nova-cell0-cell-mapping-xx4x2\" (UID: \"8ed63438-7cd6-4d2f-b591-2e5a7c74a94b\") " pod="openstack/nova-cell0-cell-mapping-xx4x2" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.337514 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ed63438-7cd6-4d2f-b591-2e5a7c74a94b-config-data\") pod \"nova-cell0-cell-mapping-xx4x2\" (UID: \"8ed63438-7cd6-4d2f-b591-2e5a7c74a94b\") " pod="openstack/nova-cell0-cell-mapping-xx4x2" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.337606 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ed63438-7cd6-4d2f-b591-2e5a7c74a94b-scripts\") pod \"nova-cell0-cell-mapping-xx4x2\" (UID: \"8ed63438-7cd6-4d2f-b591-2e5a7c74a94b\") " pod="openstack/nova-cell0-cell-mapping-xx4x2" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.337671 4970 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed63438-7cd6-4d2f-b591-2e5a7c74a94b-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-xx4x2\" (UID: \"8ed63438-7cd6-4d2f-b591-2e5a7c74a94b\") " pod="openstack/nova-cell0-cell-mapping-xx4x2" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.360410 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.362976 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.375709 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.386018 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.406642 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.408081 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.419307 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.419427 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.440154 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed63438-7cd6-4d2f-b591-2e5a7c74a94b-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-xx4x2\" (UID: \"8ed63438-7cd6-4d2f-b591-2e5a7c74a94b\") " pod="openstack/nova-cell0-cell-mapping-xx4x2" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.440229 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g882t\" (UniqueName: \"kubernetes.io/projected/c4cc45c7-93b9-4be2-98f7-3ac2d181ee95-kube-api-access-g882t\") pod \"nova-api-0\" (UID: \"c4cc45c7-93b9-4be2-98f7-3ac2d181ee95\") " pod="openstack/nova-api-0" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.440351 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4cc45c7-93b9-4be2-98f7-3ac2d181ee95-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c4cc45c7-93b9-4be2-98f7-3ac2d181ee95\") " pod="openstack/nova-api-0" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.440445 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4cc45c7-93b9-4be2-98f7-3ac2d181ee95-config-data\") pod \"nova-api-0\" (UID: \"c4cc45c7-93b9-4be2-98f7-3ac2d181ee95\") " pod="openstack/nova-api-0" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.440506 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4cc45c7-93b9-4be2-98f7-3ac2d181ee95-logs\") pod \"nova-api-0\" (UID: \"c4cc45c7-93b9-4be2-98f7-3ac2d181ee95\") " pod="openstack/nova-api-0" Dec 09 12:30:48 crc kubenswrapper[4970]: 
I1209 12:30:48.440538 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz6rq\" (UniqueName: \"kubernetes.io/projected/8ed63438-7cd6-4d2f-b591-2e5a7c74a94b-kube-api-access-sz6rq\") pod \"nova-cell0-cell-mapping-xx4x2\" (UID: \"8ed63438-7cd6-4d2f-b591-2e5a7c74a94b\") " pod="openstack/nova-cell0-cell-mapping-xx4x2" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.440626 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ed63438-7cd6-4d2f-b591-2e5a7c74a94b-config-data\") pod \"nova-cell0-cell-mapping-xx4x2\" (UID: \"8ed63438-7cd6-4d2f-b591-2e5a7c74a94b\") " pod="openstack/nova-cell0-cell-mapping-xx4x2" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.440734 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ed63438-7cd6-4d2f-b591-2e5a7c74a94b-scripts\") pod \"nova-cell0-cell-mapping-xx4x2\" (UID: \"8ed63438-7cd6-4d2f-b591-2e5a7c74a94b\") " pod="openstack/nova-cell0-cell-mapping-xx4x2" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.456158 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ed63438-7cd6-4d2f-b591-2e5a7c74a94b-scripts\") pod \"nova-cell0-cell-mapping-xx4x2\" (UID: \"8ed63438-7cd6-4d2f-b591-2e5a7c74a94b\") " pod="openstack/nova-cell0-cell-mapping-xx4x2" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.457133 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed63438-7cd6-4d2f-b591-2e5a7c74a94b-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-xx4x2\" (UID: \"8ed63438-7cd6-4d2f-b591-2e5a7c74a94b\") " pod="openstack/nova-cell0-cell-mapping-xx4x2" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.479083 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ed63438-7cd6-4d2f-b591-2e5a7c74a94b-config-data\") pod \"nova-cell0-cell-mapping-xx4x2\" (UID: \"8ed63438-7cd6-4d2f-b591-2e5a7c74a94b\") " pod="openstack/nova-cell0-cell-mapping-xx4x2" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.486309 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.491048 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.492456 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz6rq\" (UniqueName: \"kubernetes.io/projected/8ed63438-7cd6-4d2f-b591-2e5a7c74a94b-kube-api-access-sz6rq\") pod \"nova-cell0-cell-mapping-xx4x2\" (UID: \"8ed63438-7cd6-4d2f-b591-2e5a7c74a94b\") " pod="openstack/nova-cell0-cell-mapping-xx4x2" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.518294 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.533208 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xx4x2" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.586621 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4cc45c7-93b9-4be2-98f7-3ac2d181ee95-logs\") pod \"nova-api-0\" (UID: \"c4cc45c7-93b9-4be2-98f7-3ac2d181ee95\") " pod="openstack/nova-api-0" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.587166 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc32ae23-f0a5-43d2-a1e0-edca8c7b617a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"dc32ae23-f0a5-43d2-a1e0-edca8c7b617a\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.587219 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc32ae23-f0a5-43d2-a1e0-edca8c7b617a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"dc32ae23-f0a5-43d2-a1e0-edca8c7b617a\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.587519 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g882t\" (UniqueName: \"kubernetes.io/projected/c4cc45c7-93b9-4be2-98f7-3ac2d181ee95-kube-api-access-g882t\") pod \"nova-api-0\" (UID: \"c4cc45c7-93b9-4be2-98f7-3ac2d181ee95\") " pod="openstack/nova-api-0" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.587570 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t22dz\" (UniqueName: \"kubernetes.io/projected/dc32ae23-f0a5-43d2-a1e0-edca8c7b617a-kube-api-access-t22dz\") pod \"nova-cell1-novncproxy-0\" (UID: \"dc32ae23-f0a5-43d2-a1e0-edca8c7b617a\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.587669 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4cc45c7-93b9-4be2-98f7-3ac2d181ee95-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c4cc45c7-93b9-4be2-98f7-3ac2d181ee95\") " pod="openstack/nova-api-0" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.587778 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4cc45c7-93b9-4be2-98f7-3ac2d181ee95-config-data\") pod \"nova-api-0\" (UID: \"c4cc45c7-93b9-4be2-98f7-3ac2d181ee95\") " pod="openstack/nova-api-0" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.590711 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4cc45c7-93b9-4be2-98f7-3ac2d181ee95-logs\") pod \"nova-api-0\" (UID: \"c4cc45c7-93b9-4be2-98f7-3ac2d181ee95\") " pod="openstack/nova-api-0" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.608333 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4cc45c7-93b9-4be2-98f7-3ac2d181ee95-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c4cc45c7-93b9-4be2-98f7-3ac2d181ee95\") " pod="openstack/nova-api-0" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.625197 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c4cc45c7-93b9-4be2-98f7-3ac2d181ee95-config-data\") pod \"nova-api-0\" (UID: \"c4cc45c7-93b9-4be2-98f7-3ac2d181ee95\") " pod="openstack/nova-api-0" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.625466 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g882t\" (UniqueName: \"kubernetes.io/projected/c4cc45c7-93b9-4be2-98f7-3ac2d181ee95-kube-api-access-g882t\") pod \"nova-api-0\" (UID: \"c4cc45c7-93b9-4be2-98f7-3ac2d181ee95\") " pod="openstack/nova-api-0" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.699910 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.705298 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d360ed-aab2-478f-985d-eef9214facf2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"92d360ed-aab2-478f-985d-eef9214facf2\") " pod="openstack/nova-scheduler-0" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.706600 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t22dz\" (UniqueName: \"kubernetes.io/projected/dc32ae23-f0a5-43d2-a1e0-edca8c7b617a-kube-api-access-t22dz\") pod \"nova-cell1-novncproxy-0\" (UID: \"dc32ae23-f0a5-43d2-a1e0-edca8c7b617a\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.707952 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d360ed-aab2-478f-985d-eef9214facf2-config-data\") pod \"nova-scheduler-0\" (UID: \"92d360ed-aab2-478f-985d-eef9214facf2\") " pod="openstack/nova-scheduler-0" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.708466 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9g8r\" (UniqueName: \"kubernetes.io/projected/92d360ed-aab2-478f-985d-eef9214facf2-kube-api-access-p9g8r\") pod \"nova-scheduler-0\" (UID: \"92d360ed-aab2-478f-985d-eef9214facf2\") " pod="openstack/nova-scheduler-0" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.708854 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc32ae23-f0a5-43d2-a1e0-edca8c7b617a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"dc32ae23-f0a5-43d2-a1e0-edca8c7b617a\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.709032 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc32ae23-f0a5-43d2-a1e0-edca8c7b617a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"dc32ae23-f0a5-43d2-a1e0-edca8c7b617a\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.716980 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc32ae23-f0a5-43d2-a1e0-edca8c7b617a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"dc32ae23-f0a5-43d2-a1e0-edca8c7b617a\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.718632 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/dc32ae23-f0a5-43d2-a1e0-edca8c7b617a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"dc32ae23-f0a5-43d2-a1e0-edca8c7b617a\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.733857 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.735512 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t22dz\" (UniqueName: \"kubernetes.io/projected/dc32ae23-f0a5-43d2-a1e0-edca8c7b617a-kube-api-access-t22dz\") pod \"nova-cell1-novncproxy-0\" (UID: \"dc32ae23-f0a5-43d2-a1e0-edca8c7b617a\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.738730 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.806576 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.808851 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.811678 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d360ed-aab2-478f-985d-eef9214facf2-config-data\") pod \"nova-scheduler-0\" (UID: \"92d360ed-aab2-478f-985d-eef9214facf2\") " pod="openstack/nova-scheduler-0" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.811723 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9g8r\" (UniqueName: \"kubernetes.io/projected/92d360ed-aab2-478f-985d-eef9214facf2-kube-api-access-p9g8r\") pod \"nova-scheduler-0\" (UID: \"92d360ed-aab2-478f-985d-eef9214facf2\") " pod="openstack/nova-scheduler-0" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.811913 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d360ed-aab2-478f-985d-eef9214facf2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"92d360ed-aab2-478f-985d-eef9214facf2\") " pod="openstack/nova-scheduler-0" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.817726 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.823021 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d360ed-aab2-478f-985d-eef9214facf2-config-data\") pod \"nova-scheduler-0\" (UID: \"92d360ed-aab2-478f-985d-eef9214facf2\") " pod="openstack/nova-scheduler-0" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.823650 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.848786 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9g8r\" (UniqueName: \"kubernetes.io/projected/92d360ed-aab2-478f-985d-eef9214facf2-kube-api-access-p9g8r\") pod \"nova-scheduler-0\" (UID: \"92d360ed-aab2-478f-985d-eef9214facf2\") " pod="openstack/nova-scheduler-0" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.853623 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-rg4sn"] Dec 09 12:30:48 
crc kubenswrapper[4970]: I1209 12:30:48.855517 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d360ed-aab2-478f-985d-eef9214facf2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"92d360ed-aab2-478f-985d-eef9214facf2\") " pod="openstack/nova-scheduler-0" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.856513 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-rg4sn" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.874404 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-rg4sn"] Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.921581 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adece173-10f9-4c86-aa83-2626d0bc526c-logs\") pod \"nova-metadata-0\" (UID: \"adece173-10f9-4c86-aa83-2626d0bc526c\") " pod="openstack/nova-metadata-0" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.922059 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adece173-10f9-4c86-aa83-2626d0bc526c-config-data\") pod \"nova-metadata-0\" (UID: \"adece173-10f9-4c86-aa83-2626d0bc526c\") " pod="openstack/nova-metadata-0" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.922170 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jmkp\" (UniqueName: \"kubernetes.io/projected/adece173-10f9-4c86-aa83-2626d0bc526c-kube-api-access-4jmkp\") pod \"nova-metadata-0\" (UID: \"adece173-10f9-4c86-aa83-2626d0bc526c\") " pod="openstack/nova-metadata-0" Dec 09 12:30:48 crc kubenswrapper[4970]: I1209 12:30:48.922320 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adece173-10f9-4c86-aa83-2626d0bc526c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"adece173-10f9-4c86-aa83-2626d0bc526c\") " pod="openstack/nova-metadata-0" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.030561 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adece173-10f9-4c86-aa83-2626d0bc526c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"adece173-10f9-4c86-aa83-2626d0bc526c\") " pod="openstack/nova-metadata-0" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.030658 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lfq9\" (UniqueName: \"kubernetes.io/projected/d531e31f-a903-40b8-b91f-0579c272cb87-kube-api-access-5lfq9\") pod \"dnsmasq-dns-9b86998b5-rg4sn\" (UID: \"d531e31f-a903-40b8-b91f-0579c272cb87\") " pod="openstack/dnsmasq-dns-9b86998b5-rg4sn" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.030695 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d531e31f-a903-40b8-b91f-0579c272cb87-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-rg4sn\" (UID: \"d531e31f-a903-40b8-b91f-0579c272cb87\") " pod="openstack/dnsmasq-dns-9b86998b5-rg4sn" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.030789 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d531e31f-a903-40b8-b91f-0579c272cb87-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-rg4sn\" (UID: \"d531e31f-a903-40b8-b91f-0579c272cb87\") " pod="openstack/dnsmasq-dns-9b86998b5-rg4sn" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.030819 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adece173-10f9-4c86-aa83-2626d0bc526c-logs\") pod \"nova-metadata-0\" (UID: \"adece173-10f9-4c86-aa83-2626d0bc526c\") " pod="openstack/nova-metadata-0" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.030928 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d531e31f-a903-40b8-b91f-0579c272cb87-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-rg4sn\" (UID: \"d531e31f-a903-40b8-b91f-0579c272cb87\") " pod="openstack/dnsmasq-dns-9b86998b5-rg4sn" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.030957 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d531e31f-a903-40b8-b91f-0579c272cb87-dns-svc\") pod \"dnsmasq-dns-9b86998b5-rg4sn\" (UID: \"d531e31f-a903-40b8-b91f-0579c272cb87\") " pod="openstack/dnsmasq-dns-9b86998b5-rg4sn" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.030985 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adece173-10f9-4c86-aa83-2626d0bc526c-config-data\") pod \"nova-metadata-0\" (UID: \"adece173-10f9-4c86-aa83-2626d0bc526c\") " pod="openstack/nova-metadata-0" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.031026 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d531e31f-a903-40b8-b91f-0579c272cb87-config\") pod \"dnsmasq-dns-9b86998b5-rg4sn\" (UID: \"d531e31f-a903-40b8-b91f-0579c272cb87\") " pod="openstack/dnsmasq-dns-9b86998b5-rg4sn" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.034021 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jmkp\" (UniqueName: \"kubernetes.io/projected/adece173-10f9-4c86-aa83-2626d0bc526c-kube-api-access-4jmkp\") pod \"nova-metadata-0\" (UID: \"adece173-10f9-4c86-aa83-2626d0bc526c\") " pod="openstack/nova-metadata-0" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.036511 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adece173-10f9-4c86-aa83-2626d0bc526c-logs\") pod \"nova-metadata-0\" (UID: \"adece173-10f9-4c86-aa83-2626d0bc526c\") " pod="openstack/nova-metadata-0" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.041176 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adece173-10f9-4c86-aa83-2626d0bc526c-config-data\") pod \"nova-metadata-0\" (UID: \"adece173-10f9-4c86-aa83-2626d0bc526c\") " pod="openstack/nova-metadata-0" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.041376 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adece173-10f9-4c86-aa83-2626d0bc526c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"adece173-10f9-4c86-aa83-2626d0bc526c\") " 
pod="openstack/nova-metadata-0" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.064792 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jmkp\" (UniqueName: \"kubernetes.io/projected/adece173-10f9-4c86-aa83-2626d0bc526c-kube-api-access-4jmkp\") pod \"nova-metadata-0\" (UID: \"adece173-10f9-4c86-aa83-2626d0bc526c\") " pod="openstack/nova-metadata-0" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.070692 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.145567 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d531e31f-a903-40b8-b91f-0579c272cb87-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-rg4sn\" (UID: \"d531e31f-a903-40b8-b91f-0579c272cb87\") " pod="openstack/dnsmasq-dns-9b86998b5-rg4sn" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.145604 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d531e31f-a903-40b8-b91f-0579c272cb87-dns-svc\") pod \"dnsmasq-dns-9b86998b5-rg4sn\" (UID: \"d531e31f-a903-40b8-b91f-0579c272cb87\") " pod="openstack/dnsmasq-dns-9b86998b5-rg4sn" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.145634 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d531e31f-a903-40b8-b91f-0579c272cb87-config\") pod \"dnsmasq-dns-9b86998b5-rg4sn\" (UID: \"d531e31f-a903-40b8-b91f-0579c272cb87\") " pod="openstack/dnsmasq-dns-9b86998b5-rg4sn" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.145704 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lfq9\" (UniqueName: \"kubernetes.io/projected/d531e31f-a903-40b8-b91f-0579c272cb87-kube-api-access-5lfq9\") pod \"dnsmasq-dns-9b86998b5-rg4sn\" (UID: \"d531e31f-a903-40b8-b91f-0579c272cb87\") " pod="openstack/dnsmasq-dns-9b86998b5-rg4sn" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.145726 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d531e31f-a903-40b8-b91f-0579c272cb87-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-rg4sn\" (UID: \"d531e31f-a903-40b8-b91f-0579c272cb87\") " pod="openstack/dnsmasq-dns-9b86998b5-rg4sn" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.145782 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d531e31f-a903-40b8-b91f-0579c272cb87-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-rg4sn\" (UID: \"d531e31f-a903-40b8-b91f-0579c272cb87\") " pod="openstack/dnsmasq-dns-9b86998b5-rg4sn" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.146450 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d531e31f-a903-40b8-b91f-0579c272cb87-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-rg4sn\" (UID: \"d531e31f-a903-40b8-b91f-0579c272cb87\") " pod="openstack/dnsmasq-dns-9b86998b5-rg4sn" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.146465 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d531e31f-a903-40b8-b91f-0579c272cb87-dns-svc\") pod \"dnsmasq-dns-9b86998b5-rg4sn\" (UID: 
\"d531e31f-a903-40b8-b91f-0579c272cb87\") " pod="openstack/dnsmasq-dns-9b86998b5-rg4sn" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.146504 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d531e31f-a903-40b8-b91f-0579c272cb87-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-rg4sn\" (UID: \"d531e31f-a903-40b8-b91f-0579c272cb87\") " pod="openstack/dnsmasq-dns-9b86998b5-rg4sn" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.146971 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d531e31f-a903-40b8-b91f-0579c272cb87-config\") pod \"dnsmasq-dns-9b86998b5-rg4sn\" (UID: \"d531e31f-a903-40b8-b91f-0579c272cb87\") " pod="openstack/dnsmasq-dns-9b86998b5-rg4sn" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.147296 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d531e31f-a903-40b8-b91f-0579c272cb87-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-rg4sn\" (UID: \"d531e31f-a903-40b8-b91f-0579c272cb87\") " pod="openstack/dnsmasq-dns-9b86998b5-rg4sn" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.171619 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.172082 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lfq9\" (UniqueName: \"kubernetes.io/projected/d531e31f-a903-40b8-b91f-0579c272cb87-kube-api-access-5lfq9\") pod \"dnsmasq-dns-9b86998b5-rg4sn\" (UID: \"d531e31f-a903-40b8-b91f-0579c272cb87\") " pod="openstack/dnsmasq-dns-9b86998b5-rg4sn" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.209714 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-rg4sn" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.244325 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-xx4x2"] Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.372051 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-dgscz"] Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.373934 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-dgscz" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.376387 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.376711 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.417756 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-dgscz"] Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.457063 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/854860ea-b756-4edd-88ea-6f1ad333f7bc-scripts\") pod \"nova-cell1-conductor-db-sync-dgscz\" (UID: \"854860ea-b756-4edd-88ea-6f1ad333f7bc\") " pod="openstack/nova-cell1-conductor-db-sync-dgscz" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.457173 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/854860ea-b756-4edd-88ea-6f1ad333f7bc-config-data\") pod \"nova-cell1-conductor-db-sync-dgscz\" (UID: \"854860ea-b756-4edd-88ea-6f1ad333f7bc\") " pod="openstack/nova-cell1-conductor-db-sync-dgscz" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.457194 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m76xd\" (UniqueName: \"kubernetes.io/projected/854860ea-b756-4edd-88ea-6f1ad333f7bc-kube-api-access-m76xd\") pod \"nova-cell1-conductor-db-sync-dgscz\" (UID: \"854860ea-b756-4edd-88ea-6f1ad333f7bc\") " pod="openstack/nova-cell1-conductor-db-sync-dgscz" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.457329 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854860ea-b756-4edd-88ea-6f1ad333f7bc-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-dgscz\" (UID: \"854860ea-b756-4edd-88ea-6f1ad333f7bc\") " pod="openstack/nova-cell1-conductor-db-sync-dgscz" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.479813 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xx4x2" event={"ID":"8ed63438-7cd6-4d2f-b591-2e5a7c74a94b","Type":"ContainerStarted","Data":"3a9790628799d0667aac1b81d273265c9892616ee7ae2037c87c3fc2ad95a0ba"} Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.556507 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.559894 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/854860ea-b756-4edd-88ea-6f1ad333f7bc-scripts\") pod \"nova-cell1-conductor-db-sync-dgscz\" (UID: \"854860ea-b756-4edd-88ea-6f1ad333f7bc\") " pod="openstack/nova-cell1-conductor-db-sync-dgscz" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.560007 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/854860ea-b756-4edd-88ea-6f1ad333f7bc-config-data\") pod \"nova-cell1-conductor-db-sync-dgscz\" (UID: \"854860ea-b756-4edd-88ea-6f1ad333f7bc\") " pod="openstack/nova-cell1-conductor-db-sync-dgscz" Dec 09 12:30:49 crc 
kubenswrapper[4970]: I1209 12:30:49.560035 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m76xd\" (UniqueName: \"kubernetes.io/projected/854860ea-b756-4edd-88ea-6f1ad333f7bc-kube-api-access-m76xd\") pod \"nova-cell1-conductor-db-sync-dgscz\" (UID: \"854860ea-b756-4edd-88ea-6f1ad333f7bc\") " pod="openstack/nova-cell1-conductor-db-sync-dgscz" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.560090 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854860ea-b756-4edd-88ea-6f1ad333f7bc-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-dgscz\" (UID: \"854860ea-b756-4edd-88ea-6f1ad333f7bc\") " pod="openstack/nova-cell1-conductor-db-sync-dgscz" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.577584 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m76xd\" (UniqueName: \"kubernetes.io/projected/854860ea-b756-4edd-88ea-6f1ad333f7bc-kube-api-access-m76xd\") pod \"nova-cell1-conductor-db-sync-dgscz\" (UID: \"854860ea-b756-4edd-88ea-6f1ad333f7bc\") " pod="openstack/nova-cell1-conductor-db-sync-dgscz" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.578035 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/854860ea-b756-4edd-88ea-6f1ad333f7bc-scripts\") pod \"nova-cell1-conductor-db-sync-dgscz\" (UID: \"854860ea-b756-4edd-88ea-6f1ad333f7bc\") " pod="openstack/nova-cell1-conductor-db-sync-dgscz" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.583946 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/854860ea-b756-4edd-88ea-6f1ad333f7bc-config-data\") pod \"nova-cell1-conductor-db-sync-dgscz\" (UID: \"854860ea-b756-4edd-88ea-6f1ad333f7bc\") " pod="openstack/nova-cell1-conductor-db-sync-dgscz" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.587368 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854860ea-b756-4edd-88ea-6f1ad333f7bc-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-dgscz\" (UID: \"854860ea-b756-4edd-88ea-6f1ad333f7bc\") " pod="openstack/nova-cell1-conductor-db-sync-dgscz" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.596532 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.717987 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-dgscz" Dec 09 12:30:49 crc kubenswrapper[4970]: I1209 12:30:49.948403 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 12:30:50 crc kubenswrapper[4970]: W1209 12:30:50.008669 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podadece173_10f9_4c86_aa83_2626d0bc526c.slice/crio-7a6897c11cd7c8dd41c439b33fdfa5442e65392633dd661d59b69c959d18054d WatchSource:0}: Error finding container 7a6897c11cd7c8dd41c439b33fdfa5442e65392633dd661d59b69c959d18054d: Status 404 returned error can't find the container with id 7a6897c11cd7c8dd41c439b33fdfa5442e65392633dd661d59b69c959d18054d Dec 09 12:30:50 crc kubenswrapper[4970]: I1209 12:30:50.013392 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 12:30:50 crc kubenswrapper[4970]: I1209 12:30:50.057132 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-rg4sn"] Dec 09 12:30:50 crc kubenswrapper[4970]: I1209 12:30:50.398536 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-dgscz"] Dec 09 12:30:50 crc kubenswrapper[4970]: I1209 12:30:50.567613 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-dgscz" event={"ID":"854860ea-b756-4edd-88ea-6f1ad333f7bc","Type":"ContainerStarted","Data":"b97d3b705613db43350391e0663cac8325cfbd255591cc4e1114d0cafcbc1504"} Dec 09 12:30:50 crc kubenswrapper[4970]: I1209 12:30:50.573783 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"dc32ae23-f0a5-43d2-a1e0-edca8c7b617a","Type":"ContainerStarted","Data":"99cc1847307ac435880c4164ae53cfb8134561d541f88e3cc4f0852ec341e48f"} Dec 09 12:30:50 crc kubenswrapper[4970]: I1209 12:30:50.575839 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"92d360ed-aab2-478f-985d-eef9214facf2","Type":"ContainerStarted","Data":"a872d377a90499459bdb4e0ae08734f55b5a67da0ad3608139bb3a0709feedab"} Dec 09 12:30:50 crc kubenswrapper[4970]: I1209 12:30:50.579933 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"adece173-10f9-4c86-aa83-2626d0bc526c","Type":"ContainerStarted","Data":"7a6897c11cd7c8dd41c439b33fdfa5442e65392633dd661d59b69c959d18054d"} Dec 09 12:30:50 crc kubenswrapper[4970]: I1209 12:30:50.590971 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c4cc45c7-93b9-4be2-98f7-3ac2d181ee95","Type":"ContainerStarted","Data":"3d9c19431b411259a3c4cc0abea83447fe5b6a1aa99416797ab6e3c0f68f4ce1"} Dec 09 12:30:50 crc kubenswrapper[4970]: I1209 12:30:50.596911 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xx4x2" event={"ID":"8ed63438-7cd6-4d2f-b591-2e5a7c74a94b","Type":"ContainerStarted","Data":"7ac67b7728e2e1ecb824af3fa21339f8b5a1397f3fb341ba9a20469473e853c6"} Dec 09 12:30:50 crc kubenswrapper[4970]: I1209 12:30:50.603444 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-rg4sn" event={"ID":"d531e31f-a903-40b8-b91f-0579c272cb87","Type":"ContainerStarted","Data":"cafbd1623b3e8bf221e8a7bb7d6850170dfd1490fa1e672540321dce481ff806"} Dec 09 12:30:50 crc kubenswrapper[4970]: I1209 12:30:50.631198 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell0-cell-mapping-xx4x2" podStartSLOduration=2.631179625 podStartE2EDuration="2.631179625s" podCreationTimestamp="2025-12-09 12:30:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:30:50.613609589 +0000 UTC m=+1463.174090640" watchObservedRunningTime="2025-12-09 12:30:50.631179625 +0000 UTC m=+1463.191660686" Dec 09 12:30:51 crc kubenswrapper[4970]: I1209 12:30:51.636387 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-dgscz" event={"ID":"854860ea-b756-4edd-88ea-6f1ad333f7bc","Type":"ContainerStarted","Data":"9b9df0970da1affe9c1b1f6c1c0086bbcad110199673039f43bc7eedd8e3f436"} Dec 09 12:30:51 crc kubenswrapper[4970]: I1209 12:30:51.647526 4970 generic.go:334] "Generic (PLEG): container finished" podID="d531e31f-a903-40b8-b91f-0579c272cb87" containerID="a70d18816834334e17601ae6c1f171e30ccf792a075a7a3c0dbdbfa58989b97d" exitCode=0 Dec 09 12:30:51 crc kubenswrapper[4970]: I1209 12:30:51.648364 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-rg4sn" event={"ID":"d531e31f-a903-40b8-b91f-0579c272cb87","Type":"ContainerStarted","Data":"cc5397017d026aa15653bab3c0a6d6dfe3d0f4f721a579d13634247136198f1e"} Dec 09 12:30:51 crc kubenswrapper[4970]: I1209 12:30:51.648447 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9b86998b5-rg4sn" Dec 09 12:30:51 crc kubenswrapper[4970]: I1209 12:30:51.648467 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-rg4sn" event={"ID":"d531e31f-a903-40b8-b91f-0579c272cb87","Type":"ContainerDied","Data":"a70d18816834334e17601ae6c1f171e30ccf792a075a7a3c0dbdbfa58989b97d"} Dec 09 12:30:51 crc kubenswrapper[4970]: I1209 12:30:51.675893 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-dgscz" podStartSLOduration=2.675873964 podStartE2EDuration="2.675873964s" podCreationTimestamp="2025-12-09 12:30:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:30:51.662574151 +0000 UTC m=+1464.223055202" watchObservedRunningTime="2025-12-09 12:30:51.675873964 +0000 UTC m=+1464.236355015" Dec 09 12:30:51 crc kubenswrapper[4970]: I1209 12:30:51.697140 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9b86998b5-rg4sn" podStartSLOduration=3.697119238 podStartE2EDuration="3.697119238s" podCreationTimestamp="2025-12-09 12:30:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:30:51.69190557 +0000 UTC m=+1464.252386641" watchObservedRunningTime="2025-12-09 12:30:51.697119238 +0000 UTC m=+1464.257600289" Dec 09 12:30:52 crc kubenswrapper[4970]: I1209 12:30:52.616157 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 12:30:52 crc kubenswrapper[4970]: I1209 12:30:52.629183 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 12:30:54 crc kubenswrapper[4970]: I1209 12:30:54.717976 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"dc32ae23-f0a5-43d2-a1e0-edca8c7b617a","Type":"ContainerStarted","Data":"54fa7cae3d0996f13a18ac97806e0d31dbb365daa46fc5f08d1edda08fe0f2d6"} Dec 09 12:30:54 crc kubenswrapper[4970]: I1209 12:30:54.718770 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="dc32ae23-f0a5-43d2-a1e0-edca8c7b617a" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://54fa7cae3d0996f13a18ac97806e0d31dbb365daa46fc5f08d1edda08fe0f2d6" gracePeriod=30 Dec 09 12:30:54 crc kubenswrapper[4970]: I1209 12:30:54.724512 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"92d360ed-aab2-478f-985d-eef9214facf2","Type":"ContainerStarted","Data":"91c1f28e4618e0ea4c1f3f3a480261000caeec79eba8ffe4c292d6bf007949d7"} Dec 09 12:30:54 crc kubenswrapper[4970]: I1209 12:30:54.734264 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"adece173-10f9-4c86-aa83-2626d0bc526c","Type":"ContainerStarted","Data":"0f59d5b878c07e576ca2088127160410486673ce9e905137d202896bc0c8e4a6"} Dec 09 12:30:54 crc kubenswrapper[4970]: I1209 12:30:54.734343 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"adece173-10f9-4c86-aa83-2626d0bc526c","Type":"ContainerStarted","Data":"16680955d0877f538c5d5884e46d9c483b9a6ea4955dab5feef8cb2bb473df71"} Dec 09 12:30:54 crc kubenswrapper[4970]: I1209 12:30:54.734512 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="adece173-10f9-4c86-aa83-2626d0bc526c" containerName="nova-metadata-log" containerID="cri-o://16680955d0877f538c5d5884e46d9c483b9a6ea4955dab5feef8cb2bb473df71" gracePeriod=30 Dec 09 12:30:54 crc kubenswrapper[4970]: I1209 12:30:54.734658 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="adece173-10f9-4c86-aa83-2626d0bc526c" containerName="nova-metadata-metadata" containerID="cri-o://0f59d5b878c07e576ca2088127160410486673ce9e905137d202896bc0c8e4a6" gracePeriod=30 Dec 09 12:30:54 crc kubenswrapper[4970]: I1209 12:30:54.745487 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c4cc45c7-93b9-4be2-98f7-3ac2d181ee95","Type":"ContainerStarted","Data":"5251258e29df36d9204f7c420e4f0801862b2d62c485543f996a6878e5374db5"} Dec 09 12:30:54 crc kubenswrapper[4970]: I1209 12:30:54.745540 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c4cc45c7-93b9-4be2-98f7-3ac2d181ee95","Type":"ContainerStarted","Data":"95706b6fee07f6803ccfb78c7c4c37fc7f7cd694e0307898274d367daac2616b"} Dec 09 12:30:54 crc kubenswrapper[4970]: I1209 12:30:54.769909 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.824580239 podStartE2EDuration="6.769883766s" podCreationTimestamp="2025-12-09 12:30:48 +0000 UTC" firstStartedPulling="2025-12-09 12:30:49.577075587 +0000 UTC m=+1462.137556638" lastFinishedPulling="2025-12-09 12:30:53.522379114 +0000 UTC m=+1466.082860165" observedRunningTime="2025-12-09 12:30:54.745732715 +0000 UTC m=+1467.306213766" watchObservedRunningTime="2025-12-09 12:30:54.769883766 +0000 UTC m=+1467.330364817" Dec 09 12:30:54 crc kubenswrapper[4970]: I1209 12:30:54.792172 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.837388969 
podStartE2EDuration="6.792146777s" podCreationTimestamp="2025-12-09 12:30:48 +0000 UTC" firstStartedPulling="2025-12-09 12:30:49.576994215 +0000 UTC m=+1462.137475266" lastFinishedPulling="2025-12-09 12:30:53.531752023 +0000 UTC m=+1466.092233074" observedRunningTime="2025-12-09 12:30:54.771703564 +0000 UTC m=+1467.332184615" watchObservedRunningTime="2025-12-09 12:30:54.792146777 +0000 UTC m=+1467.352627828" Dec 09 12:30:54 crc kubenswrapper[4970]: I1209 12:30:54.802783 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.216569393 podStartE2EDuration="6.802757728s" podCreationTimestamp="2025-12-09 12:30:48 +0000 UTC" firstStartedPulling="2025-12-09 12:30:49.942732612 +0000 UTC m=+1462.503213663" lastFinishedPulling="2025-12-09 12:30:53.528920937 +0000 UTC m=+1466.089401998" observedRunningTime="2025-12-09 12:30:54.787505143 +0000 UTC m=+1467.347986194" watchObservedRunningTime="2025-12-09 12:30:54.802757728 +0000 UTC m=+1467.363238789" Dec 09 12:30:54 crc kubenswrapper[4970]: I1209 12:30:54.812792 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.299741721 podStartE2EDuration="6.812769514s" podCreationTimestamp="2025-12-09 12:30:48 +0000 UTC" firstStartedPulling="2025-12-09 12:30:50.013054049 +0000 UTC m=+1462.573535100" lastFinishedPulling="2025-12-09 12:30:53.526081842 +0000 UTC m=+1466.086562893" observedRunningTime="2025-12-09 12:30:54.806888808 +0000 UTC m=+1467.367369859" watchObservedRunningTime="2025-12-09 12:30:54.812769514 +0000 UTC m=+1467.373250565" Dec 09 12:30:55 crc kubenswrapper[4970]: I1209 12:30:55.763697 4970 generic.go:334] "Generic (PLEG): container finished" podID="adece173-10f9-4c86-aa83-2626d0bc526c" containerID="0f59d5b878c07e576ca2088127160410486673ce9e905137d202896bc0c8e4a6" exitCode=0 Dec 09 12:30:55 crc kubenswrapper[4970]: I1209 12:30:55.764027 4970 generic.go:334] "Generic (PLEG): container finished" podID="adece173-10f9-4c86-aa83-2626d0bc526c" containerID="16680955d0877f538c5d5884e46d9c483b9a6ea4955dab5feef8cb2bb473df71" exitCode=143 Dec 09 12:30:55 crc kubenswrapper[4970]: I1209 12:30:55.763899 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"adece173-10f9-4c86-aa83-2626d0bc526c","Type":"ContainerDied","Data":"0f59d5b878c07e576ca2088127160410486673ce9e905137d202896bc0c8e4a6"} Dec 09 12:30:55 crc kubenswrapper[4970]: I1209 12:30:55.764315 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"adece173-10f9-4c86-aa83-2626d0bc526c","Type":"ContainerDied","Data":"16680955d0877f538c5d5884e46d9c483b9a6ea4955dab5feef8cb2bb473df71"} Dec 09 12:30:56 crc kubenswrapper[4970]: I1209 12:30:56.282927 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 12:30:56 crc kubenswrapper[4970]: I1209 12:30:56.308282 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adece173-10f9-4c86-aa83-2626d0bc526c-logs\") pod \"adece173-10f9-4c86-aa83-2626d0bc526c\" (UID: \"adece173-10f9-4c86-aa83-2626d0bc526c\") " Dec 09 12:30:56 crc kubenswrapper[4970]: I1209 12:30:56.308338 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adece173-10f9-4c86-aa83-2626d0bc526c-config-data\") pod \"adece173-10f9-4c86-aa83-2626d0bc526c\" (UID: \"adece173-10f9-4c86-aa83-2626d0bc526c\") " Dec 09 12:30:56 crc kubenswrapper[4970]: I1209 12:30:56.308510 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jmkp\" (UniqueName: \"kubernetes.io/projected/adece173-10f9-4c86-aa83-2626d0bc526c-kube-api-access-4jmkp\") pod \"adece173-10f9-4c86-aa83-2626d0bc526c\" (UID: \"adece173-10f9-4c86-aa83-2626d0bc526c\") " Dec 09 12:30:56 crc kubenswrapper[4970]: I1209 12:30:56.308596 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adece173-10f9-4c86-aa83-2626d0bc526c-combined-ca-bundle\") pod \"adece173-10f9-4c86-aa83-2626d0bc526c\" (UID: \"adece173-10f9-4c86-aa83-2626d0bc526c\") " Dec 09 12:30:56 crc kubenswrapper[4970]: I1209 12:30:56.313894 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adece173-10f9-4c86-aa83-2626d0bc526c-logs" (OuterVolumeSpecName: "logs") pod "adece173-10f9-4c86-aa83-2626d0bc526c" (UID: "adece173-10f9-4c86-aa83-2626d0bc526c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:30:56 crc kubenswrapper[4970]: I1209 12:30:56.319024 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adece173-10f9-4c86-aa83-2626d0bc526c-kube-api-access-4jmkp" (OuterVolumeSpecName: "kube-api-access-4jmkp") pod "adece173-10f9-4c86-aa83-2626d0bc526c" (UID: "adece173-10f9-4c86-aa83-2626d0bc526c"). InnerVolumeSpecName "kube-api-access-4jmkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:30:56 crc kubenswrapper[4970]: I1209 12:30:56.413853 4970 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adece173-10f9-4c86-aa83-2626d0bc526c-logs\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:56 crc kubenswrapper[4970]: I1209 12:30:56.413886 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jmkp\" (UniqueName: \"kubernetes.io/projected/adece173-10f9-4c86-aa83-2626d0bc526c-kube-api-access-4jmkp\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:56 crc kubenswrapper[4970]: I1209 12:30:56.418408 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adece173-10f9-4c86-aa83-2626d0bc526c-config-data" (OuterVolumeSpecName: "config-data") pod "adece173-10f9-4c86-aa83-2626d0bc526c" (UID: "adece173-10f9-4c86-aa83-2626d0bc526c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:30:56 crc kubenswrapper[4970]: I1209 12:30:56.441354 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adece173-10f9-4c86-aa83-2626d0bc526c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "adece173-10f9-4c86-aa83-2626d0bc526c" (UID: "adece173-10f9-4c86-aa83-2626d0bc526c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:30:56 crc kubenswrapper[4970]: I1209 12:30:56.516988 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adece173-10f9-4c86-aa83-2626d0bc526c-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:56 crc kubenswrapper[4970]: I1209 12:30:56.517016 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adece173-10f9-4c86-aa83-2626d0bc526c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:30:56 crc kubenswrapper[4970]: I1209 12:30:56.776632 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"adece173-10f9-4c86-aa83-2626d0bc526c","Type":"ContainerDied","Data":"7a6897c11cd7c8dd41c439b33fdfa5442e65392633dd661d59b69c959d18054d"} Dec 09 12:30:56 crc kubenswrapper[4970]: I1209 12:30:56.776685 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 12:30:56 crc kubenswrapper[4970]: I1209 12:30:56.776688 4970 scope.go:117] "RemoveContainer" containerID="0f59d5b878c07e576ca2088127160410486673ce9e905137d202896bc0c8e4a6" Dec 09 12:30:56 crc kubenswrapper[4970]: I1209 12:30:56.824785 4970 scope.go:117] "RemoveContainer" containerID="16680955d0877f538c5d5884e46d9c483b9a6ea4955dab5feef8cb2bb473df71" Dec 09 12:30:56 crc kubenswrapper[4970]: I1209 12:30:56.851888 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 12:30:56 crc kubenswrapper[4970]: I1209 12:30:56.875136 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 12:30:56 crc kubenswrapper[4970]: I1209 12:30:56.892996 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 09 12:30:56 crc kubenswrapper[4970]: E1209 12:30:56.893891 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adece173-10f9-4c86-aa83-2626d0bc526c" containerName="nova-metadata-log" Dec 09 12:30:56 crc kubenswrapper[4970]: I1209 12:30:56.893976 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="adece173-10f9-4c86-aa83-2626d0bc526c" containerName="nova-metadata-log" Dec 09 12:30:56 crc kubenswrapper[4970]: E1209 12:30:56.894079 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adece173-10f9-4c86-aa83-2626d0bc526c" containerName="nova-metadata-metadata" Dec 09 12:30:56 crc kubenswrapper[4970]: I1209 12:30:56.894141 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="adece173-10f9-4c86-aa83-2626d0bc526c" containerName="nova-metadata-metadata" Dec 09 12:30:56 crc kubenswrapper[4970]: I1209 12:30:56.894518 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="adece173-10f9-4c86-aa83-2626d0bc526c" containerName="nova-metadata-metadata" Dec 09 12:30:56 crc kubenswrapper[4970]: I1209 12:30:56.894599 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="adece173-10f9-4c86-aa83-2626d0bc526c" containerName="nova-metadata-log" Dec 09 
12:30:56 crc kubenswrapper[4970]: I1209 12:30:56.896193 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 12:30:56 crc kubenswrapper[4970]: I1209 12:30:56.900534 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 09 12:30:56 crc kubenswrapper[4970]: I1209 12:30:56.900536 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 09 12:30:56 crc kubenswrapper[4970]: I1209 12:30:56.906736 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 12:30:56 crc kubenswrapper[4970]: I1209 12:30:56.936884 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35a77809-92d3-4627-8adb-2654a71586e8-logs\") pod \"nova-metadata-0\" (UID: \"35a77809-92d3-4627-8adb-2654a71586e8\") " pod="openstack/nova-metadata-0" Dec 09 12:30:56 crc kubenswrapper[4970]: I1209 12:30:56.936948 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35a77809-92d3-4627-8adb-2654a71586e8-config-data\") pod \"nova-metadata-0\" (UID: \"35a77809-92d3-4627-8adb-2654a71586e8\") " pod="openstack/nova-metadata-0" Dec 09 12:30:56 crc kubenswrapper[4970]: I1209 12:30:56.937072 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/35a77809-92d3-4627-8adb-2654a71586e8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"35a77809-92d3-4627-8adb-2654a71586e8\") " pod="openstack/nova-metadata-0" Dec 09 12:30:56 crc kubenswrapper[4970]: I1209 12:30:56.937197 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35a77809-92d3-4627-8adb-2654a71586e8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"35a77809-92d3-4627-8adb-2654a71586e8\") " pod="openstack/nova-metadata-0" Dec 09 12:30:56 crc kubenswrapper[4970]: I1209 12:30:56.937352 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9wmq\" (UniqueName: \"kubernetes.io/projected/35a77809-92d3-4627-8adb-2654a71586e8-kube-api-access-x9wmq\") pod \"nova-metadata-0\" (UID: \"35a77809-92d3-4627-8adb-2654a71586e8\") " pod="openstack/nova-metadata-0" Dec 09 12:30:57 crc kubenswrapper[4970]: I1209 12:30:57.040476 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35a77809-92d3-4627-8adb-2654a71586e8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"35a77809-92d3-4627-8adb-2654a71586e8\") " pod="openstack/nova-metadata-0" Dec 09 12:30:57 crc kubenswrapper[4970]: I1209 12:30:57.040580 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9wmq\" (UniqueName: \"kubernetes.io/projected/35a77809-92d3-4627-8adb-2654a71586e8-kube-api-access-x9wmq\") pod \"nova-metadata-0\" (UID: \"35a77809-92d3-4627-8adb-2654a71586e8\") " pod="openstack/nova-metadata-0" Dec 09 12:30:57 crc kubenswrapper[4970]: I1209 12:30:57.040697 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/35a77809-92d3-4627-8adb-2654a71586e8-logs\") pod \"nova-metadata-0\" (UID: \"35a77809-92d3-4627-8adb-2654a71586e8\") " pod="openstack/nova-metadata-0" Dec 09 12:30:57 crc kubenswrapper[4970]: I1209 12:30:57.040726 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35a77809-92d3-4627-8adb-2654a71586e8-config-data\") pod \"nova-metadata-0\" (UID: \"35a77809-92d3-4627-8adb-2654a71586e8\") " pod="openstack/nova-metadata-0" Dec 09 12:30:57 crc kubenswrapper[4970]: I1209 12:30:57.040786 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/35a77809-92d3-4627-8adb-2654a71586e8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"35a77809-92d3-4627-8adb-2654a71586e8\") " pod="openstack/nova-metadata-0" Dec 09 12:30:57 crc kubenswrapper[4970]: I1209 12:30:57.048501 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35a77809-92d3-4627-8adb-2654a71586e8-logs\") pod \"nova-metadata-0\" (UID: \"35a77809-92d3-4627-8adb-2654a71586e8\") " pod="openstack/nova-metadata-0" Dec 09 12:30:57 crc kubenswrapper[4970]: I1209 12:30:57.049773 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35a77809-92d3-4627-8adb-2654a71586e8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"35a77809-92d3-4627-8adb-2654a71586e8\") " pod="openstack/nova-metadata-0" Dec 09 12:30:57 crc kubenswrapper[4970]: I1209 12:30:57.050961 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35a77809-92d3-4627-8adb-2654a71586e8-config-data\") pod \"nova-metadata-0\" (UID: \"35a77809-92d3-4627-8adb-2654a71586e8\") " pod="openstack/nova-metadata-0" Dec 09 12:30:57 crc kubenswrapper[4970]: I1209 12:30:57.052746 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/35a77809-92d3-4627-8adb-2654a71586e8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"35a77809-92d3-4627-8adb-2654a71586e8\") " pod="openstack/nova-metadata-0" Dec 09 12:30:57 crc kubenswrapper[4970]: I1209 12:30:57.078875 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9wmq\" (UniqueName: \"kubernetes.io/projected/35a77809-92d3-4627-8adb-2654a71586e8-kube-api-access-x9wmq\") pod \"nova-metadata-0\" (UID: \"35a77809-92d3-4627-8adb-2654a71586e8\") " pod="openstack/nova-metadata-0" Dec 09 12:30:57 crc kubenswrapper[4970]: E1209 12:30:57.121779 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podadece173_10f9_4c86_aa83_2626d0bc526c.slice\": RecentStats: unable to find data in memory cache]" Dec 09 12:30:57 crc kubenswrapper[4970]: I1209 12:30:57.221876 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 12:30:57 crc kubenswrapper[4970]: I1209 12:30:57.767826 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 12:30:57 crc kubenswrapper[4970]: W1209 12:30:57.785734 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35a77809_92d3_4627_8adb_2654a71586e8.slice/crio-5a9c5f15767db425180611c75edf01f6418e3e6f04104c8d060f311cd4dddbdf WatchSource:0}: Error finding container 5a9c5f15767db425180611c75edf01f6418e3e6f04104c8d060f311cd4dddbdf: Status 404 returned error can't find the container with id 5a9c5f15767db425180611c75edf01f6418e3e6f04104c8d060f311cd4dddbdf Dec 09 12:30:57 crc kubenswrapper[4970]: I1209 12:30:57.830165 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adece173-10f9-4c86-aa83-2626d0bc526c" path="/var/lib/kubelet/pods/adece173-10f9-4c86-aa83-2626d0bc526c/volumes" Dec 09 12:30:57 crc kubenswrapper[4970]: I1209 12:30:57.881915 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-5tqp5"] Dec 09 12:30:57 crc kubenswrapper[4970]: I1209 12:30:57.883926 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-5tqp5" Dec 09 12:30:57 crc kubenswrapper[4970]: I1209 12:30:57.892847 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-5tqp5"] Dec 09 12:30:57 crc kubenswrapper[4970]: I1209 12:30:57.906703 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-7d90-account-create-update-llgqt"] Dec 09 12:30:57 crc kubenswrapper[4970]: I1209 12:30:57.908884 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-7d90-account-create-update-llgqt" Dec 09 12:30:57 crc kubenswrapper[4970]: I1209 12:30:57.912676 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Dec 09 12:30:57 crc kubenswrapper[4970]: I1209 12:30:57.961887 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-7d90-account-create-update-llgqt"] Dec 09 12:30:58 crc kubenswrapper[4970]: I1209 12:30:58.080471 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctc7n\" (UniqueName: \"kubernetes.io/projected/b23be82c-e164-4128-9b65-dd173e1db58b-kube-api-access-ctc7n\") pod \"aodh-7d90-account-create-update-llgqt\" (UID: \"b23be82c-e164-4128-9b65-dd173e1db58b\") " pod="openstack/aodh-7d90-account-create-update-llgqt" Dec 09 12:30:58 crc kubenswrapper[4970]: I1209 12:30:58.080771 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcwhp\" (UniqueName: \"kubernetes.io/projected/3430931f-908e-4e62-8711-e8e8e73d9334-kube-api-access-fcwhp\") pod \"aodh-db-create-5tqp5\" (UID: \"3430931f-908e-4e62-8711-e8e8e73d9334\") " pod="openstack/aodh-db-create-5tqp5" Dec 09 12:30:58 crc kubenswrapper[4970]: I1209 12:30:58.080865 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b23be82c-e164-4128-9b65-dd173e1db58b-operator-scripts\") pod \"aodh-7d90-account-create-update-llgqt\" (UID: \"b23be82c-e164-4128-9b65-dd173e1db58b\") " pod="openstack/aodh-7d90-account-create-update-llgqt" Dec 09 12:30:58 crc kubenswrapper[4970]: I1209 12:30:58.080906 4970 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3430931f-908e-4e62-8711-e8e8e73d9334-operator-scripts\") pod \"aodh-db-create-5tqp5\" (UID: \"3430931f-908e-4e62-8711-e8e8e73d9334\") " pod="openstack/aodh-db-create-5tqp5" Dec 09 12:30:58 crc kubenswrapper[4970]: I1209 12:30:58.182456 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b23be82c-e164-4128-9b65-dd173e1db58b-operator-scripts\") pod \"aodh-7d90-account-create-update-llgqt\" (UID: \"b23be82c-e164-4128-9b65-dd173e1db58b\") " pod="openstack/aodh-7d90-account-create-update-llgqt" Dec 09 12:30:58 crc kubenswrapper[4970]: I1209 12:30:58.182500 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3430931f-908e-4e62-8711-e8e8e73d9334-operator-scripts\") pod \"aodh-db-create-5tqp5\" (UID: \"3430931f-908e-4e62-8711-e8e8e73d9334\") " pod="openstack/aodh-db-create-5tqp5" Dec 09 12:30:58 crc kubenswrapper[4970]: I1209 12:30:58.182601 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctc7n\" (UniqueName: \"kubernetes.io/projected/b23be82c-e164-4128-9b65-dd173e1db58b-kube-api-access-ctc7n\") pod \"aodh-7d90-account-create-update-llgqt\" (UID: \"b23be82c-e164-4128-9b65-dd173e1db58b\") " pod="openstack/aodh-7d90-account-create-update-llgqt" Dec 09 12:30:58 crc kubenswrapper[4970]: I1209 12:30:58.182770 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcwhp\" (UniqueName: \"kubernetes.io/projected/3430931f-908e-4e62-8711-e8e8e73d9334-kube-api-access-fcwhp\") pod \"aodh-db-create-5tqp5\" (UID: \"3430931f-908e-4e62-8711-e8e8e73d9334\") " pod="openstack/aodh-db-create-5tqp5" Dec 09 12:30:58 crc kubenswrapper[4970]: I1209 12:30:58.183805 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b23be82c-e164-4128-9b65-dd173e1db58b-operator-scripts\") pod \"aodh-7d90-account-create-update-llgqt\" (UID: \"b23be82c-e164-4128-9b65-dd173e1db58b\") " pod="openstack/aodh-7d90-account-create-update-llgqt" Dec 09 12:30:58 crc kubenswrapper[4970]: I1209 12:30:58.184309 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3430931f-908e-4e62-8711-e8e8e73d9334-operator-scripts\") pod \"aodh-db-create-5tqp5\" (UID: \"3430931f-908e-4e62-8711-e8e8e73d9334\") " pod="openstack/aodh-db-create-5tqp5" Dec 09 12:30:58 crc kubenswrapper[4970]: I1209 12:30:58.204708 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcwhp\" (UniqueName: \"kubernetes.io/projected/3430931f-908e-4e62-8711-e8e8e73d9334-kube-api-access-fcwhp\") pod \"aodh-db-create-5tqp5\" (UID: \"3430931f-908e-4e62-8711-e8e8e73d9334\") " pod="openstack/aodh-db-create-5tqp5" Dec 09 12:30:58 crc kubenswrapper[4970]: I1209 12:30:58.205368 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctc7n\" (UniqueName: \"kubernetes.io/projected/b23be82c-e164-4128-9b65-dd173e1db58b-kube-api-access-ctc7n\") pod \"aodh-7d90-account-create-update-llgqt\" (UID: \"b23be82c-e164-4128-9b65-dd173e1db58b\") " pod="openstack/aodh-7d90-account-create-update-llgqt" Dec 09 12:30:58 crc kubenswrapper[4970]: I1209 12:30:58.317665 4970 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-5tqp5" Dec 09 12:30:58 crc kubenswrapper[4970]: I1209 12:30:58.341445 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-7d90-account-create-update-llgqt" Dec 09 12:30:58 crc kubenswrapper[4970]: I1209 12:30:58.700460 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 09 12:30:58 crc kubenswrapper[4970]: I1209 12:30:58.700961 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 09 12:30:58 crc kubenswrapper[4970]: I1209 12:30:58.739586 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:30:58 crc kubenswrapper[4970]: I1209 12:30:58.840308 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"35a77809-92d3-4627-8adb-2654a71586e8","Type":"ContainerStarted","Data":"0cfc0ae2fe82386cdd88bbef4aa6581a80f2bb83e2127454ef0a5d5da9bcf9a9"} Dec 09 12:30:58 crc kubenswrapper[4970]: I1209 12:30:58.840347 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"35a77809-92d3-4627-8adb-2654a71586e8","Type":"ContainerStarted","Data":"37acdd02f1a635f33a7df598fe67d0d20a7f2804d93d1682da17c74254078fba"} Dec 09 12:30:58 crc kubenswrapper[4970]: I1209 12:30:58.840358 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"35a77809-92d3-4627-8adb-2654a71586e8","Type":"ContainerStarted","Data":"5a9c5f15767db425180611c75edf01f6418e3e6f04104c8d060f311cd4dddbdf"} Dec 09 12:30:58 crc kubenswrapper[4970]: I1209 12:30:58.869235 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.8692161609999998 podStartE2EDuration="2.869216161s" podCreationTimestamp="2025-12-09 12:30:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:30:58.86879126 +0000 UTC m=+1471.429272321" watchObservedRunningTime="2025-12-09 12:30:58.869216161 +0000 UTC m=+1471.429697212" Dec 09 12:30:58 crc kubenswrapper[4970]: I1209 12:30:58.909102 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-5tqp5"] Dec 09 12:30:59 crc kubenswrapper[4970]: I1209 12:30:59.071677 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 09 12:30:59 crc kubenswrapper[4970]: I1209 12:30:59.071713 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 09 12:30:59 crc kubenswrapper[4970]: I1209 12:30:59.100706 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-7d90-account-create-update-llgqt"] Dec 09 12:30:59 crc kubenswrapper[4970]: I1209 12:30:59.112333 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 09 12:30:59 crc kubenswrapper[4970]: I1209 12:30:59.217052 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9b86998b5-rg4sn" Dec 09 12:30:59 crc kubenswrapper[4970]: I1209 12:30:59.339203 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-5ctr9"] Dec 09 12:30:59 crc kubenswrapper[4970]: I1209 12:30:59.343342 4970 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/dnsmasq-dns-7756b9d78c-5ctr9" podUID="e36a2df8-01ba-4868-8903-8b753488ea78" containerName="dnsmasq-dns" containerID="cri-o://258cea5cfa792915c382b3faa727ea7d7c5cc9aab9fcf2a628fd5613341be154" gracePeriod=10 Dec 09 12:30:59 crc kubenswrapper[4970]: I1209 12:30:59.788434 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c4cc45c7-93b9-4be2-98f7-3ac2d181ee95" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.230:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 12:30:59 crc kubenswrapper[4970]: I1209 12:30:59.788623 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c4cc45c7-93b9-4be2-98f7-3ac2d181ee95" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.230:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 12:30:59 crc kubenswrapper[4970]: I1209 12:30:59.871600 4970 generic.go:334] "Generic (PLEG): container finished" podID="e36a2df8-01ba-4868-8903-8b753488ea78" containerID="258cea5cfa792915c382b3faa727ea7d7c5cc9aab9fcf2a628fd5613341be154" exitCode=0 Dec 09 12:30:59 crc kubenswrapper[4970]: I1209 12:30:59.871719 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-5ctr9" event={"ID":"e36a2df8-01ba-4868-8903-8b753488ea78","Type":"ContainerDied","Data":"258cea5cfa792915c382b3faa727ea7d7c5cc9aab9fcf2a628fd5613341be154"} Dec 09 12:30:59 crc kubenswrapper[4970]: I1209 12:30:59.888825 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-5tqp5" event={"ID":"3430931f-908e-4e62-8711-e8e8e73d9334","Type":"ContainerStarted","Data":"9e665c463f13323cda9ea7acc65f175a47e4aec7431f7f9462a89b721e70e4e0"} Dec 09 12:30:59 crc kubenswrapper[4970]: I1209 12:30:59.888916 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-5tqp5" event={"ID":"3430931f-908e-4e62-8711-e8e8e73d9334","Type":"ContainerStarted","Data":"c107d20188c3c850c63dc2d5094d5457d3f293b9f42b763f53bc631bf0be56d8"} Dec 09 12:30:59 crc kubenswrapper[4970]: I1209 12:30:59.902205 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-7d90-account-create-update-llgqt" event={"ID":"b23be82c-e164-4128-9b65-dd173e1db58b","Type":"ContainerStarted","Data":"65159f9d640cbebd055aa85c5fedd9135fdbea41d060e380f315b07df21c9d56"} Dec 09 12:30:59 crc kubenswrapper[4970]: I1209 12:30:59.902262 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-7d90-account-create-update-llgqt" event={"ID":"b23be82c-e164-4128-9b65-dd173e1db58b","Type":"ContainerStarted","Data":"3c245114a36edd836952c2eddab92788c4f509158f8371aa982700514e670b54"} Dec 09 12:30:59 crc kubenswrapper[4970]: I1209 12:30:59.921376 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-create-5tqp5" podStartSLOduration=2.921355166 podStartE2EDuration="2.921355166s" podCreationTimestamp="2025-12-09 12:30:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:30:59.911035862 +0000 UTC m=+1472.471516903" watchObservedRunningTime="2025-12-09 12:30:59.921355166 +0000 UTC m=+1472.481836217" Dec 09 12:30:59 crc kubenswrapper[4970]: I1209 12:30:59.963675 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-7d90-account-create-update-llgqt" 
podStartSLOduration=2.963646828 podStartE2EDuration="2.963646828s" podCreationTimestamp="2025-12-09 12:30:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:30:59.93057614 +0000 UTC m=+1472.491057191" watchObservedRunningTime="2025-12-09 12:30:59.963646828 +0000 UTC m=+1472.524127879" Dec 09 12:30:59 crc kubenswrapper[4970]: I1209 12:30:59.988294 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 09 12:31:00 crc kubenswrapper[4970]: I1209 12:31:00.131451 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-5ctr9" Dec 09 12:31:00 crc kubenswrapper[4970]: I1209 12:31:00.218855 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbnhd\" (UniqueName: \"kubernetes.io/projected/e36a2df8-01ba-4868-8903-8b753488ea78-kube-api-access-wbnhd\") pod \"e36a2df8-01ba-4868-8903-8b753488ea78\" (UID: \"e36a2df8-01ba-4868-8903-8b753488ea78\") " Dec 09 12:31:00 crc kubenswrapper[4970]: I1209 12:31:00.218981 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e36a2df8-01ba-4868-8903-8b753488ea78-ovsdbserver-sb\") pod \"e36a2df8-01ba-4868-8903-8b753488ea78\" (UID: \"e36a2df8-01ba-4868-8903-8b753488ea78\") " Dec 09 12:31:00 crc kubenswrapper[4970]: I1209 12:31:00.219082 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e36a2df8-01ba-4868-8903-8b753488ea78-ovsdbserver-nb\") pod \"e36a2df8-01ba-4868-8903-8b753488ea78\" (UID: \"e36a2df8-01ba-4868-8903-8b753488ea78\") " Dec 09 12:31:00 crc kubenswrapper[4970]: I1209 12:31:00.219122 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e36a2df8-01ba-4868-8903-8b753488ea78-config\") pod \"e36a2df8-01ba-4868-8903-8b753488ea78\" (UID: \"e36a2df8-01ba-4868-8903-8b753488ea78\") " Dec 09 12:31:00 crc kubenswrapper[4970]: I1209 12:31:00.219182 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e36a2df8-01ba-4868-8903-8b753488ea78-dns-svc\") pod \"e36a2df8-01ba-4868-8903-8b753488ea78\" (UID: \"e36a2df8-01ba-4868-8903-8b753488ea78\") " Dec 09 12:31:00 crc kubenswrapper[4970]: I1209 12:31:00.219375 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e36a2df8-01ba-4868-8903-8b753488ea78-dns-swift-storage-0\") pod \"e36a2df8-01ba-4868-8903-8b753488ea78\" (UID: \"e36a2df8-01ba-4868-8903-8b753488ea78\") " Dec 09 12:31:00 crc kubenswrapper[4970]: I1209 12:31:00.226628 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e36a2df8-01ba-4868-8903-8b753488ea78-kube-api-access-wbnhd" (OuterVolumeSpecName: "kube-api-access-wbnhd") pod "e36a2df8-01ba-4868-8903-8b753488ea78" (UID: "e36a2df8-01ba-4868-8903-8b753488ea78"). InnerVolumeSpecName "kube-api-access-wbnhd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:31:00 crc kubenswrapper[4970]: I1209 12:31:00.286948 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e36a2df8-01ba-4868-8903-8b753488ea78-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e36a2df8-01ba-4868-8903-8b753488ea78" (UID: "e36a2df8-01ba-4868-8903-8b753488ea78"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:31:00 crc kubenswrapper[4970]: I1209 12:31:00.312925 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e36a2df8-01ba-4868-8903-8b753488ea78-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e36a2df8-01ba-4868-8903-8b753488ea78" (UID: "e36a2df8-01ba-4868-8903-8b753488ea78"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:31:00 crc kubenswrapper[4970]: I1209 12:31:00.317968 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e36a2df8-01ba-4868-8903-8b753488ea78-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e36a2df8-01ba-4868-8903-8b753488ea78" (UID: "e36a2df8-01ba-4868-8903-8b753488ea78"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:31:00 crc kubenswrapper[4970]: I1209 12:31:00.319126 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e36a2df8-01ba-4868-8903-8b753488ea78-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e36a2df8-01ba-4868-8903-8b753488ea78" (UID: "e36a2df8-01ba-4868-8903-8b753488ea78"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:31:00 crc kubenswrapper[4970]: I1209 12:31:00.322695 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbnhd\" (UniqueName: \"kubernetes.io/projected/e36a2df8-01ba-4868-8903-8b753488ea78-kube-api-access-wbnhd\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:00 crc kubenswrapper[4970]: I1209 12:31:00.322724 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e36a2df8-01ba-4868-8903-8b753488ea78-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:00 crc kubenswrapper[4970]: I1209 12:31:00.322795 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e36a2df8-01ba-4868-8903-8b753488ea78-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:00 crc kubenswrapper[4970]: I1209 12:31:00.322805 4970 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e36a2df8-01ba-4868-8903-8b753488ea78-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:00 crc kubenswrapper[4970]: I1209 12:31:00.322817 4970 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e36a2df8-01ba-4868-8903-8b753488ea78-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:00 crc kubenswrapper[4970]: I1209 12:31:00.346059 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e36a2df8-01ba-4868-8903-8b753488ea78-config" (OuterVolumeSpecName: "config") pod "e36a2df8-01ba-4868-8903-8b753488ea78" (UID: "e36a2df8-01ba-4868-8903-8b753488ea78"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:31:00 crc kubenswrapper[4970]: I1209 12:31:00.424664 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e36a2df8-01ba-4868-8903-8b753488ea78-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:00 crc kubenswrapper[4970]: I1209 12:31:00.914781 4970 generic.go:334] "Generic (PLEG): container finished" podID="8ed63438-7cd6-4d2f-b591-2e5a7c74a94b" containerID="7ac67b7728e2e1ecb824af3fa21339f8b5a1397f3fb341ba9a20469473e853c6" exitCode=0 Dec 09 12:31:00 crc kubenswrapper[4970]: I1209 12:31:00.914863 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xx4x2" event={"ID":"8ed63438-7cd6-4d2f-b591-2e5a7c74a94b","Type":"ContainerDied","Data":"7ac67b7728e2e1ecb824af3fa21339f8b5a1397f3fb341ba9a20469473e853c6"} Dec 09 12:31:00 crc kubenswrapper[4970]: I1209 12:31:00.918073 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-5ctr9" event={"ID":"e36a2df8-01ba-4868-8903-8b753488ea78","Type":"ContainerDied","Data":"2fc4b2647ad75948d033ce816cad22b2f745538d84240a5e38452c9b03ad6509"} Dec 09 12:31:00 crc kubenswrapper[4970]: I1209 12:31:00.918170 4970 scope.go:117] "RemoveContainer" containerID="258cea5cfa792915c382b3faa727ea7d7c5cc9aab9fcf2a628fd5613341be154" Dec 09 12:31:00 crc kubenswrapper[4970]: I1209 12:31:00.918093 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-5ctr9" Dec 09 12:31:00 crc kubenswrapper[4970]: I1209 12:31:00.920879 4970 generic.go:334] "Generic (PLEG): container finished" podID="3430931f-908e-4e62-8711-e8e8e73d9334" containerID="9e665c463f13323cda9ea7acc65f175a47e4aec7431f7f9462a89b721e70e4e0" exitCode=0 Dec 09 12:31:00 crc kubenswrapper[4970]: I1209 12:31:00.921007 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-5tqp5" event={"ID":"3430931f-908e-4e62-8711-e8e8e73d9334","Type":"ContainerDied","Data":"9e665c463f13323cda9ea7acc65f175a47e4aec7431f7f9462a89b721e70e4e0"} Dec 09 12:31:00 crc kubenswrapper[4970]: I1209 12:31:00.929924 4970 generic.go:334] "Generic (PLEG): container finished" podID="b23be82c-e164-4128-9b65-dd173e1db58b" containerID="65159f9d640cbebd055aa85c5fedd9135fdbea41d060e380f315b07df21c9d56" exitCode=0 Dec 09 12:31:00 crc kubenswrapper[4970]: I1209 12:31:00.931091 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-7d90-account-create-update-llgqt" event={"ID":"b23be82c-e164-4128-9b65-dd173e1db58b","Type":"ContainerDied","Data":"65159f9d640cbebd055aa85c5fedd9135fdbea41d060e380f315b07df21c9d56"} Dec 09 12:31:00 crc kubenswrapper[4970]: I1209 12:31:00.972220 4970 scope.go:117] "RemoveContainer" containerID="fa317a5adf03adfe5fa2c741c5ef850b1b5231994e1df3c50dbb109bf4fe34e5" Dec 09 12:31:01 crc kubenswrapper[4970]: I1209 12:31:01.017742 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-5ctr9"] Dec 09 12:31:01 crc kubenswrapper[4970]: I1209 12:31:01.029538 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-5ctr9"] Dec 09 12:31:01 crc kubenswrapper[4970]: I1209 12:31:01.824037 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e36a2df8-01ba-4868-8903-8b753488ea78" path="/var/lib/kubelet/pods/e36a2df8-01ba-4868-8903-8b753488ea78/volumes" Dec 09 12:31:01 crc kubenswrapper[4970]: I1209 12:31:01.946163 4970 generic.go:334] "Generic (PLEG): 
container finished" podID="854860ea-b756-4edd-88ea-6f1ad333f7bc" containerID="9b9df0970da1affe9c1b1f6c1c0086bbcad110199673039f43bc7eedd8e3f436" exitCode=0 Dec 09 12:31:01 crc kubenswrapper[4970]: I1209 12:31:01.946287 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-dgscz" event={"ID":"854860ea-b756-4edd-88ea-6f1ad333f7bc","Type":"ContainerDied","Data":"9b9df0970da1affe9c1b1f6c1c0086bbcad110199673039f43bc7eedd8e3f436"} Dec 09 12:31:02 crc kubenswrapper[4970]: I1209 12:31:02.235855 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 09 12:31:02 crc kubenswrapper[4970]: I1209 12:31:02.236157 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 09 12:31:02 crc kubenswrapper[4970]: I1209 12:31:02.619427 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xx4x2" Dec 09 12:31:02 crc kubenswrapper[4970]: I1209 12:31:02.636115 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-7d90-account-create-update-llgqt" Dec 09 12:31:02 crc kubenswrapper[4970]: I1209 12:31:02.639682 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-5tqp5" Dec 09 12:31:02 crc kubenswrapper[4970]: I1209 12:31:02.688141 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ed63438-7cd6-4d2f-b591-2e5a7c74a94b-scripts\") pod \"8ed63438-7cd6-4d2f-b591-2e5a7c74a94b\" (UID: \"8ed63438-7cd6-4d2f-b591-2e5a7c74a94b\") " Dec 09 12:31:02 crc kubenswrapper[4970]: I1209 12:31:02.688367 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ed63438-7cd6-4d2f-b591-2e5a7c74a94b-config-data\") pod \"8ed63438-7cd6-4d2f-b591-2e5a7c74a94b\" (UID: \"8ed63438-7cd6-4d2f-b591-2e5a7c74a94b\") " Dec 09 12:31:02 crc kubenswrapper[4970]: I1209 12:31:02.688505 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sz6rq\" (UniqueName: \"kubernetes.io/projected/8ed63438-7cd6-4d2f-b591-2e5a7c74a94b-kube-api-access-sz6rq\") pod \"8ed63438-7cd6-4d2f-b591-2e5a7c74a94b\" (UID: \"8ed63438-7cd6-4d2f-b591-2e5a7c74a94b\") " Dec 09 12:31:02 crc kubenswrapper[4970]: I1209 12:31:02.688541 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed63438-7cd6-4d2f-b591-2e5a7c74a94b-combined-ca-bundle\") pod \"8ed63438-7cd6-4d2f-b591-2e5a7c74a94b\" (UID: \"8ed63438-7cd6-4d2f-b591-2e5a7c74a94b\") " Dec 09 12:31:02 crc kubenswrapper[4970]: I1209 12:31:02.696893 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ed63438-7cd6-4d2f-b591-2e5a7c74a94b-kube-api-access-sz6rq" (OuterVolumeSpecName: "kube-api-access-sz6rq") pod "8ed63438-7cd6-4d2f-b591-2e5a7c74a94b" (UID: "8ed63438-7cd6-4d2f-b591-2e5a7c74a94b"). InnerVolumeSpecName "kube-api-access-sz6rq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:31:02 crc kubenswrapper[4970]: I1209 12:31:02.700694 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ed63438-7cd6-4d2f-b591-2e5a7c74a94b-scripts" (OuterVolumeSpecName: "scripts") pod "8ed63438-7cd6-4d2f-b591-2e5a7c74a94b" (UID: "8ed63438-7cd6-4d2f-b591-2e5a7c74a94b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:02 crc kubenswrapper[4970]: I1209 12:31:02.726812 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ed63438-7cd6-4d2f-b591-2e5a7c74a94b-config-data" (OuterVolumeSpecName: "config-data") pod "8ed63438-7cd6-4d2f-b591-2e5a7c74a94b" (UID: "8ed63438-7cd6-4d2f-b591-2e5a7c74a94b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:02 crc kubenswrapper[4970]: I1209 12:31:02.748262 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ed63438-7cd6-4d2f-b591-2e5a7c74a94b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ed63438-7cd6-4d2f-b591-2e5a7c74a94b" (UID: "8ed63438-7cd6-4d2f-b591-2e5a7c74a94b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:02 crc kubenswrapper[4970]: I1209 12:31:02.790020 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctc7n\" (UniqueName: \"kubernetes.io/projected/b23be82c-e164-4128-9b65-dd173e1db58b-kube-api-access-ctc7n\") pod \"b23be82c-e164-4128-9b65-dd173e1db58b\" (UID: \"b23be82c-e164-4128-9b65-dd173e1db58b\") " Dec 09 12:31:02 crc kubenswrapper[4970]: I1209 12:31:02.790176 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3430931f-908e-4e62-8711-e8e8e73d9334-operator-scripts\") pod \"3430931f-908e-4e62-8711-e8e8e73d9334\" (UID: \"3430931f-908e-4e62-8711-e8e8e73d9334\") " Dec 09 12:31:02 crc kubenswrapper[4970]: I1209 12:31:02.790776 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3430931f-908e-4e62-8711-e8e8e73d9334-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3430931f-908e-4e62-8711-e8e8e73d9334" (UID: "3430931f-908e-4e62-8711-e8e8e73d9334"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:31:02 crc kubenswrapper[4970]: I1209 12:31:02.791364 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b23be82c-e164-4128-9b65-dd173e1db58b-operator-scripts\") pod \"b23be82c-e164-4128-9b65-dd173e1db58b\" (UID: \"b23be82c-e164-4128-9b65-dd173e1db58b\") " Dec 09 12:31:02 crc kubenswrapper[4970]: I1209 12:31:02.791673 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcwhp\" (UniqueName: \"kubernetes.io/projected/3430931f-908e-4e62-8711-e8e8e73d9334-kube-api-access-fcwhp\") pod \"3430931f-908e-4e62-8711-e8e8e73d9334\" (UID: \"3430931f-908e-4e62-8711-e8e8e73d9334\") " Dec 09 12:31:02 crc kubenswrapper[4970]: I1209 12:31:02.791766 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b23be82c-e164-4128-9b65-dd173e1db58b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b23be82c-e164-4128-9b65-dd173e1db58b" (UID: "b23be82c-e164-4128-9b65-dd173e1db58b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:31:02 crc kubenswrapper[4970]: I1209 12:31:02.793541 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ed63438-7cd6-4d2f-b591-2e5a7c74a94b-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:02 crc kubenswrapper[4970]: I1209 12:31:02.793567 4970 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b23be82c-e164-4128-9b65-dd173e1db58b-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:02 crc kubenswrapper[4970]: I1209 12:31:02.793581 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ed63438-7cd6-4d2f-b591-2e5a7c74a94b-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:02 crc kubenswrapper[4970]: I1209 12:31:02.793595 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sz6rq\" (UniqueName: \"kubernetes.io/projected/8ed63438-7cd6-4d2f-b591-2e5a7c74a94b-kube-api-access-sz6rq\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:02 crc kubenswrapper[4970]: I1209 12:31:02.793607 4970 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3430931f-908e-4e62-8711-e8e8e73d9334-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:02 crc kubenswrapper[4970]: I1209 12:31:02.793620 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ed63438-7cd6-4d2f-b591-2e5a7c74a94b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:02 crc kubenswrapper[4970]: I1209 12:31:02.794837 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b23be82c-e164-4128-9b65-dd173e1db58b-kube-api-access-ctc7n" (OuterVolumeSpecName: "kube-api-access-ctc7n") pod "b23be82c-e164-4128-9b65-dd173e1db58b" (UID: "b23be82c-e164-4128-9b65-dd173e1db58b"). InnerVolumeSpecName "kube-api-access-ctc7n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:31:02 crc kubenswrapper[4970]: I1209 12:31:02.795166 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3430931f-908e-4e62-8711-e8e8e73d9334-kube-api-access-fcwhp" (OuterVolumeSpecName: "kube-api-access-fcwhp") pod "3430931f-908e-4e62-8711-e8e8e73d9334" (UID: "3430931f-908e-4e62-8711-e8e8e73d9334"). InnerVolumeSpecName "kube-api-access-fcwhp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:31:02 crc kubenswrapper[4970]: I1209 12:31:02.895885 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctc7n\" (UniqueName: \"kubernetes.io/projected/b23be82c-e164-4128-9b65-dd173e1db58b-kube-api-access-ctc7n\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:02 crc kubenswrapper[4970]: I1209 12:31:02.895922 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcwhp\" (UniqueName: \"kubernetes.io/projected/3430931f-908e-4e62-8711-e8e8e73d9334-kube-api-access-fcwhp\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:02 crc kubenswrapper[4970]: I1209 12:31:02.978726 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-xx4x2" Dec 09 12:31:02 crc kubenswrapper[4970]: I1209 12:31:02.978704 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-xx4x2" event={"ID":"8ed63438-7cd6-4d2f-b591-2e5a7c74a94b","Type":"ContainerDied","Data":"3a9790628799d0667aac1b81d273265c9892616ee7ae2037c87c3fc2ad95a0ba"} Dec 09 12:31:02 crc kubenswrapper[4970]: I1209 12:31:02.978909 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a9790628799d0667aac1b81d273265c9892616ee7ae2037c87c3fc2ad95a0ba" Dec 09 12:31:02 crc kubenswrapper[4970]: I1209 12:31:02.985852 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-5tqp5" Dec 09 12:31:02 crc kubenswrapper[4970]: I1209 12:31:02.985846 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-5tqp5" event={"ID":"3430931f-908e-4e62-8711-e8e8e73d9334","Type":"ContainerDied","Data":"c107d20188c3c850c63dc2d5094d5457d3f293b9f42b763f53bc631bf0be56d8"} Dec 09 12:31:02 crc kubenswrapper[4970]: I1209 12:31:02.986003 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c107d20188c3c850c63dc2d5094d5457d3f293b9f42b763f53bc631bf0be56d8" Dec 09 12:31:02 crc kubenswrapper[4970]: I1209 12:31:02.991064 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-7d90-account-create-update-llgqt" Dec 09 12:31:02 crc kubenswrapper[4970]: I1209 12:31:02.991368 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-7d90-account-create-update-llgqt" event={"ID":"b23be82c-e164-4128-9b65-dd173e1db58b","Type":"ContainerDied","Data":"3c245114a36edd836952c2eddab92788c4f509158f8371aa982700514e670b54"} Dec 09 12:31:02 crc kubenswrapper[4970]: I1209 12:31:02.991419 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c245114a36edd836952c2eddab92788c4f509158f8371aa982700514e670b54" Dec 09 12:31:03 crc kubenswrapper[4970]: I1209 12:31:03.155724 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 09 12:31:03 crc kubenswrapper[4970]: I1209 12:31:03.156290 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c4cc45c7-93b9-4be2-98f7-3ac2d181ee95" containerName="nova-api-log" containerID="cri-o://95706b6fee07f6803ccfb78c7c4c37fc7f7cd694e0307898274d367daac2616b" gracePeriod=30 Dec 09 12:31:03 crc kubenswrapper[4970]: I1209 12:31:03.156359 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c4cc45c7-93b9-4be2-98f7-3ac2d181ee95" containerName="nova-api-api" containerID="cri-o://5251258e29df36d9204f7c420e4f0801862b2d62c485543f996a6878e5374db5" gracePeriod=30 Dec 09 12:31:03 crc kubenswrapper[4970]: I1209 12:31:03.169465 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 12:31:03 crc kubenswrapper[4970]: I1209 12:31:03.169700 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="92d360ed-aab2-478f-985d-eef9214facf2" containerName="nova-scheduler-scheduler" containerID="cri-o://91c1f28e4618e0ea4c1f3f3a480261000caeec79eba8ffe4c292d6bf007949d7" gracePeriod=30 Dec 09 12:31:03 crc kubenswrapper[4970]: I1209 12:31:03.224571 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 12:31:03 crc kubenswrapper[4970]: I1209 12:31:03.224873 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="35a77809-92d3-4627-8adb-2654a71586e8" containerName="nova-metadata-log" containerID="cri-o://37acdd02f1a635f33a7df598fe67d0d20a7f2804d93d1682da17c74254078fba" gracePeriod=30 Dec 09 12:31:03 crc kubenswrapper[4970]: I1209 12:31:03.225006 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="35a77809-92d3-4627-8adb-2654a71586e8" containerName="nova-metadata-metadata" containerID="cri-o://0cfc0ae2fe82386cdd88bbef4aa6581a80f2bb83e2127454ef0a5d5da9bcf9a9" gracePeriod=30 Dec 09 12:31:03 crc kubenswrapper[4970]: I1209 12:31:03.396914 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-dgscz" Dec 09 12:31:03 crc kubenswrapper[4970]: I1209 12:31:03.516213 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m76xd\" (UniqueName: \"kubernetes.io/projected/854860ea-b756-4edd-88ea-6f1ad333f7bc-kube-api-access-m76xd\") pod \"854860ea-b756-4edd-88ea-6f1ad333f7bc\" (UID: \"854860ea-b756-4edd-88ea-6f1ad333f7bc\") " Dec 09 12:31:03 crc kubenswrapper[4970]: I1209 12:31:03.516442 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/854860ea-b756-4edd-88ea-6f1ad333f7bc-config-data\") pod \"854860ea-b756-4edd-88ea-6f1ad333f7bc\" (UID: \"854860ea-b756-4edd-88ea-6f1ad333f7bc\") " Dec 09 12:31:03 crc kubenswrapper[4970]: I1209 12:31:03.516716 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854860ea-b756-4edd-88ea-6f1ad333f7bc-combined-ca-bundle\") pod \"854860ea-b756-4edd-88ea-6f1ad333f7bc\" (UID: \"854860ea-b756-4edd-88ea-6f1ad333f7bc\") " Dec 09 12:31:03 crc kubenswrapper[4970]: I1209 12:31:03.516761 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/854860ea-b756-4edd-88ea-6f1ad333f7bc-scripts\") pod \"854860ea-b756-4edd-88ea-6f1ad333f7bc\" (UID: \"854860ea-b756-4edd-88ea-6f1ad333f7bc\") " Dec 09 12:31:03 crc kubenswrapper[4970]: I1209 12:31:03.522294 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/854860ea-b756-4edd-88ea-6f1ad333f7bc-kube-api-access-m76xd" (OuterVolumeSpecName: "kube-api-access-m76xd") pod "854860ea-b756-4edd-88ea-6f1ad333f7bc" (UID: "854860ea-b756-4edd-88ea-6f1ad333f7bc"). InnerVolumeSpecName "kube-api-access-m76xd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:31:03 crc kubenswrapper[4970]: I1209 12:31:03.525492 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/854860ea-b756-4edd-88ea-6f1ad333f7bc-scripts" (OuterVolumeSpecName: "scripts") pod "854860ea-b756-4edd-88ea-6f1ad333f7bc" (UID: "854860ea-b756-4edd-88ea-6f1ad333f7bc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:03 crc kubenswrapper[4970]: I1209 12:31:03.552639 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/854860ea-b756-4edd-88ea-6f1ad333f7bc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "854860ea-b756-4edd-88ea-6f1ad333f7bc" (UID: "854860ea-b756-4edd-88ea-6f1ad333f7bc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:03 crc kubenswrapper[4970]: I1209 12:31:03.562762 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/854860ea-b756-4edd-88ea-6f1ad333f7bc-config-data" (OuterVolumeSpecName: "config-data") pod "854860ea-b756-4edd-88ea-6f1ad333f7bc" (UID: "854860ea-b756-4edd-88ea-6f1ad333f7bc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:03 crc kubenswrapper[4970]: I1209 12:31:03.620437 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/854860ea-b756-4edd-88ea-6f1ad333f7bc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:03 crc kubenswrapper[4970]: I1209 12:31:03.620473 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/854860ea-b756-4edd-88ea-6f1ad333f7bc-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:03 crc kubenswrapper[4970]: I1209 12:31:03.620485 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m76xd\" (UniqueName: \"kubernetes.io/projected/854860ea-b756-4edd-88ea-6f1ad333f7bc-kube-api-access-m76xd\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:03 crc kubenswrapper[4970]: I1209 12:31:03.620498 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/854860ea-b756-4edd-88ea-6f1ad333f7bc-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:03 crc kubenswrapper[4970]: I1209 12:31:03.895772 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.007341 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-dgscz" event={"ID":"854860ea-b756-4edd-88ea-6f1ad333f7bc","Type":"ContainerDied","Data":"b97d3b705613db43350391e0663cac8325cfbd255591cc4e1114d0cafcbc1504"} Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.007398 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b97d3b705613db43350391e0663cac8325cfbd255591cc4e1114d0cafcbc1504" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.007691 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-dgscz" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.009372 4970 generic.go:334] "Generic (PLEG): container finished" podID="c4cc45c7-93b9-4be2-98f7-3ac2d181ee95" containerID="95706b6fee07f6803ccfb78c7c4c37fc7f7cd694e0307898274d367daac2616b" exitCode=143 Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.009441 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c4cc45c7-93b9-4be2-98f7-3ac2d181ee95","Type":"ContainerDied","Data":"95706b6fee07f6803ccfb78c7c4c37fc7f7cd694e0307898274d367daac2616b"} Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.012154 4970 generic.go:334] "Generic (PLEG): container finished" podID="35a77809-92d3-4627-8adb-2654a71586e8" containerID="0cfc0ae2fe82386cdd88bbef4aa6581a80f2bb83e2127454ef0a5d5da9bcf9a9" exitCode=0 Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.012188 4970 generic.go:334] "Generic (PLEG): container finished" podID="35a77809-92d3-4627-8adb-2654a71586e8" containerID="37acdd02f1a635f33a7df598fe67d0d20a7f2804d93d1682da17c74254078fba" exitCode=143 Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.012213 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"35a77809-92d3-4627-8adb-2654a71586e8","Type":"ContainerDied","Data":"0cfc0ae2fe82386cdd88bbef4aa6581a80f2bb83e2127454ef0a5d5da9bcf9a9"} Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.012268 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"35a77809-92d3-4627-8adb-2654a71586e8","Type":"ContainerDied","Data":"37acdd02f1a635f33a7df598fe67d0d20a7f2804d93d1682da17c74254078fba"} Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.012289 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"35a77809-92d3-4627-8adb-2654a71586e8","Type":"ContainerDied","Data":"5a9c5f15767db425180611c75edf01f6418e3e6f04104c8d060f311cd4dddbdf"} Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.012384 4970 scope.go:117] "RemoveContainer" containerID="0cfc0ae2fe82386cdd88bbef4aa6581a80f2bb83e2127454ef0a5d5da9bcf9a9" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.012447 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.038883 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35a77809-92d3-4627-8adb-2654a71586e8-config-data\") pod \"35a77809-92d3-4627-8adb-2654a71586e8\" (UID: \"35a77809-92d3-4627-8adb-2654a71586e8\") " Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.039041 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35a77809-92d3-4627-8adb-2654a71586e8-combined-ca-bundle\") pod \"35a77809-92d3-4627-8adb-2654a71586e8\" (UID: \"35a77809-92d3-4627-8adb-2654a71586e8\") " Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.039183 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35a77809-92d3-4627-8adb-2654a71586e8-logs\") pod \"35a77809-92d3-4627-8adb-2654a71586e8\" (UID: \"35a77809-92d3-4627-8adb-2654a71586e8\") " Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.039212 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/35a77809-92d3-4627-8adb-2654a71586e8-nova-metadata-tls-certs\") pod \"35a77809-92d3-4627-8adb-2654a71586e8\" (UID: \"35a77809-92d3-4627-8adb-2654a71586e8\") " Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.039240 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9wmq\" (UniqueName: \"kubernetes.io/projected/35a77809-92d3-4627-8adb-2654a71586e8-kube-api-access-x9wmq\") pod \"35a77809-92d3-4627-8adb-2654a71586e8\" (UID: \"35a77809-92d3-4627-8adb-2654a71586e8\") " Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.039848 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35a77809-92d3-4627-8adb-2654a71586e8-logs" (OuterVolumeSpecName: "logs") pod "35a77809-92d3-4627-8adb-2654a71586e8" (UID: "35a77809-92d3-4627-8adb-2654a71586e8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.040883 4970 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35a77809-92d3-4627-8adb-2654a71586e8-logs\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.050141 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35a77809-92d3-4627-8adb-2654a71586e8-kube-api-access-x9wmq" (OuterVolumeSpecName: "kube-api-access-x9wmq") pod "35a77809-92d3-4627-8adb-2654a71586e8" (UID: "35a77809-92d3-4627-8adb-2654a71586e8"). InnerVolumeSpecName "kube-api-access-x9wmq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.053126 4970 scope.go:117] "RemoveContainer" containerID="37acdd02f1a635f33a7df598fe67d0d20a7f2804d93d1682da17c74254078fba" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.054953 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 09 12:31:04 crc kubenswrapper[4970]: E1209 12:31:04.055693 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35a77809-92d3-4627-8adb-2654a71586e8" containerName="nova-metadata-log" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.055741 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="35a77809-92d3-4627-8adb-2654a71586e8" containerName="nova-metadata-log" Dec 09 12:31:04 crc kubenswrapper[4970]: E1209 12:31:04.055772 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3430931f-908e-4e62-8711-e8e8e73d9334" containerName="mariadb-database-create" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.055822 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="3430931f-908e-4e62-8711-e8e8e73d9334" containerName="mariadb-database-create" Dec 09 12:31:04 crc kubenswrapper[4970]: E1209 12:31:04.055856 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e36a2df8-01ba-4868-8903-8b753488ea78" containerName="init" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.055868 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="e36a2df8-01ba-4868-8903-8b753488ea78" containerName="init" Dec 09 12:31:04 crc kubenswrapper[4970]: E1209 12:31:04.055921 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e36a2df8-01ba-4868-8903-8b753488ea78" containerName="dnsmasq-dns" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.055933 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="e36a2df8-01ba-4868-8903-8b753488ea78" containerName="dnsmasq-dns" Dec 09 12:31:04 crc kubenswrapper[4970]: E1209 12:31:04.055944 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35a77809-92d3-4627-8adb-2654a71586e8" containerName="nova-metadata-metadata" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.055950 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="35a77809-92d3-4627-8adb-2654a71586e8" containerName="nova-metadata-metadata" Dec 09 12:31:04 crc kubenswrapper[4970]: E1209 12:31:04.055991 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="854860ea-b756-4edd-88ea-6f1ad333f7bc" containerName="nova-cell1-conductor-db-sync" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.056000 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="854860ea-b756-4edd-88ea-6f1ad333f7bc" containerName="nova-cell1-conductor-db-sync" Dec 09 12:31:04 crc kubenswrapper[4970]: E1209 12:31:04.056029 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b23be82c-e164-4128-9b65-dd173e1db58b" containerName="mariadb-account-create-update" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.056038 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23be82c-e164-4128-9b65-dd173e1db58b" containerName="mariadb-account-create-update" Dec 09 12:31:04 crc kubenswrapper[4970]: E1209 12:31:04.056099 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ed63438-7cd6-4d2f-b591-2e5a7c74a94b" containerName="nova-manage" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.056108 4970 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8ed63438-7cd6-4d2f-b591-2e5a7c74a94b" containerName="nova-manage" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.057357 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23be82c-e164-4128-9b65-dd173e1db58b" containerName="mariadb-account-create-update" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.057385 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ed63438-7cd6-4d2f-b591-2e5a7c74a94b" containerName="nova-manage" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.057413 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="35a77809-92d3-4627-8adb-2654a71586e8" containerName="nova-metadata-log" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.057432 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="e36a2df8-01ba-4868-8903-8b753488ea78" containerName="dnsmasq-dns" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.057459 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="854860ea-b756-4edd-88ea-6f1ad333f7bc" containerName="nova-cell1-conductor-db-sync" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.057478 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="3430931f-908e-4e62-8711-e8e8e73d9334" containerName="mariadb-database-create" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.057496 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="35a77809-92d3-4627-8adb-2654a71586e8" containerName="nova-metadata-metadata" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.062611 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.067236 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 09 12:31:04 crc kubenswrapper[4970]: E1209 12:31:04.078654 4970 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="91c1f28e4618e0ea4c1f3f3a480261000caeec79eba8ffe4c292d6bf007949d7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.090267 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35a77809-92d3-4627-8adb-2654a71586e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35a77809-92d3-4627-8adb-2654a71586e8" (UID: "35a77809-92d3-4627-8adb-2654a71586e8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:04 crc kubenswrapper[4970]: E1209 12:31:04.090810 4970 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="91c1f28e4618e0ea4c1f3f3a480261000caeec79eba8ffe4c292d6bf007949d7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 09 12:31:04 crc kubenswrapper[4970]: E1209 12:31:04.092623 4970 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="91c1f28e4618e0ea4c1f3f3a480261000caeec79eba8ffe4c292d6bf007949d7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 09 12:31:04 crc kubenswrapper[4970]: E1209 12:31:04.092688 4970 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="92d360ed-aab2-478f-985d-eef9214facf2" containerName="nova-scheduler-scheduler" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.094981 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35a77809-92d3-4627-8adb-2654a71586e8-config-data" (OuterVolumeSpecName: "config-data") pod "35a77809-92d3-4627-8adb-2654a71586e8" (UID: "35a77809-92d3-4627-8adb-2654a71586e8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.098026 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.104083 4970 scope.go:117] "RemoveContainer" containerID="0cfc0ae2fe82386cdd88bbef4aa6581a80f2bb83e2127454ef0a5d5da9bcf9a9" Dec 09 12:31:04 crc kubenswrapper[4970]: E1209 12:31:04.104442 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cfc0ae2fe82386cdd88bbef4aa6581a80f2bb83e2127454ef0a5d5da9bcf9a9\": container with ID starting with 0cfc0ae2fe82386cdd88bbef4aa6581a80f2bb83e2127454ef0a5d5da9bcf9a9 not found: ID does not exist" containerID="0cfc0ae2fe82386cdd88bbef4aa6581a80f2bb83e2127454ef0a5d5da9bcf9a9" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.104634 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cfc0ae2fe82386cdd88bbef4aa6581a80f2bb83e2127454ef0a5d5da9bcf9a9"} err="failed to get container status \"0cfc0ae2fe82386cdd88bbef4aa6581a80f2bb83e2127454ef0a5d5da9bcf9a9\": rpc error: code = NotFound desc = could not find container \"0cfc0ae2fe82386cdd88bbef4aa6581a80f2bb83e2127454ef0a5d5da9bcf9a9\": container with ID starting with 0cfc0ae2fe82386cdd88bbef4aa6581a80f2bb83e2127454ef0a5d5da9bcf9a9 not found: ID does not exist" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.104661 4970 scope.go:117] "RemoveContainer" containerID="37acdd02f1a635f33a7df598fe67d0d20a7f2804d93d1682da17c74254078fba" Dec 09 12:31:04 crc kubenswrapper[4970]: E1209 12:31:04.105153 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37acdd02f1a635f33a7df598fe67d0d20a7f2804d93d1682da17c74254078fba\": container with ID starting with 
37acdd02f1a635f33a7df598fe67d0d20a7f2804d93d1682da17c74254078fba not found: ID does not exist" containerID="37acdd02f1a635f33a7df598fe67d0d20a7f2804d93d1682da17c74254078fba" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.105188 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37acdd02f1a635f33a7df598fe67d0d20a7f2804d93d1682da17c74254078fba"} err="failed to get container status \"37acdd02f1a635f33a7df598fe67d0d20a7f2804d93d1682da17c74254078fba\": rpc error: code = NotFound desc = could not find container \"37acdd02f1a635f33a7df598fe67d0d20a7f2804d93d1682da17c74254078fba\": container with ID starting with 37acdd02f1a635f33a7df598fe67d0d20a7f2804d93d1682da17c74254078fba not found: ID does not exist" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.105219 4970 scope.go:117] "RemoveContainer" containerID="0cfc0ae2fe82386cdd88bbef4aa6581a80f2bb83e2127454ef0a5d5da9bcf9a9" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.105904 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cfc0ae2fe82386cdd88bbef4aa6581a80f2bb83e2127454ef0a5d5da9bcf9a9"} err="failed to get container status \"0cfc0ae2fe82386cdd88bbef4aa6581a80f2bb83e2127454ef0a5d5da9bcf9a9\": rpc error: code = NotFound desc = could not find container \"0cfc0ae2fe82386cdd88bbef4aa6581a80f2bb83e2127454ef0a5d5da9bcf9a9\": container with ID starting with 0cfc0ae2fe82386cdd88bbef4aa6581a80f2bb83e2127454ef0a5d5da9bcf9a9 not found: ID does not exist" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.105941 4970 scope.go:117] "RemoveContainer" containerID="37acdd02f1a635f33a7df598fe67d0d20a7f2804d93d1682da17c74254078fba" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.106190 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37acdd02f1a635f33a7df598fe67d0d20a7f2804d93d1682da17c74254078fba"} err="failed to get container status \"37acdd02f1a635f33a7df598fe67d0d20a7f2804d93d1682da17c74254078fba\": rpc error: code = NotFound desc = could not find container \"37acdd02f1a635f33a7df598fe67d0d20a7f2804d93d1682da17c74254078fba\": container with ID starting with 37acdd02f1a635f33a7df598fe67d0d20a7f2804d93d1682da17c74254078fba not found: ID does not exist" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.128381 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35a77809-92d3-4627-8adb-2654a71586e8-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "35a77809-92d3-4627-8adb-2654a71586e8" (UID: "35a77809-92d3-4627-8adb-2654a71586e8"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.142484 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95c54\" (UniqueName: \"kubernetes.io/projected/98c2dbed-449e-4db4-9f2b-5191b03c8a80-kube-api-access-95c54\") pod \"nova-cell1-conductor-0\" (UID: \"98c2dbed-449e-4db4-9f2b-5191b03c8a80\") " pod="openstack/nova-cell1-conductor-0" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.142542 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98c2dbed-449e-4db4-9f2b-5191b03c8a80-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"98c2dbed-449e-4db4-9f2b-5191b03c8a80\") " pod="openstack/nova-cell1-conductor-0" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.142594 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98c2dbed-449e-4db4-9f2b-5191b03c8a80-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"98c2dbed-449e-4db4-9f2b-5191b03c8a80\") " pod="openstack/nova-cell1-conductor-0" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.142693 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35a77809-92d3-4627-8adb-2654a71586e8-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.142705 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35a77809-92d3-4627-8adb-2654a71586e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.142715 4970 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/35a77809-92d3-4627-8adb-2654a71586e8-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.142724 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9wmq\" (UniqueName: \"kubernetes.io/projected/35a77809-92d3-4627-8adb-2654a71586e8-kube-api-access-x9wmq\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.244894 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95c54\" (UniqueName: \"kubernetes.io/projected/98c2dbed-449e-4db4-9f2b-5191b03c8a80-kube-api-access-95c54\") pod \"nova-cell1-conductor-0\" (UID: \"98c2dbed-449e-4db4-9f2b-5191b03c8a80\") " pod="openstack/nova-cell1-conductor-0" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.244959 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98c2dbed-449e-4db4-9f2b-5191b03c8a80-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"98c2dbed-449e-4db4-9f2b-5191b03c8a80\") " pod="openstack/nova-cell1-conductor-0" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.245004 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98c2dbed-449e-4db4-9f2b-5191b03c8a80-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"98c2dbed-449e-4db4-9f2b-5191b03c8a80\") " pod="openstack/nova-cell1-conductor-0" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.249451 
4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98c2dbed-449e-4db4-9f2b-5191b03c8a80-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"98c2dbed-449e-4db4-9f2b-5191b03c8a80\") " pod="openstack/nova-cell1-conductor-0" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.249900 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98c2dbed-449e-4db4-9f2b-5191b03c8a80-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"98c2dbed-449e-4db4-9f2b-5191b03c8a80\") " pod="openstack/nova-cell1-conductor-0" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.262329 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95c54\" (UniqueName: \"kubernetes.io/projected/98c2dbed-449e-4db4-9f2b-5191b03c8a80-kube-api-access-95c54\") pod \"nova-cell1-conductor-0\" (UID: \"98c2dbed-449e-4db4-9f2b-5191b03c8a80\") " pod="openstack/nova-cell1-conductor-0" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.361313 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.401705 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.402838 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.476042 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.485455 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.491373 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.491597 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.517445 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.656946 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftvdz\" (UniqueName: \"kubernetes.io/projected/da82b114-dc90-4454-9b42-711065681a68-kube-api-access-ftvdz\") pod \"nova-metadata-0\" (UID: \"da82b114-dc90-4454-9b42-711065681a68\") " pod="openstack/nova-metadata-0" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.657008 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/da82b114-dc90-4454-9b42-711065681a68-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"da82b114-dc90-4454-9b42-711065681a68\") " pod="openstack/nova-metadata-0" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.657058 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da82b114-dc90-4454-9b42-711065681a68-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"da82b114-dc90-4454-9b42-711065681a68\") " pod="openstack/nova-metadata-0" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.657097 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da82b114-dc90-4454-9b42-711065681a68-logs\") pod \"nova-metadata-0\" (UID: \"da82b114-dc90-4454-9b42-711065681a68\") " pod="openstack/nova-metadata-0" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.657628 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da82b114-dc90-4454-9b42-711065681a68-config-data\") pod \"nova-metadata-0\" (UID: \"da82b114-dc90-4454-9b42-711065681a68\") " pod="openstack/nova-metadata-0" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.759780 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da82b114-dc90-4454-9b42-711065681a68-config-data\") pod \"nova-metadata-0\" (UID: \"da82b114-dc90-4454-9b42-711065681a68\") " pod="openstack/nova-metadata-0" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.759884 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftvdz\" (UniqueName: \"kubernetes.io/projected/da82b114-dc90-4454-9b42-711065681a68-kube-api-access-ftvdz\") pod \"nova-metadata-0\" (UID: \"da82b114-dc90-4454-9b42-711065681a68\") " pod="openstack/nova-metadata-0" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.759906 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/da82b114-dc90-4454-9b42-711065681a68-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"da82b114-dc90-4454-9b42-711065681a68\") " 
pod="openstack/nova-metadata-0" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.759939 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da82b114-dc90-4454-9b42-711065681a68-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"da82b114-dc90-4454-9b42-711065681a68\") " pod="openstack/nova-metadata-0" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.759965 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da82b114-dc90-4454-9b42-711065681a68-logs\") pod \"nova-metadata-0\" (UID: \"da82b114-dc90-4454-9b42-711065681a68\") " pod="openstack/nova-metadata-0" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.760640 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da82b114-dc90-4454-9b42-711065681a68-logs\") pod \"nova-metadata-0\" (UID: \"da82b114-dc90-4454-9b42-711065681a68\") " pod="openstack/nova-metadata-0" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.773698 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da82b114-dc90-4454-9b42-711065681a68-config-data\") pod \"nova-metadata-0\" (UID: \"da82b114-dc90-4454-9b42-711065681a68\") " pod="openstack/nova-metadata-0" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.774411 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da82b114-dc90-4454-9b42-711065681a68-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"da82b114-dc90-4454-9b42-711065681a68\") " pod="openstack/nova-metadata-0" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.775008 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/da82b114-dc90-4454-9b42-711065681a68-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"da82b114-dc90-4454-9b42-711065681a68\") " pod="openstack/nova-metadata-0" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.792521 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftvdz\" (UniqueName: \"kubernetes.io/projected/da82b114-dc90-4454-9b42-711065681a68-kube-api-access-ftvdz\") pod \"nova-metadata-0\" (UID: \"da82b114-dc90-4454-9b42-711065681a68\") " pod="openstack/nova-metadata-0" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.878945 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.931383 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 09 12:31:04 crc kubenswrapper[4970]: W1209 12:31:04.933051 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98c2dbed_449e_4db4_9f2b_5191b03c8a80.slice/crio-1018629bbec338bdf4a6fe649b912201fe8c80ae65ccb9084430f1aec778c85d WatchSource:0}: Error finding container 1018629bbec338bdf4a6fe649b912201fe8c80ae65ccb9084430f1aec778c85d: Status 404 returned error can't find the container with id 1018629bbec338bdf4a6fe649b912201fe8c80ae65ccb9084430f1aec778c85d Dec 09 12:31:04 crc kubenswrapper[4970]: I1209 12:31:04.964370 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="88c6c852-5b17-4dac-b7de-22891a28d17a" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 09 12:31:05 crc kubenswrapper[4970]: I1209 12:31:05.031878 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"98c2dbed-449e-4db4-9f2b-5191b03c8a80","Type":"ContainerStarted","Data":"1018629bbec338bdf4a6fe649b912201fe8c80ae65ccb9084430f1aec778c85d"} Dec 09 12:31:05 crc kubenswrapper[4970]: W1209 12:31:05.365298 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda82b114_dc90_4454_9b42_711065681a68.slice/crio-fec98b4f8c5fc96bf533fc7fb76aa6cf866858fd3fd1b30597264bc11aafecec WatchSource:0}: Error finding container fec98b4f8c5fc96bf533fc7fb76aa6cf866858fd3fd1b30597264bc11aafecec: Status 404 returned error can't find the container with id fec98b4f8c5fc96bf533fc7fb76aa6cf866858fd3fd1b30597264bc11aafecec Dec 09 12:31:05 crc kubenswrapper[4970]: I1209 12:31:05.367845 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 12:31:05 crc kubenswrapper[4970]: I1209 12:31:05.831202 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35a77809-92d3-4627-8adb-2654a71586e8" path="/var/lib/kubelet/pods/35a77809-92d3-4627-8adb-2654a71586e8/volumes" Dec 09 12:31:06 crc kubenswrapper[4970]: I1209 12:31:06.083117 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"98c2dbed-449e-4db4-9f2b-5191b03c8a80","Type":"ContainerStarted","Data":"2ab0deda95b783cae36ab71c1966811072569db563bf946c054150a9635575f0"} Dec 09 12:31:06 crc kubenswrapper[4970]: I1209 12:31:06.083191 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 09 12:31:06 crc kubenswrapper[4970]: I1209 12:31:06.100346 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"da82b114-dc90-4454-9b42-711065681a68","Type":"ContainerStarted","Data":"9ac84822d908b06a59f3d9ae48c695ab4306dd4b4481fe94881998e10d01b8ae"} Dec 09 12:31:06 crc kubenswrapper[4970]: I1209 12:31:06.100556 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"da82b114-dc90-4454-9b42-711065681a68","Type":"ContainerStarted","Data":"58bbf1ae8555ee8d88a6accbd38fca4c0ea041cde6a7281889e8459dada5760a"} Dec 09 12:31:06 crc kubenswrapper[4970]: I1209 12:31:06.100612 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"da82b114-dc90-4454-9b42-711065681a68","Type":"ContainerStarted","Data":"fec98b4f8c5fc96bf533fc7fb76aa6cf866858fd3fd1b30597264bc11aafecec"} Dec 09 12:31:06 crc kubenswrapper[4970]: I1209 12:31:06.128070 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.128044684 podStartE2EDuration="2.128044684s" podCreationTimestamp="2025-12-09 12:31:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:31:06.105897606 +0000 UTC m=+1478.666378657" watchObservedRunningTime="2025-12-09 12:31:06.128044684 +0000 UTC m=+1478.688525745" Dec 09 12:31:06 crc kubenswrapper[4970]: I1209 12:31:06.158982 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.1589579150000002 podStartE2EDuration="2.158957915s" podCreationTimestamp="2025-12-09 12:31:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:31:06.130635503 +0000 UTC m=+1478.691116554" watchObservedRunningTime="2025-12-09 12:31:06.158957915 +0000 UTC m=+1478.719438966" Dec 09 12:31:06 crc kubenswrapper[4970]: I1209 12:31:06.870640 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.018406 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4cc45c7-93b9-4be2-98f7-3ac2d181ee95-combined-ca-bundle\") pod \"c4cc45c7-93b9-4be2-98f7-3ac2d181ee95\" (UID: \"c4cc45c7-93b9-4be2-98f7-3ac2d181ee95\") " Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.018499 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4cc45c7-93b9-4be2-98f7-3ac2d181ee95-config-data\") pod \"c4cc45c7-93b9-4be2-98f7-3ac2d181ee95\" (UID: \"c4cc45c7-93b9-4be2-98f7-3ac2d181ee95\") " Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.018594 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g882t\" (UniqueName: \"kubernetes.io/projected/c4cc45c7-93b9-4be2-98f7-3ac2d181ee95-kube-api-access-g882t\") pod \"c4cc45c7-93b9-4be2-98f7-3ac2d181ee95\" (UID: \"c4cc45c7-93b9-4be2-98f7-3ac2d181ee95\") " Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.018703 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4cc45c7-93b9-4be2-98f7-3ac2d181ee95-logs\") pod \"c4cc45c7-93b9-4be2-98f7-3ac2d181ee95\" (UID: \"c4cc45c7-93b9-4be2-98f7-3ac2d181ee95\") " Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.020613 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4cc45c7-93b9-4be2-98f7-3ac2d181ee95-logs" (OuterVolumeSpecName: "logs") pod "c4cc45c7-93b9-4be2-98f7-3ac2d181ee95" (UID: "c4cc45c7-93b9-4be2-98f7-3ac2d181ee95"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.025522 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4cc45c7-93b9-4be2-98f7-3ac2d181ee95-kube-api-access-g882t" (OuterVolumeSpecName: "kube-api-access-g882t") pod "c4cc45c7-93b9-4be2-98f7-3ac2d181ee95" (UID: "c4cc45c7-93b9-4be2-98f7-3ac2d181ee95"). InnerVolumeSpecName "kube-api-access-g882t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.055145 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4cc45c7-93b9-4be2-98f7-3ac2d181ee95-config-data" (OuterVolumeSpecName: "config-data") pod "c4cc45c7-93b9-4be2-98f7-3ac2d181ee95" (UID: "c4cc45c7-93b9-4be2-98f7-3ac2d181ee95"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.064479 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4cc45c7-93b9-4be2-98f7-3ac2d181ee95-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4cc45c7-93b9-4be2-98f7-3ac2d181ee95" (UID: "c4cc45c7-93b9-4be2-98f7-3ac2d181ee95"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.112161 4970 generic.go:334] "Generic (PLEG): container finished" podID="92d360ed-aab2-478f-985d-eef9214facf2" containerID="91c1f28e4618e0ea4c1f3f3a480261000caeec79eba8ffe4c292d6bf007949d7" exitCode=0 Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.112221 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"92d360ed-aab2-478f-985d-eef9214facf2","Type":"ContainerDied","Data":"91c1f28e4618e0ea4c1f3f3a480261000caeec79eba8ffe4c292d6bf007949d7"} Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.112260 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"92d360ed-aab2-478f-985d-eef9214facf2","Type":"ContainerDied","Data":"a872d377a90499459bdb4e0ae08734f55b5a67da0ad3608139bb3a0709feedab"} Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.112271 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a872d377a90499459bdb4e0ae08734f55b5a67da0ad3608139bb3a0709feedab" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.114318 4970 generic.go:334] "Generic (PLEG): container finished" podID="c4cc45c7-93b9-4be2-98f7-3ac2d181ee95" containerID="5251258e29df36d9204f7c420e4f0801862b2d62c485543f996a6878e5374db5" exitCode=0 Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.115269 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.119411 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c4cc45c7-93b9-4be2-98f7-3ac2d181ee95","Type":"ContainerDied","Data":"5251258e29df36d9204f7c420e4f0801862b2d62c485543f996a6878e5374db5"} Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.119484 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c4cc45c7-93b9-4be2-98f7-3ac2d181ee95","Type":"ContainerDied","Data":"3d9c19431b411259a3c4cc0abea83447fe5b6a1aa99416797ab6e3c0f68f4ce1"} Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.119508 4970 scope.go:117] "RemoveContainer" containerID="5251258e29df36d9204f7c420e4f0801862b2d62c485543f996a6878e5374db5" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.121015 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4cc45c7-93b9-4be2-98f7-3ac2d181ee95-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.121061 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4cc45c7-93b9-4be2-98f7-3ac2d181ee95-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.121071 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g882t\" (UniqueName: \"kubernetes.io/projected/c4cc45c7-93b9-4be2-98f7-3ac2d181ee95-kube-api-access-g882t\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.121082 4970 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c4cc45c7-93b9-4be2-98f7-3ac2d181ee95-logs\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.165804 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.184086 4970 scope.go:117] "RemoveContainer" containerID="95706b6fee07f6803ccfb78c7c4c37fc7f7cd694e0307898274d367daac2616b" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.193399 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.216795 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.227928 4970 scope.go:117] "RemoveContainer" containerID="5251258e29df36d9204f7c420e4f0801862b2d62c485543f996a6878e5374db5" Dec 09 12:31:07 crc kubenswrapper[4970]: E1209 12:31:07.228438 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5251258e29df36d9204f7c420e4f0801862b2d62c485543f996a6878e5374db5\": container with ID starting with 5251258e29df36d9204f7c420e4f0801862b2d62c485543f996a6878e5374db5 not found: ID does not exist" containerID="5251258e29df36d9204f7c420e4f0801862b2d62c485543f996a6878e5374db5" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.228486 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5251258e29df36d9204f7c420e4f0801862b2d62c485543f996a6878e5374db5"} err="failed to get container status \"5251258e29df36d9204f7c420e4f0801862b2d62c485543f996a6878e5374db5\": rpc error: code = NotFound desc = could not find container \"5251258e29df36d9204f7c420e4f0801862b2d62c485543f996a6878e5374db5\": container with ID starting with 5251258e29df36d9204f7c420e4f0801862b2d62c485543f996a6878e5374db5 not found: ID does not exist" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.228516 4970 scope.go:117] "RemoveContainer" containerID="95706b6fee07f6803ccfb78c7c4c37fc7f7cd694e0307898274d367daac2616b" Dec 09 12:31:07 crc kubenswrapper[4970]: E1209 12:31:07.234184 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95706b6fee07f6803ccfb78c7c4c37fc7f7cd694e0307898274d367daac2616b\": container with ID starting with 95706b6fee07f6803ccfb78c7c4c37fc7f7cd694e0307898274d367daac2616b not found: ID does not exist" containerID="95706b6fee07f6803ccfb78c7c4c37fc7f7cd694e0307898274d367daac2616b" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.234231 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95706b6fee07f6803ccfb78c7c4c37fc7f7cd694e0307898274d367daac2616b"} err="failed to get container status \"95706b6fee07f6803ccfb78c7c4c37fc7f7cd694e0307898274d367daac2616b\": rpc error: code = NotFound desc = could not find container \"95706b6fee07f6803ccfb78c7c4c37fc7f7cd694e0307898274d367daac2616b\": container with ID starting with 95706b6fee07f6803ccfb78c7c4c37fc7f7cd694e0307898274d367daac2616b not found: ID does not exist" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.248061 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 09 12:31:07 crc kubenswrapper[4970]: E1209 12:31:07.248669 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4cc45c7-93b9-4be2-98f7-3ac2d181ee95" containerName="nova-api-log" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.248691 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4cc45c7-93b9-4be2-98f7-3ac2d181ee95" containerName="nova-api-log" Dec 09 12:31:07 
crc kubenswrapper[4970]: E1209 12:31:07.248704 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92d360ed-aab2-478f-985d-eef9214facf2" containerName="nova-scheduler-scheduler" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.248710 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="92d360ed-aab2-478f-985d-eef9214facf2" containerName="nova-scheduler-scheduler" Dec 09 12:31:07 crc kubenswrapper[4970]: E1209 12:31:07.248751 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4cc45c7-93b9-4be2-98f7-3ac2d181ee95" containerName="nova-api-api" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.248758 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4cc45c7-93b9-4be2-98f7-3ac2d181ee95" containerName="nova-api-api" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.248973 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4cc45c7-93b9-4be2-98f7-3ac2d181ee95" containerName="nova-api-log" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.248998 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4cc45c7-93b9-4be2-98f7-3ac2d181ee95" containerName="nova-api-api" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.249015 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="92d360ed-aab2-478f-985d-eef9214facf2" containerName="nova-scheduler-scheduler" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.250325 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.256387 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.280330 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.324592 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d360ed-aab2-478f-985d-eef9214facf2-config-data\") pod \"92d360ed-aab2-478f-985d-eef9214facf2\" (UID: \"92d360ed-aab2-478f-985d-eef9214facf2\") " Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.324855 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9g8r\" (UniqueName: \"kubernetes.io/projected/92d360ed-aab2-478f-985d-eef9214facf2-kube-api-access-p9g8r\") pod \"92d360ed-aab2-478f-985d-eef9214facf2\" (UID: \"92d360ed-aab2-478f-985d-eef9214facf2\") " Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.324955 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d360ed-aab2-478f-985d-eef9214facf2-combined-ca-bundle\") pod \"92d360ed-aab2-478f-985d-eef9214facf2\" (UID: \"92d360ed-aab2-478f-985d-eef9214facf2\") " Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.325217 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/141ca3a5-7aad-4ab7-b4b7-613a531f696f-logs\") pod \"nova-api-0\" (UID: \"141ca3a5-7aad-4ab7-b4b7-613a531f696f\") " pod="openstack/nova-api-0" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.325390 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzlwm\" (UniqueName: 
\"kubernetes.io/projected/141ca3a5-7aad-4ab7-b4b7-613a531f696f-kube-api-access-zzlwm\") pod \"nova-api-0\" (UID: \"141ca3a5-7aad-4ab7-b4b7-613a531f696f\") " pod="openstack/nova-api-0" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.325439 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/141ca3a5-7aad-4ab7-b4b7-613a531f696f-config-data\") pod \"nova-api-0\" (UID: \"141ca3a5-7aad-4ab7-b4b7-613a531f696f\") " pod="openstack/nova-api-0" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.325493 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141ca3a5-7aad-4ab7-b4b7-613a531f696f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"141ca3a5-7aad-4ab7-b4b7-613a531f696f\") " pod="openstack/nova-api-0" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.334680 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92d360ed-aab2-478f-985d-eef9214facf2-kube-api-access-p9g8r" (OuterVolumeSpecName: "kube-api-access-p9g8r") pod "92d360ed-aab2-478f-985d-eef9214facf2" (UID: "92d360ed-aab2-478f-985d-eef9214facf2"). InnerVolumeSpecName "kube-api-access-p9g8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.360284 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92d360ed-aab2-478f-985d-eef9214facf2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92d360ed-aab2-478f-985d-eef9214facf2" (UID: "92d360ed-aab2-478f-985d-eef9214facf2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.362839 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92d360ed-aab2-478f-985d-eef9214facf2-config-data" (OuterVolumeSpecName: "config-data") pod "92d360ed-aab2-478f-985d-eef9214facf2" (UID: "92d360ed-aab2-478f-985d-eef9214facf2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.426984 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzlwm\" (UniqueName: \"kubernetes.io/projected/141ca3a5-7aad-4ab7-b4b7-613a531f696f-kube-api-access-zzlwm\") pod \"nova-api-0\" (UID: \"141ca3a5-7aad-4ab7-b4b7-613a531f696f\") " pod="openstack/nova-api-0" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.427055 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/141ca3a5-7aad-4ab7-b4b7-613a531f696f-config-data\") pod \"nova-api-0\" (UID: \"141ca3a5-7aad-4ab7-b4b7-613a531f696f\") " pod="openstack/nova-api-0" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.427120 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141ca3a5-7aad-4ab7-b4b7-613a531f696f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"141ca3a5-7aad-4ab7-b4b7-613a531f696f\") " pod="openstack/nova-api-0" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.427171 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/141ca3a5-7aad-4ab7-b4b7-613a531f696f-logs\") pod \"nova-api-0\" (UID: \"141ca3a5-7aad-4ab7-b4b7-613a531f696f\") " pod="openstack/nova-api-0" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.427334 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92d360ed-aab2-478f-985d-eef9214facf2-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.427353 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9g8r\" (UniqueName: \"kubernetes.io/projected/92d360ed-aab2-478f-985d-eef9214facf2-kube-api-access-p9g8r\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.427364 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92d360ed-aab2-478f-985d-eef9214facf2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.427746 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/141ca3a5-7aad-4ab7-b4b7-613a531f696f-logs\") pod \"nova-api-0\" (UID: \"141ca3a5-7aad-4ab7-b4b7-613a531f696f\") " pod="openstack/nova-api-0" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.432673 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/141ca3a5-7aad-4ab7-b4b7-613a531f696f-config-data\") pod \"nova-api-0\" (UID: \"141ca3a5-7aad-4ab7-b4b7-613a531f696f\") " pod="openstack/nova-api-0" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.435014 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141ca3a5-7aad-4ab7-b4b7-613a531f696f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"141ca3a5-7aad-4ab7-b4b7-613a531f696f\") " pod="openstack/nova-api-0" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.446154 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzlwm\" (UniqueName: \"kubernetes.io/projected/141ca3a5-7aad-4ab7-b4b7-613a531f696f-kube-api-access-zzlwm\") pod 
\"nova-api-0\" (UID: \"141ca3a5-7aad-4ab7-b4b7-613a531f696f\") " pod="openstack/nova-api-0" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.569783 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 09 12:31:07 crc kubenswrapper[4970]: I1209 12:31:07.828089 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4cc45c7-93b9-4be2-98f7-3ac2d181ee95" path="/var/lib/kubelet/pods/c4cc45c7-93b9-4be2-98f7-3ac2d181ee95/volumes" Dec 09 12:31:08 crc kubenswrapper[4970]: I1209 12:31:08.062982 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 09 12:31:08 crc kubenswrapper[4970]: W1209 12:31:08.069299 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod141ca3a5_7aad_4ab7_b4b7_613a531f696f.slice/crio-add0332b0e42f4fbc0c43d9f0fcac2fe3fd987f47f78e6ced19588bd7f91e086 WatchSource:0}: Error finding container add0332b0e42f4fbc0c43d9f0fcac2fe3fd987f47f78e6ced19588bd7f91e086: Status 404 returned error can't find the container with id add0332b0e42f4fbc0c43d9f0fcac2fe3fd987f47f78e6ced19588bd7f91e086 Dec 09 12:31:08 crc kubenswrapper[4970]: I1209 12:31:08.127414 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"141ca3a5-7aad-4ab7-b4b7-613a531f696f","Type":"ContainerStarted","Data":"add0332b0e42f4fbc0c43d9f0fcac2fe3fd987f47f78e6ced19588bd7f91e086"} Dec 09 12:31:08 crc kubenswrapper[4970]: I1209 12:31:08.128935 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 12:31:08 crc kubenswrapper[4970]: I1209 12:31:08.171007 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 12:31:08 crc kubenswrapper[4970]: I1209 12:31:08.207129 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 12:31:08 crc kubenswrapper[4970]: I1209 12:31:08.227665 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 12:31:08 crc kubenswrapper[4970]: I1209 12:31:08.230077 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 12:31:08 crc kubenswrapper[4970]: I1209 12:31:08.232550 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 09 12:31:08 crc kubenswrapper[4970]: I1209 12:31:08.251416 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 12:31:08 crc kubenswrapper[4970]: I1209 12:31:08.317644 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-mjrlc"] Dec 09 12:31:08 crc kubenswrapper[4970]: I1209 12:31:08.319236 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-mjrlc" Dec 09 12:31:08 crc kubenswrapper[4970]: I1209 12:31:08.332426 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 09 12:31:08 crc kubenswrapper[4970]: I1209 12:31:08.332512 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Dec 09 12:31:08 crc kubenswrapper[4970]: I1209 12:31:08.332618 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Dec 09 12:31:08 crc kubenswrapper[4970]: I1209 12:31:08.333182 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-99pll" Dec 09 12:31:08 crc kubenswrapper[4970]: I1209 12:31:08.353208 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-mjrlc"] Dec 09 12:31:08 crc kubenswrapper[4970]: I1209 12:31:08.357694 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e215775-0586-4a94-a8b8-faae5dcf279b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8e215775-0586-4a94-a8b8-faae5dcf279b\") " pod="openstack/nova-scheduler-0" Dec 09 12:31:08 crc kubenswrapper[4970]: I1209 12:31:08.357800 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j57g7\" (UniqueName: \"kubernetes.io/projected/8e215775-0586-4a94-a8b8-faae5dcf279b-kube-api-access-j57g7\") pod \"nova-scheduler-0\" (UID: \"8e215775-0586-4a94-a8b8-faae5dcf279b\") " pod="openstack/nova-scheduler-0" Dec 09 12:31:08 crc kubenswrapper[4970]: I1209 12:31:08.357906 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e215775-0586-4a94-a8b8-faae5dcf279b-config-data\") pod \"nova-scheduler-0\" (UID: \"8e215775-0586-4a94-a8b8-faae5dcf279b\") " pod="openstack/nova-scheduler-0" Dec 09 12:31:08 crc kubenswrapper[4970]: I1209 12:31:08.460174 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e215775-0586-4a94-a8b8-faae5dcf279b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8e215775-0586-4a94-a8b8-faae5dcf279b\") " pod="openstack/nova-scheduler-0" Dec 09 12:31:08 crc kubenswrapper[4970]: I1209 12:31:08.460323 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmhq8\" (UniqueName: \"kubernetes.io/projected/48a3105b-e980-44c6-bb0d-a8db895867ee-kube-api-access-mmhq8\") pod \"aodh-db-sync-mjrlc\" (UID: \"48a3105b-e980-44c6-bb0d-a8db895867ee\") " pod="openstack/aodh-db-sync-mjrlc" Dec 09 12:31:08 crc kubenswrapper[4970]: I1209 12:31:08.460364 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j57g7\" (UniqueName: \"kubernetes.io/projected/8e215775-0586-4a94-a8b8-faae5dcf279b-kube-api-access-j57g7\") pod \"nova-scheduler-0\" (UID: \"8e215775-0586-4a94-a8b8-faae5dcf279b\") " pod="openstack/nova-scheduler-0" Dec 09 12:31:08 crc kubenswrapper[4970]: I1209 12:31:08.460611 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48a3105b-e980-44c6-bb0d-a8db895867ee-scripts\") pod \"aodh-db-sync-mjrlc\" (UID: \"48a3105b-e980-44c6-bb0d-a8db895867ee\") " pod="openstack/aodh-db-sync-mjrlc" 
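
Annotation: the block of entries above (12:31:07–12:31:08) is one pass of the kubelet volume manager's reconciler. For the pod being torn down (UID 92d360ed…) it logs UnmountVolume.TearDown followed by "Volume detached"; for the pods coming up (nova-api-0, nova-scheduler-0, aodh-db-sync-mjrlc) it logs VerifyControllerAttachedVolume, then "MountVolume started", then "MountVolume.SetUp succeeded". Below is a minimal Go sketch of that desired-versus-actual loop; the types and the map-based "worlds" are illustrative stand-ins, not the kubelet's actual reconciler.

```go
// Minimal sketch of the mount/unmount reconciliation implied by the
// reconciler_common.go and operation_generator.go entries above. This is a
// simplification for illustration: volume names, the desired/actual "worlds",
// and the operation executor are reduced to maps and print statements.
package main

import "fmt"

type volume struct {
	pod    string // pod UID the volume belongs to
	name   string // e.g. "config-data", "kube-api-access-zzlwm"
	plugin string // e.g. "kubernetes.io/secret"
}

// reconcile compares the desired set of volumes (from pod specs) against the
// actual set (what is currently mounted) and issues mount/unmount operations,
// matching the "MountVolume started" / "UnmountVolume started" log pairs.
func reconcile(desired, actual map[string]volume) {
	// Unmount first: volumes still mounted but no longer desired (deleted pods).
	for key, v := range actual {
		if _, ok := desired[key]; !ok {
			fmt.Printf("UnmountVolume started for volume %q (plugin %s)\n", v.name, v.plugin)
			delete(actual, key) // stands in for UnmountVolume.TearDown succeeding
			fmt.Printf("Volume detached for volume %q\n", v.name)
		}
	}
	// Then mount: desired volumes not yet present (newly scheduled pods).
	for key, v := range desired {
		if _, ok := actual[key]; !ok {
			fmt.Printf("MountVolume started for volume %q pod %s\n", v.name, v.pod)
			actual[key] = v // stands in for MountVolume.SetUp succeeding
			fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v.name)
		}
	}
}

func main() {
	// One old pod's secret volume to tear down, one new pod's to bring up,
	// echoing the 92d360ed… teardown and 141ca3a5… setup above.
	actual := map[string]volume{
		"92d360ed/config-data": {pod: "92d360ed", name: "config-data", plugin: "kubernetes.io/secret"},
	}
	desired := map[string]volume{
		"141ca3a5/config-data": {pod: "141ca3a5", name: "config-data", plugin: "kubernetes.io/secret"},
	}
	reconcile(desired, actual)
}
```
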
Dec 09 12:31:08 crc kubenswrapper[4970]: I1209 12:31:08.460692 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48a3105b-e980-44c6-bb0d-a8db895867ee-combined-ca-bundle\") pod \"aodh-db-sync-mjrlc\" (UID: \"48a3105b-e980-44c6-bb0d-a8db895867ee\") " pod="openstack/aodh-db-sync-mjrlc" Dec 09 12:31:08 crc kubenswrapper[4970]: I1209 12:31:08.460988 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e215775-0586-4a94-a8b8-faae5dcf279b-config-data\") pod \"nova-scheduler-0\" (UID: \"8e215775-0586-4a94-a8b8-faae5dcf279b\") " pod="openstack/nova-scheduler-0" Dec 09 12:31:08 crc kubenswrapper[4970]: I1209 12:31:08.461091 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48a3105b-e980-44c6-bb0d-a8db895867ee-config-data\") pod \"aodh-db-sync-mjrlc\" (UID: \"48a3105b-e980-44c6-bb0d-a8db895867ee\") " pod="openstack/aodh-db-sync-mjrlc" Dec 09 12:31:08 crc kubenswrapper[4970]: I1209 12:31:08.465089 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e215775-0586-4a94-a8b8-faae5dcf279b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8e215775-0586-4a94-a8b8-faae5dcf279b\") " pod="openstack/nova-scheduler-0" Dec 09 12:31:08 crc kubenswrapper[4970]: I1209 12:31:08.465400 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e215775-0586-4a94-a8b8-faae5dcf279b-config-data\") pod \"nova-scheduler-0\" (UID: \"8e215775-0586-4a94-a8b8-faae5dcf279b\") " pod="openstack/nova-scheduler-0" Dec 09 12:31:08 crc kubenswrapper[4970]: I1209 12:31:08.481288 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j57g7\" (UniqueName: \"kubernetes.io/projected/8e215775-0586-4a94-a8b8-faae5dcf279b-kube-api-access-j57g7\") pod \"nova-scheduler-0\" (UID: \"8e215775-0586-4a94-a8b8-faae5dcf279b\") " pod="openstack/nova-scheduler-0" Dec 09 12:31:08 crc kubenswrapper[4970]: I1209 12:31:08.556676 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 12:31:08 crc kubenswrapper[4970]: I1209 12:31:08.562942 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmhq8\" (UniqueName: \"kubernetes.io/projected/48a3105b-e980-44c6-bb0d-a8db895867ee-kube-api-access-mmhq8\") pod \"aodh-db-sync-mjrlc\" (UID: \"48a3105b-e980-44c6-bb0d-a8db895867ee\") " pod="openstack/aodh-db-sync-mjrlc" Dec 09 12:31:08 crc kubenswrapper[4970]: I1209 12:31:08.563083 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48a3105b-e980-44c6-bb0d-a8db895867ee-scripts\") pod \"aodh-db-sync-mjrlc\" (UID: \"48a3105b-e980-44c6-bb0d-a8db895867ee\") " pod="openstack/aodh-db-sync-mjrlc" Dec 09 12:31:08 crc kubenswrapper[4970]: I1209 12:31:08.563116 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48a3105b-e980-44c6-bb0d-a8db895867ee-combined-ca-bundle\") pod \"aodh-db-sync-mjrlc\" (UID: \"48a3105b-e980-44c6-bb0d-a8db895867ee\") " pod="openstack/aodh-db-sync-mjrlc" Dec 09 12:31:08 crc kubenswrapper[4970]: I1209 12:31:08.563960 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48a3105b-e980-44c6-bb0d-a8db895867ee-config-data\") pod \"aodh-db-sync-mjrlc\" (UID: \"48a3105b-e980-44c6-bb0d-a8db895867ee\") " pod="openstack/aodh-db-sync-mjrlc" Dec 09 12:31:08 crc kubenswrapper[4970]: I1209 12:31:08.566567 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48a3105b-e980-44c6-bb0d-a8db895867ee-scripts\") pod \"aodh-db-sync-mjrlc\" (UID: \"48a3105b-e980-44c6-bb0d-a8db895867ee\") " pod="openstack/aodh-db-sync-mjrlc" Dec 09 12:31:08 crc kubenswrapper[4970]: I1209 12:31:08.567463 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48a3105b-e980-44c6-bb0d-a8db895867ee-config-data\") pod \"aodh-db-sync-mjrlc\" (UID: \"48a3105b-e980-44c6-bb0d-a8db895867ee\") " pod="openstack/aodh-db-sync-mjrlc" Dec 09 12:31:08 crc kubenswrapper[4970]: I1209 12:31:08.567919 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48a3105b-e980-44c6-bb0d-a8db895867ee-combined-ca-bundle\") pod \"aodh-db-sync-mjrlc\" (UID: \"48a3105b-e980-44c6-bb0d-a8db895867ee\") " pod="openstack/aodh-db-sync-mjrlc" Dec 09 12:31:08 crc kubenswrapper[4970]: I1209 12:31:08.584671 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmhq8\" (UniqueName: \"kubernetes.io/projected/48a3105b-e980-44c6-bb0d-a8db895867ee-kube-api-access-mmhq8\") pod \"aodh-db-sync-mjrlc\" (UID: \"48a3105b-e980-44c6-bb0d-a8db895867ee\") " pod="openstack/aodh-db-sync-mjrlc" Dec 09 12:31:08 crc kubenswrapper[4970]: I1209 12:31:08.653130 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-mjrlc" Dec 09 12:31:09 crc kubenswrapper[4970]: I1209 12:31:09.142167 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 12:31:09 crc kubenswrapper[4970]: I1209 12:31:09.173448 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"141ca3a5-7aad-4ab7-b4b7-613a531f696f","Type":"ContainerStarted","Data":"7ed6a21643ee12e2ee43085af9f08ab9be4b29a9964b0cb29051510417b464c1"} Dec 09 12:31:09 crc kubenswrapper[4970]: I1209 12:31:09.173857 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"141ca3a5-7aad-4ab7-b4b7-613a531f696f","Type":"ContainerStarted","Data":"f82b3e3136d4d90f4ceef31c2446ce0602d64cf579c57156d462b8af46455c7b"} Dec 09 12:31:09 crc kubenswrapper[4970]: I1209 12:31:09.200214 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.200174325 podStartE2EDuration="2.200174325s" podCreationTimestamp="2025-12-09 12:31:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:31:09.197293088 +0000 UTC m=+1481.757774139" watchObservedRunningTime="2025-12-09 12:31:09.200174325 +0000 UTC m=+1481.760655376" Dec 09 12:31:09 crc kubenswrapper[4970]: I1209 12:31:09.261847 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-mjrlc"] Dec 09 12:31:09 crc kubenswrapper[4970]: W1209 12:31:09.262734 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48a3105b_e980_44c6_bb0d_a8db895867ee.slice/crio-64fda555ba0db9eafd900f7a2f18cd43e5e245c4fd3d11f2334b3aec84e7f7b3 WatchSource:0}: Error finding container 64fda555ba0db9eafd900f7a2f18cd43e5e245c4fd3d11f2334b3aec84e7f7b3: Status 404 returned error can't find the container with id 64fda555ba0db9eafd900f7a2f18cd43e5e245c4fd3d11f2334b3aec84e7f7b3 Dec 09 12:31:09 crc kubenswrapper[4970]: I1209 12:31:09.796067 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:31:09 crc kubenswrapper[4970]: I1209 12:31:09.843168 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92d360ed-aab2-478f-985d-eef9214facf2" path="/var/lib/kubelet/pods/92d360ed-aab2-478f-985d-eef9214facf2/volumes" Dec 09 12:31:09 crc kubenswrapper[4970]: I1209 12:31:09.880135 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 09 12:31:09 crc kubenswrapper[4970]: I1209 12:31:09.880189 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 09 12:31:09 crc kubenswrapper[4970]: I1209 12:31:09.901816 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88c6c852-5b17-4dac-b7de-22891a28d17a-run-httpd\") pod \"88c6c852-5b17-4dac-b7de-22891a28d17a\" (UID: \"88c6c852-5b17-4dac-b7de-22891a28d17a\") " Dec 09 12:31:09 crc kubenswrapper[4970]: I1209 12:31:09.901883 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88c6c852-5b17-4dac-b7de-22891a28d17a-sg-core-conf-yaml\") pod \"88c6c852-5b17-4dac-b7de-22891a28d17a\" (UID: \"88c6c852-5b17-4dac-b7de-22891a28d17a\") " Dec 09 12:31:09 crc kubenswrapper[4970]: I1209 12:31:09.901992 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88c6c852-5b17-4dac-b7de-22891a28d17a-log-httpd\") pod \"88c6c852-5b17-4dac-b7de-22891a28d17a\" (UID: \"88c6c852-5b17-4dac-b7de-22891a28d17a\") " Dec 09 12:31:09 crc kubenswrapper[4970]: I1209 12:31:09.902061 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88c6c852-5b17-4dac-b7de-22891a28d17a-config-data\") pod \"88c6c852-5b17-4dac-b7de-22891a28d17a\" (UID: \"88c6c852-5b17-4dac-b7de-22891a28d17a\") " Dec 09 12:31:09 crc kubenswrapper[4970]: I1209 12:31:09.902268 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88c6c852-5b17-4dac-b7de-22891a28d17a-combined-ca-bundle\") pod \"88c6c852-5b17-4dac-b7de-22891a28d17a\" (UID: \"88c6c852-5b17-4dac-b7de-22891a28d17a\") " Dec 09 12:31:09 crc kubenswrapper[4970]: I1209 12:31:09.902291 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57ln4\" (UniqueName: \"kubernetes.io/projected/88c6c852-5b17-4dac-b7de-22891a28d17a-kube-api-access-57ln4\") pod \"88c6c852-5b17-4dac-b7de-22891a28d17a\" (UID: \"88c6c852-5b17-4dac-b7de-22891a28d17a\") " Dec 09 12:31:09 crc kubenswrapper[4970]: I1209 12:31:09.902377 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88c6c852-5b17-4dac-b7de-22891a28d17a-scripts\") pod \"88c6c852-5b17-4dac-b7de-22891a28d17a\" (UID: \"88c6c852-5b17-4dac-b7de-22891a28d17a\") " Dec 09 12:31:09 crc kubenswrapper[4970]: I1209 12:31:09.903143 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88c6c852-5b17-4dac-b7de-22891a28d17a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "88c6c852-5b17-4dac-b7de-22891a28d17a" (UID: "88c6c852-5b17-4dac-b7de-22891a28d17a"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:31:09 crc kubenswrapper[4970]: I1209 12:31:09.904037 4970 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88c6c852-5b17-4dac-b7de-22891a28d17a-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:09 crc kubenswrapper[4970]: I1209 12:31:09.904464 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88c6c852-5b17-4dac-b7de-22891a28d17a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "88c6c852-5b17-4dac-b7de-22891a28d17a" (UID: "88c6c852-5b17-4dac-b7de-22891a28d17a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:31:09 crc kubenswrapper[4970]: I1209 12:31:09.908716 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88c6c852-5b17-4dac-b7de-22891a28d17a-kube-api-access-57ln4" (OuterVolumeSpecName: "kube-api-access-57ln4") pod "88c6c852-5b17-4dac-b7de-22891a28d17a" (UID: "88c6c852-5b17-4dac-b7de-22891a28d17a"). InnerVolumeSpecName "kube-api-access-57ln4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:31:09 crc kubenswrapper[4970]: I1209 12:31:09.910854 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88c6c852-5b17-4dac-b7de-22891a28d17a-scripts" (OuterVolumeSpecName: "scripts") pod "88c6c852-5b17-4dac-b7de-22891a28d17a" (UID: "88c6c852-5b17-4dac-b7de-22891a28d17a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:09 crc kubenswrapper[4970]: I1209 12:31:09.966476 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88c6c852-5b17-4dac-b7de-22891a28d17a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "88c6c852-5b17-4dac-b7de-22891a28d17a" (UID: "88c6c852-5b17-4dac-b7de-22891a28d17a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.007103 4970 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88c6c852-5b17-4dac-b7de-22891a28d17a-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.008402 4970 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88c6c852-5b17-4dac-b7de-22891a28d17a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.008472 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57ln4\" (UniqueName: \"kubernetes.io/projected/88c6c852-5b17-4dac-b7de-22891a28d17a-kube-api-access-57ln4\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.008498 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88c6c852-5b17-4dac-b7de-22891a28d17a-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.040387 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88c6c852-5b17-4dac-b7de-22891a28d17a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88c6c852-5b17-4dac-b7de-22891a28d17a" (UID: "88c6c852-5b17-4dac-b7de-22891a28d17a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.091552 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88c6c852-5b17-4dac-b7de-22891a28d17a-config-data" (OuterVolumeSpecName: "config-data") pod "88c6c852-5b17-4dac-b7de-22891a28d17a" (UID: "88c6c852-5b17-4dac-b7de-22891a28d17a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.110073 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88c6c852-5b17-4dac-b7de-22891a28d17a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.110108 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88c6c852-5b17-4dac-b7de-22891a28d17a-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.205300 4970 generic.go:334] "Generic (PLEG): container finished" podID="88c6c852-5b17-4dac-b7de-22891a28d17a" containerID="910f3836c18a07d09f534826b9c39c293a5998a2cb77d6598528f417de697f87" exitCode=137 Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.205395 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88c6c852-5b17-4dac-b7de-22891a28d17a","Type":"ContainerDied","Data":"910f3836c18a07d09f534826b9c39c293a5998a2cb77d6598528f417de697f87"} Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.205431 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88c6c852-5b17-4dac-b7de-22891a28d17a","Type":"ContainerDied","Data":"dd21f3671bbd7361d2d79fdd850f5f5726b25d5a40ade7fbda729c8fc4ebbb0e"} Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.205439 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.205453 4970 scope.go:117] "RemoveContainer" containerID="910f3836c18a07d09f534826b9c39c293a5998a2cb77d6598528f417de697f87" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.212437 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-mjrlc" event={"ID":"48a3105b-e980-44c6-bb0d-a8db895867ee","Type":"ContainerStarted","Data":"64fda555ba0db9eafd900f7a2f18cd43e5e245c4fd3d11f2334b3aec84e7f7b3"} Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.224613 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8e215775-0586-4a94-a8b8-faae5dcf279b","Type":"ContainerStarted","Data":"e005657ebb8c94bb91f81321483dcb5d4bbf04364ec571a7880c9cabd3ac483f"} Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.224685 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8e215775-0586-4a94-a8b8-faae5dcf279b","Type":"ContainerStarted","Data":"9aab0f2788fc38ae8f610addc53c2d5d11c55976d635b85c17cff257390f2708"} Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.249411 4970 scope.go:117] "RemoveContainer" containerID="c44b71ac79cef6e55b4376f1bdb9ff80f7343b8b19c3e811ce02e8a9d6efeb57" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.275351 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.275324651 podStartE2EDuration="2.275324651s" podCreationTimestamp="2025-12-09 12:31:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:31:10.249605768 +0000 UTC m=+1482.810086819" watchObservedRunningTime="2025-12-09 12:31:10.275324651 +0000 UTC m=+1482.835805722" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.294996 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.300485 4970 scope.go:117] "RemoveContainer" containerID="4114a93abdb98271dc7c6575c3585cf3d9609a45fe03068ffb227034368c9745" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.324537 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.332545 4970 scope.go:117] "RemoveContainer" containerID="a875bbea325c61ccc9feba304fe4fc902a8c4782c7715a8fe97cee1972d56da9" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.335318 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:31:10 crc kubenswrapper[4970]: E1209 12:31:10.335903 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88c6c852-5b17-4dac-b7de-22891a28d17a" containerName="sg-core" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.335916 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="88c6c852-5b17-4dac-b7de-22891a28d17a" containerName="sg-core" Dec 09 12:31:10 crc kubenswrapper[4970]: E1209 12:31:10.335933 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88c6c852-5b17-4dac-b7de-22891a28d17a" containerName="proxy-httpd" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.335940 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="88c6c852-5b17-4dac-b7de-22891a28d17a" containerName="proxy-httpd" Dec 09 12:31:10 crc kubenswrapper[4970]: E1209 12:31:10.335960 4970 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="88c6c852-5b17-4dac-b7de-22891a28d17a" containerName="ceilometer-central-agent" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.335968 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="88c6c852-5b17-4dac-b7de-22891a28d17a" containerName="ceilometer-central-agent" Dec 09 12:31:10 crc kubenswrapper[4970]: E1209 12:31:10.336001 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88c6c852-5b17-4dac-b7de-22891a28d17a" containerName="ceilometer-notification-agent" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.336008 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="88c6c852-5b17-4dac-b7de-22891a28d17a" containerName="ceilometer-notification-agent" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.336726 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="88c6c852-5b17-4dac-b7de-22891a28d17a" containerName="proxy-httpd" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.336761 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="88c6c852-5b17-4dac-b7de-22891a28d17a" containerName="ceilometer-notification-agent" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.336771 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="88c6c852-5b17-4dac-b7de-22891a28d17a" containerName="sg-core" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.336787 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="88c6c852-5b17-4dac-b7de-22891a28d17a" containerName="ceilometer-central-agent" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.339403 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.341921 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.342082 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.349113 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.388278 4970 scope.go:117] "RemoveContainer" containerID="910f3836c18a07d09f534826b9c39c293a5998a2cb77d6598528f417de697f87" Dec 09 12:31:10 crc kubenswrapper[4970]: E1209 12:31:10.389816 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"910f3836c18a07d09f534826b9c39c293a5998a2cb77d6598528f417de697f87\": container with ID starting with 910f3836c18a07d09f534826b9c39c293a5998a2cb77d6598528f417de697f87 not found: ID does not exist" containerID="910f3836c18a07d09f534826b9c39c293a5998a2cb77d6598528f417de697f87" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.389857 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"910f3836c18a07d09f534826b9c39c293a5998a2cb77d6598528f417de697f87"} err="failed to get container status \"910f3836c18a07d09f534826b9c39c293a5998a2cb77d6598528f417de697f87\": rpc error: code = NotFound desc = could not find container \"910f3836c18a07d09f534826b9c39c293a5998a2cb77d6598528f417de697f87\": container with ID starting with 910f3836c18a07d09f534826b9c39c293a5998a2cb77d6598528f417de697f87 not found: ID does not exist" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.389887 4970 scope.go:117] 
"RemoveContainer" containerID="c44b71ac79cef6e55b4376f1bdb9ff80f7343b8b19c3e811ce02e8a9d6efeb57" Dec 09 12:31:10 crc kubenswrapper[4970]: E1209 12:31:10.390309 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c44b71ac79cef6e55b4376f1bdb9ff80f7343b8b19c3e811ce02e8a9d6efeb57\": container with ID starting with c44b71ac79cef6e55b4376f1bdb9ff80f7343b8b19c3e811ce02e8a9d6efeb57 not found: ID does not exist" containerID="c44b71ac79cef6e55b4376f1bdb9ff80f7343b8b19c3e811ce02e8a9d6efeb57" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.390362 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c44b71ac79cef6e55b4376f1bdb9ff80f7343b8b19c3e811ce02e8a9d6efeb57"} err="failed to get container status \"c44b71ac79cef6e55b4376f1bdb9ff80f7343b8b19c3e811ce02e8a9d6efeb57\": rpc error: code = NotFound desc = could not find container \"c44b71ac79cef6e55b4376f1bdb9ff80f7343b8b19c3e811ce02e8a9d6efeb57\": container with ID starting with c44b71ac79cef6e55b4376f1bdb9ff80f7343b8b19c3e811ce02e8a9d6efeb57 not found: ID does not exist" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.390390 4970 scope.go:117] "RemoveContainer" containerID="4114a93abdb98271dc7c6575c3585cf3d9609a45fe03068ffb227034368c9745" Dec 09 12:31:10 crc kubenswrapper[4970]: E1209 12:31:10.390714 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4114a93abdb98271dc7c6575c3585cf3d9609a45fe03068ffb227034368c9745\": container with ID starting with 4114a93abdb98271dc7c6575c3585cf3d9609a45fe03068ffb227034368c9745 not found: ID does not exist" containerID="4114a93abdb98271dc7c6575c3585cf3d9609a45fe03068ffb227034368c9745" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.390742 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4114a93abdb98271dc7c6575c3585cf3d9609a45fe03068ffb227034368c9745"} err="failed to get container status \"4114a93abdb98271dc7c6575c3585cf3d9609a45fe03068ffb227034368c9745\": rpc error: code = NotFound desc = could not find container \"4114a93abdb98271dc7c6575c3585cf3d9609a45fe03068ffb227034368c9745\": container with ID starting with 4114a93abdb98271dc7c6575c3585cf3d9609a45fe03068ffb227034368c9745 not found: ID does not exist" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.390760 4970 scope.go:117] "RemoveContainer" containerID="a875bbea325c61ccc9feba304fe4fc902a8c4782c7715a8fe97cee1972d56da9" Dec 09 12:31:10 crc kubenswrapper[4970]: E1209 12:31:10.391038 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a875bbea325c61ccc9feba304fe4fc902a8c4782c7715a8fe97cee1972d56da9\": container with ID starting with a875bbea325c61ccc9feba304fe4fc902a8c4782c7715a8fe97cee1972d56da9 not found: ID does not exist" containerID="a875bbea325c61ccc9feba304fe4fc902a8c4782c7715a8fe97cee1972d56da9" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.391071 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a875bbea325c61ccc9feba304fe4fc902a8c4782c7715a8fe97cee1972d56da9"} err="failed to get container status \"a875bbea325c61ccc9feba304fe4fc902a8c4782c7715a8fe97cee1972d56da9\": rpc error: code = NotFound desc = could not find container \"a875bbea325c61ccc9feba304fe4fc902a8c4782c7715a8fe97cee1972d56da9\": container with ID starting with 
a875bbea325c61ccc9feba304fe4fc902a8c4782c7715a8fe97cee1972d56da9 not found: ID does not exist" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.421824 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a206d099-424a-48ed-9c76-4539a74edcb4-log-httpd\") pod \"ceilometer-0\" (UID: \"a206d099-424a-48ed-9c76-4539a74edcb4\") " pod="openstack/ceilometer-0" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.422064 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a206d099-424a-48ed-9c76-4539a74edcb4-config-data\") pod \"ceilometer-0\" (UID: \"a206d099-424a-48ed-9c76-4539a74edcb4\") " pod="openstack/ceilometer-0" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.422151 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a206d099-424a-48ed-9c76-4539a74edcb4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a206d099-424a-48ed-9c76-4539a74edcb4\") " pod="openstack/ceilometer-0" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.422178 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a206d099-424a-48ed-9c76-4539a74edcb4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a206d099-424a-48ed-9c76-4539a74edcb4\") " pod="openstack/ceilometer-0" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.422207 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a206d099-424a-48ed-9c76-4539a74edcb4-scripts\") pod \"ceilometer-0\" (UID: \"a206d099-424a-48ed-9c76-4539a74edcb4\") " pod="openstack/ceilometer-0" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.422229 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a206d099-424a-48ed-9c76-4539a74edcb4-run-httpd\") pod \"ceilometer-0\" (UID: \"a206d099-424a-48ed-9c76-4539a74edcb4\") " pod="openstack/ceilometer-0" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.422337 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st8rt\" (UniqueName: \"kubernetes.io/projected/a206d099-424a-48ed-9c76-4539a74edcb4-kube-api-access-st8rt\") pod \"ceilometer-0\" (UID: \"a206d099-424a-48ed-9c76-4539a74edcb4\") " pod="openstack/ceilometer-0" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.534315 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a206d099-424a-48ed-9c76-4539a74edcb4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a206d099-424a-48ed-9c76-4539a74edcb4\") " pod="openstack/ceilometer-0" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.534437 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a206d099-424a-48ed-9c76-4539a74edcb4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a206d099-424a-48ed-9c76-4539a74edcb4\") " pod="openstack/ceilometer-0" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.534491 4970 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a206d099-424a-48ed-9c76-4539a74edcb4-scripts\") pod \"ceilometer-0\" (UID: \"a206d099-424a-48ed-9c76-4539a74edcb4\") " pod="openstack/ceilometer-0" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.534538 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a206d099-424a-48ed-9c76-4539a74edcb4-run-httpd\") pod \"ceilometer-0\" (UID: \"a206d099-424a-48ed-9c76-4539a74edcb4\") " pod="openstack/ceilometer-0" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.534576 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st8rt\" (UniqueName: \"kubernetes.io/projected/a206d099-424a-48ed-9c76-4539a74edcb4-kube-api-access-st8rt\") pod \"ceilometer-0\" (UID: \"a206d099-424a-48ed-9c76-4539a74edcb4\") " pod="openstack/ceilometer-0" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.534758 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a206d099-424a-48ed-9c76-4539a74edcb4-log-httpd\") pod \"ceilometer-0\" (UID: \"a206d099-424a-48ed-9c76-4539a74edcb4\") " pod="openstack/ceilometer-0" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.535835 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a206d099-424a-48ed-9c76-4539a74edcb4-run-httpd\") pod \"ceilometer-0\" (UID: \"a206d099-424a-48ed-9c76-4539a74edcb4\") " pod="openstack/ceilometer-0" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.536033 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a206d099-424a-48ed-9c76-4539a74edcb4-log-httpd\") pod \"ceilometer-0\" (UID: \"a206d099-424a-48ed-9c76-4539a74edcb4\") " pod="openstack/ceilometer-0" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.536334 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a206d099-424a-48ed-9c76-4539a74edcb4-config-data\") pod \"ceilometer-0\" (UID: \"a206d099-424a-48ed-9c76-4539a74edcb4\") " pod="openstack/ceilometer-0" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.539293 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a206d099-424a-48ed-9c76-4539a74edcb4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a206d099-424a-48ed-9c76-4539a74edcb4\") " pod="openstack/ceilometer-0" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.542985 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a206d099-424a-48ed-9c76-4539a74edcb4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a206d099-424a-48ed-9c76-4539a74edcb4\") " pod="openstack/ceilometer-0" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.549073 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a206d099-424a-48ed-9c76-4539a74edcb4-config-data\") pod \"ceilometer-0\" (UID: \"a206d099-424a-48ed-9c76-4539a74edcb4\") " pod="openstack/ceilometer-0" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.554863 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st8rt\" (UniqueName: 
\"kubernetes.io/projected/a206d099-424a-48ed-9c76-4539a74edcb4-kube-api-access-st8rt\") pod \"ceilometer-0\" (UID: \"a206d099-424a-48ed-9c76-4539a74edcb4\") " pod="openstack/ceilometer-0" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.558765 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a206d099-424a-48ed-9c76-4539a74edcb4-scripts\") pod \"ceilometer-0\" (UID: \"a206d099-424a-48ed-9c76-4539a74edcb4\") " pod="openstack/ceilometer-0" Dec 09 12:31:10 crc kubenswrapper[4970]: I1209 12:31:10.666793 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:31:11 crc kubenswrapper[4970]: I1209 12:31:11.179845 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:31:11 crc kubenswrapper[4970]: W1209 12:31:11.181476 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda206d099_424a_48ed_9c76_4539a74edcb4.slice/crio-53e3165e193434d05bdd8cd12993c15735e633d0af9092350281f68e23fbd7dc WatchSource:0}: Error finding container 53e3165e193434d05bdd8cd12993c15735e633d0af9092350281f68e23fbd7dc: Status 404 returned error can't find the container with id 53e3165e193434d05bdd8cd12993c15735e633d0af9092350281f68e23fbd7dc Dec 09 12:31:11 crc kubenswrapper[4970]: I1209 12:31:11.236929 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a206d099-424a-48ed-9c76-4539a74edcb4","Type":"ContainerStarted","Data":"53e3165e193434d05bdd8cd12993c15735e633d0af9092350281f68e23fbd7dc"} Dec 09 12:31:11 crc kubenswrapper[4970]: I1209 12:31:11.829701 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88c6c852-5b17-4dac-b7de-22891a28d17a" path="/var/lib/kubelet/pods/88c6c852-5b17-4dac-b7de-22891a28d17a/volumes" Dec 09 12:31:13 crc kubenswrapper[4970]: I1209 12:31:13.557364 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 09 12:31:14 crc kubenswrapper[4970]: I1209 12:31:14.436070 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 09 12:31:14 crc kubenswrapper[4970]: I1209 12:31:14.880139 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 09 12:31:14 crc kubenswrapper[4970]: I1209 12:31:14.881457 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 09 12:31:15 crc kubenswrapper[4970]: I1209 12:31:15.284406 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a206d099-424a-48ed-9c76-4539a74edcb4","Type":"ContainerStarted","Data":"51d8a0d7180b2113acd62629a460248b7686967b256692a12ab4b8fc4da45b19"} Dec 09 12:31:15 crc kubenswrapper[4970]: I1209 12:31:15.284723 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a206d099-424a-48ed-9c76-4539a74edcb4","Type":"ContainerStarted","Data":"95b7e0696491eb8f6d1c15310341365732618e841127edf29d27fbc1a5392012"} Dec 09 12:31:15 crc kubenswrapper[4970]: I1209 12:31:15.286526 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-mjrlc" event={"ID":"48a3105b-e980-44c6-bb0d-a8db895867ee","Type":"ContainerStarted","Data":"a8a42395f9dfb4c52018236db650334d890dacc088fadb7d03e14fd76ca0a48b"} Dec 09 12:31:15 crc kubenswrapper[4970]: 
I1209 12:31:15.316999 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-mjrlc" podStartSLOduration=2.263080476 podStartE2EDuration="7.316976327s" podCreationTimestamp="2025-12-09 12:31:08 +0000 UTC" firstStartedPulling="2025-12-09 12:31:09.265874978 +0000 UTC m=+1481.826356419" lastFinishedPulling="2025-12-09 12:31:14.319771219 +0000 UTC m=+1486.880252270" observedRunningTime="2025-12-09 12:31:15.303396727 +0000 UTC m=+1487.863877778" watchObservedRunningTime="2025-12-09 12:31:15.316976327 +0000 UTC m=+1487.877457378" Dec 09 12:31:15 crc kubenswrapper[4970]: I1209 12:31:15.894522 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="da82b114-dc90-4454-9b42-711065681a68" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.240:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 09 12:31:15 crc kubenswrapper[4970]: I1209 12:31:15.894532 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="da82b114-dc90-4454-9b42-711065681a68" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.240:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 09 12:31:16 crc kubenswrapper[4970]: I1209 12:31:16.010820 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:31:16 crc kubenswrapper[4970]: I1209 12:31:16.010885 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:31:17 crc kubenswrapper[4970]: I1209 12:31:17.570292 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 09 12:31:17 crc kubenswrapper[4970]: I1209 12:31:17.571446 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 09 12:31:18 crc kubenswrapper[4970]: I1209 12:31:18.324289 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a206d099-424a-48ed-9c76-4539a74edcb4","Type":"ContainerStarted","Data":"b3c861b858654334bcaf151bffcecbfe8ceebbbbc419bac9598952af625b2e05"} Dec 09 12:31:18 crc kubenswrapper[4970]: I1209 12:31:18.557669 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 09 12:31:18 crc kubenswrapper[4970]: I1209 12:31:18.591270 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 09 12:31:18 crc kubenswrapper[4970]: I1209 12:31:18.653681 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="141ca3a5-7aad-4ab7-b4b7-613a531f696f" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.241:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 12:31:18 crc kubenswrapper[4970]: I1209 12:31:18.653755 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
podUID="141ca3a5-7aad-4ab7-b4b7-613a531f696f" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.241:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 12:31:19 crc kubenswrapper[4970]: I1209 12:31:19.338535 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a206d099-424a-48ed-9c76-4539a74edcb4","Type":"ContainerStarted","Data":"6411a3b6959ac24c8fd14fe9665fb0a55eb50b3172bf68a890e6fdce11579799"} Dec 09 12:31:19 crc kubenswrapper[4970]: I1209 12:31:19.339146 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 12:31:19 crc kubenswrapper[4970]: I1209 12:31:19.341996 4970 generic.go:334] "Generic (PLEG): container finished" podID="48a3105b-e980-44c6-bb0d-a8db895867ee" containerID="a8a42395f9dfb4c52018236db650334d890dacc088fadb7d03e14fd76ca0a48b" exitCode=0 Dec 09 12:31:19 crc kubenswrapper[4970]: I1209 12:31:19.342277 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-mjrlc" event={"ID":"48a3105b-e980-44c6-bb0d-a8db895867ee","Type":"ContainerDied","Data":"a8a42395f9dfb4c52018236db650334d890dacc088fadb7d03e14fd76ca0a48b"} Dec 09 12:31:19 crc kubenswrapper[4970]: I1209 12:31:19.365879 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.6542836410000001 podStartE2EDuration="9.365857073s" podCreationTimestamp="2025-12-09 12:31:10 +0000 UTC" firstStartedPulling="2025-12-09 12:31:11.184075051 +0000 UTC m=+1483.744556102" lastFinishedPulling="2025-12-09 12:31:18.895648483 +0000 UTC m=+1491.456129534" observedRunningTime="2025-12-09 12:31:19.358158169 +0000 UTC m=+1491.918639230" watchObservedRunningTime="2025-12-09 12:31:19.365857073 +0000 UTC m=+1491.926338124" Dec 09 12:31:19 crc kubenswrapper[4970]: I1209 12:31:19.381382 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 09 12:31:20 crc kubenswrapper[4970]: I1209 12:31:20.773332 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-mjrlc" Dec 09 12:31:20 crc kubenswrapper[4970]: I1209 12:31:20.895986 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48a3105b-e980-44c6-bb0d-a8db895867ee-combined-ca-bundle\") pod \"48a3105b-e980-44c6-bb0d-a8db895867ee\" (UID: \"48a3105b-e980-44c6-bb0d-a8db895867ee\") " Dec 09 12:31:20 crc kubenswrapper[4970]: I1209 12:31:20.896048 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48a3105b-e980-44c6-bb0d-a8db895867ee-config-data\") pod \"48a3105b-e980-44c6-bb0d-a8db895867ee\" (UID: \"48a3105b-e980-44c6-bb0d-a8db895867ee\") " Dec 09 12:31:20 crc kubenswrapper[4970]: I1209 12:31:20.896219 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48a3105b-e980-44c6-bb0d-a8db895867ee-scripts\") pod \"48a3105b-e980-44c6-bb0d-a8db895867ee\" (UID: \"48a3105b-e980-44c6-bb0d-a8db895867ee\") " Dec 09 12:31:20 crc kubenswrapper[4970]: I1209 12:31:20.896337 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmhq8\" (UniqueName: \"kubernetes.io/projected/48a3105b-e980-44c6-bb0d-a8db895867ee-kube-api-access-mmhq8\") pod \"48a3105b-e980-44c6-bb0d-a8db895867ee\" (UID: \"48a3105b-e980-44c6-bb0d-a8db895867ee\") " Dec 09 12:31:20 crc kubenswrapper[4970]: I1209 12:31:20.902496 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48a3105b-e980-44c6-bb0d-a8db895867ee-kube-api-access-mmhq8" (OuterVolumeSpecName: "kube-api-access-mmhq8") pod "48a3105b-e980-44c6-bb0d-a8db895867ee" (UID: "48a3105b-e980-44c6-bb0d-a8db895867ee"). InnerVolumeSpecName "kube-api-access-mmhq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:31:20 crc kubenswrapper[4970]: I1209 12:31:20.916455 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48a3105b-e980-44c6-bb0d-a8db895867ee-scripts" (OuterVolumeSpecName: "scripts") pod "48a3105b-e980-44c6-bb0d-a8db895867ee" (UID: "48a3105b-e980-44c6-bb0d-a8db895867ee"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:20 crc kubenswrapper[4970]: I1209 12:31:20.936674 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48a3105b-e980-44c6-bb0d-a8db895867ee-config-data" (OuterVolumeSpecName: "config-data") pod "48a3105b-e980-44c6-bb0d-a8db895867ee" (UID: "48a3105b-e980-44c6-bb0d-a8db895867ee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:20 crc kubenswrapper[4970]: I1209 12:31:20.936789 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48a3105b-e980-44c6-bb0d-a8db895867ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48a3105b-e980-44c6-bb0d-a8db895867ee" (UID: "48a3105b-e980-44c6-bb0d-a8db895867ee"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:20 crc kubenswrapper[4970]: I1209 12:31:20.999322 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48a3105b-e980-44c6-bb0d-a8db895867ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:21 crc kubenswrapper[4970]: I1209 12:31:21.000414 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48a3105b-e980-44c6-bb0d-a8db895867ee-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:21 crc kubenswrapper[4970]: I1209 12:31:21.000438 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/48a3105b-e980-44c6-bb0d-a8db895867ee-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:21 crc kubenswrapper[4970]: I1209 12:31:21.000452 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmhq8\" (UniqueName: \"kubernetes.io/projected/48a3105b-e980-44c6-bb0d-a8db895867ee-kube-api-access-mmhq8\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:21 crc kubenswrapper[4970]: I1209 12:31:21.369005 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-mjrlc" event={"ID":"48a3105b-e980-44c6-bb0d-a8db895867ee","Type":"ContainerDied","Data":"64fda555ba0db9eafd900f7a2f18cd43e5e245c4fd3d11f2334b3aec84e7f7b3"} Dec 09 12:31:21 crc kubenswrapper[4970]: I1209 12:31:21.369035 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-mjrlc" Dec 09 12:31:21 crc kubenswrapper[4970]: I1209 12:31:21.369070 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64fda555ba0db9eafd900f7a2f18cd43e5e245c4fd3d11f2334b3aec84e7f7b3" Dec 09 12:31:22 crc kubenswrapper[4970]: I1209 12:31:22.854556 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Dec 09 12:31:22 crc kubenswrapper[4970]: E1209 12:31:22.855487 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48a3105b-e980-44c6-bb0d-a8db895867ee" containerName="aodh-db-sync" Dec 09 12:31:22 crc kubenswrapper[4970]: I1209 12:31:22.855505 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="48a3105b-e980-44c6-bb0d-a8db895867ee" containerName="aodh-db-sync" Dec 09 12:31:22 crc kubenswrapper[4970]: I1209 12:31:22.855753 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="48a3105b-e980-44c6-bb0d-a8db895867ee" containerName="aodh-db-sync" Dec 09 12:31:22 crc kubenswrapper[4970]: I1209 12:31:22.857805 4970 util.go:30] "No sandbox for pod can be found. 
Dec 09 12:31:22 crc kubenswrapper[4970]: I1209 12:31:22.857805 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Dec 09 12:31:22 crc kubenswrapper[4970]: I1209 12:31:22.860061 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts"
Dec 09 12:31:22 crc kubenswrapper[4970]: I1209 12:31:22.860202 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data"
Dec 09 12:31:22 crc kubenswrapper[4970]: I1209 12:31:22.861936 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-99pll"
Dec 09 12:31:22 crc kubenswrapper[4970]: I1209 12:31:22.869818 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Dec 09 12:31:22 crc kubenswrapper[4970]: I1209 12:31:22.948385 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01a42978-e19d-4fce-8974-de4926ff5ab8-scripts\") pod \"aodh-0\" (UID: \"01a42978-e19d-4fce-8974-de4926ff5ab8\") " pod="openstack/aodh-0"
Dec 09 12:31:22 crc kubenswrapper[4970]: I1209 12:31:22.948427 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01a42978-e19d-4fce-8974-de4926ff5ab8-config-data\") pod \"aodh-0\" (UID: \"01a42978-e19d-4fce-8974-de4926ff5ab8\") " pod="openstack/aodh-0"
Dec 09 12:31:22 crc kubenswrapper[4970]: I1209 12:31:22.948528 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01a42978-e19d-4fce-8974-de4926ff5ab8-combined-ca-bundle\") pod \"aodh-0\" (UID: \"01a42978-e19d-4fce-8974-de4926ff5ab8\") " pod="openstack/aodh-0"
Dec 09 12:31:22 crc kubenswrapper[4970]: I1209 12:31:22.948756 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx4vd\" (UniqueName: \"kubernetes.io/projected/01a42978-e19d-4fce-8974-de4926ff5ab8-kube-api-access-dx4vd\") pod \"aodh-0\" (UID: \"01a42978-e19d-4fce-8974-de4926ff5ab8\") " pod="openstack/aodh-0"
Dec 09 12:31:23 crc kubenswrapper[4970]: I1209 12:31:23.052846 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx4vd\" (UniqueName: \"kubernetes.io/projected/01a42978-e19d-4fce-8974-de4926ff5ab8-kube-api-access-dx4vd\") pod \"aodh-0\" (UID: \"01a42978-e19d-4fce-8974-de4926ff5ab8\") " pod="openstack/aodh-0"
Dec 09 12:31:23 crc kubenswrapper[4970]: I1209 12:31:23.053024 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01a42978-e19d-4fce-8974-de4926ff5ab8-scripts\") pod \"aodh-0\" (UID: \"01a42978-e19d-4fce-8974-de4926ff5ab8\") " pod="openstack/aodh-0"
Dec 09 12:31:23 crc kubenswrapper[4970]: I1209 12:31:23.053059 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01a42978-e19d-4fce-8974-de4926ff5ab8-config-data\") pod \"aodh-0\" (UID: \"01a42978-e19d-4fce-8974-de4926ff5ab8\") " pod="openstack/aodh-0"
Dec 09 12:31:23 crc kubenswrapper[4970]: I1209 12:31:23.053148 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01a42978-e19d-4fce-8974-de4926ff5ab8-combined-ca-bundle\") pod \"aodh-0\" (UID: \"01a42978-e19d-4fce-8974-de4926ff5ab8\") " pod="openstack/aodh-0"
Dec 09 12:31:23 crc kubenswrapper[4970]: I1209 12:31:23.064778 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01a42978-e19d-4fce-8974-de4926ff5ab8-config-data\") pod \"aodh-0\" (UID: \"01a42978-e19d-4fce-8974-de4926ff5ab8\") " pod="openstack/aodh-0"
Dec 09 12:31:23 crc kubenswrapper[4970]: I1209 12:31:23.073993 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01a42978-e19d-4fce-8974-de4926ff5ab8-scripts\") pod \"aodh-0\" (UID: \"01a42978-e19d-4fce-8974-de4926ff5ab8\") " pod="openstack/aodh-0"
Dec 09 12:31:23 crc kubenswrapper[4970]: I1209 12:31:23.077025 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01a42978-e19d-4fce-8974-de4926ff5ab8-combined-ca-bundle\") pod \"aodh-0\" (UID: \"01a42978-e19d-4fce-8974-de4926ff5ab8\") " pod="openstack/aodh-0"
Dec 09 12:31:23 crc kubenswrapper[4970]: I1209 12:31:23.097959 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx4vd\" (UniqueName: \"kubernetes.io/projected/01a42978-e19d-4fce-8974-de4926ff5ab8-kube-api-access-dx4vd\") pod \"aodh-0\" (UID: \"01a42978-e19d-4fce-8974-de4926ff5ab8\") " pod="openstack/aodh-0"
Dec 09 12:31:23 crc kubenswrapper[4970]: I1209 12:31:23.182183 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Dec 09 12:31:23 crc kubenswrapper[4970]: I1209 12:31:23.791614 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Dec 09 12:31:23 crc kubenswrapper[4970]: W1209 12:31:23.796702 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01a42978_e19d_4fce_8974_de4926ff5ab8.slice/crio-ea78cee7fbdd5bc6a0afdc2f85f3bd850f23b6900dff41183695a62e8afb2c08 WatchSource:0}: Error finding container ea78cee7fbdd5bc6a0afdc2f85f3bd850f23b6900dff41183695a62e8afb2c08: Status 404 returned error can't find the container with id ea78cee7fbdd5bc6a0afdc2f85f3bd850f23b6900dff41183695a62e8afb2c08
Dec 09 12:31:24 crc kubenswrapper[4970]: I1209 12:31:24.419299 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"01a42978-e19d-4fce-8974-de4926ff5ab8","Type":"ContainerStarted","Data":"ea78cee7fbdd5bc6a0afdc2f85f3bd850f23b6900dff41183695a62e8afb2c08"}
Dec 09 12:31:24 crc kubenswrapper[4970]: I1209 12:31:24.904897 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Dec 09 12:31:24 crc kubenswrapper[4970]: I1209 12:31:24.905226 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Dec 09 12:31:24 crc kubenswrapper[4970]: I1209 12:31:24.920010 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Dec 09 12:31:24 crc kubenswrapper[4970]: I1209 12:31:24.935262 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
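
The aodh-0 block above shows the volume manager's full happy path for each volume: operationExecutor.VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded. Teardown (seen earlier for aodh-db-sync-mjrlc) mirrors it: UnmountVolume started, UnmountVolume.TearDown succeeded, and finally "Volume detached". A small sketch for pulling one pod's volume lifecycle out of a journal dump, assuming the log has been saved to a file (kubelet.log is a hypothetical path):

    import re

    # Stages in the order the volume manager logs them, mount then unmount.
    STAGES = ("VerifyControllerAttachedVolume", "MountVolume started",
              "MountVolume.SetUp succeeded", "UnmountVolume started",
              "UnmountVolume.TearDown succeeded", "Volume detached")

    def volume_events(path: str, pod_uid: str):
        # Volume names appear as: volume \"scripts\" (backslash-escaped quotes).
        volume = re.compile(r'volume \\?"([\w-]+)\\?"')
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                if pod_uid not in line:
                    continue
                stage = next((s for s in STAGES if s in line), None)
                m = volume.search(line)
                if stage and m:
                    yield stage, m.group(1)

    for stage, name in volume_events("kubelet.log", "01a42978-e19d-4fce-8974-de4926ff5ab8"):
        print(f"{name}: {stage}")
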
Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.368634 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.436037 4970 generic.go:334] "Generic (PLEG): container finished" podID="dc32ae23-f0a5-43d2-a1e0-edca8c7b617a" containerID="54fa7cae3d0996f13a18ac97806e0d31dbb365daa46fc5f08d1edda08fe0f2d6" exitCode=137
Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.436162 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.436162 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"dc32ae23-f0a5-43d2-a1e0-edca8c7b617a","Type":"ContainerDied","Data":"54fa7cae3d0996f13a18ac97806e0d31dbb365daa46fc5f08d1edda08fe0f2d6"}
Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.436578 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"dc32ae23-f0a5-43d2-a1e0-edca8c7b617a","Type":"ContainerDied","Data":"99cc1847307ac435880c4164ae53cfb8134561d541f88e3cc4f0852ec341e48f"}
Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.436599 4970 scope.go:117] "RemoveContainer" containerID="54fa7cae3d0996f13a18ac97806e0d31dbb365daa46fc5f08d1edda08fe0f2d6"
Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.444810 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"01a42978-e19d-4fce-8974-de4926ff5ab8","Type":"ContainerStarted","Data":"f69d64e0f8ac25f484f61971f1815ddb90fb341b9488ee1ef17095d4c60b39c6"}
Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.444851 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc32ae23-f0a5-43d2-a1e0-edca8c7b617a-combined-ca-bundle\") pod \"dc32ae23-f0a5-43d2-a1e0-edca8c7b617a\" (UID: \"dc32ae23-f0a5-43d2-a1e0-edca8c7b617a\") "
Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.445014 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc32ae23-f0a5-43d2-a1e0-edca8c7b617a-config-data\") pod \"dc32ae23-f0a5-43d2-a1e0-edca8c7b617a\" (UID: \"dc32ae23-f0a5-43d2-a1e0-edca8c7b617a\") "
Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.445063 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t22dz\" (UniqueName: \"kubernetes.io/projected/dc32ae23-f0a5-43d2-a1e0-edca8c7b617a-kube-api-access-t22dz\") pod \"dc32ae23-f0a5-43d2-a1e0-edca8c7b617a\" (UID: \"dc32ae23-f0a5-43d2-a1e0-edca8c7b617a\") "
Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.451207 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc32ae23-f0a5-43d2-a1e0-edca8c7b617a-kube-api-access-t22dz" (OuterVolumeSpecName: "kube-api-access-t22dz") pod "dc32ae23-f0a5-43d2-a1e0-edca8c7b617a" (UID: "dc32ae23-f0a5-43d2-a1e0-edca8c7b617a"). InnerVolumeSpecName "kube-api-access-t22dz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.473386 4970 scope.go:117] "RemoveContainer" containerID="54fa7cae3d0996f13a18ac97806e0d31dbb365daa46fc5f08d1edda08fe0f2d6"
Dec 09 12:31:25 crc kubenswrapper[4970]: E1209 12:31:25.476627 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54fa7cae3d0996f13a18ac97806e0d31dbb365daa46fc5f08d1edda08fe0f2d6\": container with ID starting with 54fa7cae3d0996f13a18ac97806e0d31dbb365daa46fc5f08d1edda08fe0f2d6 not found: ID does not exist" containerID="54fa7cae3d0996f13a18ac97806e0d31dbb365daa46fc5f08d1edda08fe0f2d6"
Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.476674 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54fa7cae3d0996f13a18ac97806e0d31dbb365daa46fc5f08d1edda08fe0f2d6"} err="failed to get container status \"54fa7cae3d0996f13a18ac97806e0d31dbb365daa46fc5f08d1edda08fe0f2d6\": rpc error: code = NotFound desc = could not find container \"54fa7cae3d0996f13a18ac97806e0d31dbb365daa46fc5f08d1edda08fe0f2d6\": container with ID starting with 54fa7cae3d0996f13a18ac97806e0d31dbb365daa46fc5f08d1edda08fe0f2d6 not found: ID does not exist"
Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.490855 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc32ae23-f0a5-43d2-a1e0-edca8c7b617a-config-data" (OuterVolumeSpecName: "config-data") pod "dc32ae23-f0a5-43d2-a1e0-edca8c7b617a" (UID: "dc32ae23-f0a5-43d2-a1e0-edca8c7b617a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.504559 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc32ae23-f0a5-43d2-a1e0-edca8c7b617a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc32ae23-f0a5-43d2-a1e0-edca8c7b617a" (UID: "dc32ae23-f0a5-43d2-a1e0-edca8c7b617a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.548056 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc32ae23-f0a5-43d2-a1e0-edca8c7b617a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.548092 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc32ae23-f0a5-43d2-a1e0-edca8c7b617a-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.548102 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t22dz\" (UniqueName: \"kubernetes.io/projected/dc32ae23-f0a5-43d2-a1e0-edca8c7b617a-kube-api-access-t22dz\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.779853 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.796944 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.811066 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 12:31:25 crc kubenswrapper[4970]: E1209 12:31:25.811577 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc32ae23-f0a5-43d2-a1e0-edca8c7b617a" containerName="nova-cell1-novncproxy-novncproxy" Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.811594 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc32ae23-f0a5-43d2-a1e0-edca8c7b617a" containerName="nova-cell1-novncproxy-novncproxy" Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.811816 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc32ae23-f0a5-43d2-a1e0-edca8c7b617a" containerName="nova-cell1-novncproxy-novncproxy" Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.814718 4970 util.go:30] "No sandbox for pod can be found. 
Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.819075 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.819322 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.819496 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.834500 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc32ae23-f0a5-43d2-a1e0-edca8c7b617a" path="/var/lib/kubelet/pods/dc32ae23-f0a5-43d2-a1e0-edca8c7b617a/volumes"
Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.835117 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.835387 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a206d099-424a-48ed-9c76-4539a74edcb4" containerName="ceilometer-central-agent" containerID="cri-o://95b7e0696491eb8f6d1c15310341365732618e841127edf29d27fbc1a5392012" gracePeriod=30
Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.835497 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a206d099-424a-48ed-9c76-4539a74edcb4" containerName="ceilometer-notification-agent" containerID="cri-o://51d8a0d7180b2113acd62629a460248b7686967b256692a12ab4b8fc4da45b19" gracePeriod=30
Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.835503 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a206d099-424a-48ed-9c76-4539a74edcb4" containerName="sg-core" containerID="cri-o://b3c861b858654334bcaf151bffcecbfe8ceebbbbc419bac9598952af625b2e05" gracePeriod=30
Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.835622 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a206d099-424a-48ed-9c76-4539a74edcb4" containerName="proxy-httpd" containerID="cri-o://6411a3b6959ac24c8fd14fe9665fb0a55eb50b3172bf68a890e6fdce11579799" gracePeriod=30
Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.861730 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.864648 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b94a74d-8219-4cba-b0a1-511a0086c0ad-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3b94a74d-8219-4cba-b0a1-511a0086c0ad\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.864726 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b94a74d-8219-4cba-b0a1-511a0086c0ad-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3b94a74d-8219-4cba-b0a1-511a0086c0ad\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.864881 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b94a74d-8219-4cba-b0a1-511a0086c0ad-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3b94a74d-8219-4cba-b0a1-511a0086c0ad\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.865944 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b94a74d-8219-4cba-b0a1-511a0086c0ad-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3b94a74d-8219-4cba-b0a1-511a0086c0ad\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.866052 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfjsg\" (UniqueName: \"kubernetes.io/projected/3b94a74d-8219-4cba-b0a1-511a0086c0ad-kube-api-access-kfjsg\") pod \"nova-cell1-novncproxy-0\" (UID: \"3b94a74d-8219-4cba-b0a1-511a0086c0ad\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.971150 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b94a74d-8219-4cba-b0a1-511a0086c0ad-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3b94a74d-8219-4cba-b0a1-511a0086c0ad\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.971500 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b94a74d-8219-4cba-b0a1-511a0086c0ad-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3b94a74d-8219-4cba-b0a1-511a0086c0ad\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.971564 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b94a74d-8219-4cba-b0a1-511a0086c0ad-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3b94a74d-8219-4cba-b0a1-511a0086c0ad\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.971659 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b94a74d-8219-4cba-b0a1-511a0086c0ad-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3b94a74d-8219-4cba-b0a1-511a0086c0ad\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.971711 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfjsg\" (UniqueName: \"kubernetes.io/projected/3b94a74d-8219-4cba-b0a1-511a0086c0ad-kube-api-access-kfjsg\") pod \"nova-cell1-novncproxy-0\" (UID: \"3b94a74d-8219-4cba-b0a1-511a0086c0ad\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.977485 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b94a74d-8219-4cba-b0a1-511a0086c0ad-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3b94a74d-8219-4cba-b0a1-511a0086c0ad\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.979331 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b94a74d-8219-4cba-b0a1-511a0086c0ad-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3b94a74d-8219-4cba-b0a1-511a0086c0ad\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.984765 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b94a74d-8219-4cba-b0a1-511a0086c0ad-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3b94a74d-8219-4cba-b0a1-511a0086c0ad\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.994642 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfjsg\" (UniqueName: \"kubernetes.io/projected/3b94a74d-8219-4cba-b0a1-511a0086c0ad-kube-api-access-kfjsg\") pod \"nova-cell1-novncproxy-0\" (UID: \"3b94a74d-8219-4cba-b0a1-511a0086c0ad\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 09 12:31:25 crc kubenswrapper[4970]: I1209 12:31:25.999676 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b94a74d-8219-4cba-b0a1-511a0086c0ad-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3b94a74d-8219-4cba-b0a1-511a0086c0ad\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 09 12:31:26 crc kubenswrapper[4970]: I1209 12:31:26.159960 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 09 12:31:26 crc kubenswrapper[4970]: I1209 12:31:26.422970 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"]
Dec 09 12:31:26 crc kubenswrapper[4970]: I1209 12:31:26.483989 4970 generic.go:334] "Generic (PLEG): container finished" podID="a206d099-424a-48ed-9c76-4539a74edcb4" containerID="6411a3b6959ac24c8fd14fe9665fb0a55eb50b3172bf68a890e6fdce11579799" exitCode=0
Dec 09 12:31:26 crc kubenswrapper[4970]: I1209 12:31:26.484027 4970 generic.go:334] "Generic (PLEG): container finished" podID="a206d099-424a-48ed-9c76-4539a74edcb4" containerID="b3c861b858654334bcaf151bffcecbfe8ceebbbbc419bac9598952af625b2e05" exitCode=2
Dec 09 12:31:26 crc kubenswrapper[4970]: I1209 12:31:26.484039 4970 generic.go:334] "Generic (PLEG): container finished" podID="a206d099-424a-48ed-9c76-4539a74edcb4" containerID="95b7e0696491eb8f6d1c15310341365732618e841127edf29d27fbc1a5392012" exitCode=0
Dec 09 12:31:26 crc kubenswrapper[4970]: I1209 12:31:26.484090 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a206d099-424a-48ed-9c76-4539a74edcb4","Type":"ContainerDied","Data":"6411a3b6959ac24c8fd14fe9665fb0a55eb50b3172bf68a890e6fdce11579799"}
Dec 09 12:31:26 crc kubenswrapper[4970]: I1209 12:31:26.484124 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a206d099-424a-48ed-9c76-4539a74edcb4","Type":"ContainerDied","Data":"b3c861b858654334bcaf151bffcecbfe8ceebbbbc419bac9598952af625b2e05"}
Dec 09 12:31:26 crc kubenswrapper[4970]: I1209 12:31:26.484138 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a206d099-424a-48ed-9c76-4539a74edcb4","Type":"ContainerDied","Data":"95b7e0696491eb8f6d1c15310341365732618e841127edf29d27fbc1a5392012"}
Dec 09 12:31:26 crc kubenswrapper[4970]: W1209 12:31:26.742596 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b94a74d_8219_4cba_b0a1_511a0086c0ad.slice/crio-2aa4b6fb2e7481cafc85cb7d409de661e9484c0646530b0e304c2bf7419fb484 WatchSource:0}: Error finding container 2aa4b6fb2e7481cafc85cb7d409de661e9484c0646530b0e304c2bf7419fb484: Status 404 returned error can't find the container with id 2aa4b6fb2e7481cafc85cb7d409de661e9484c0646530b0e304c2bf7419fb484
Dec 09 12:31:26 crc kubenswrapper[4970]: I1209 12:31:26.747825 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 09 12:31:27 crc kubenswrapper[4970]: I1209 12:31:27.505034 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3b94a74d-8219-4cba-b0a1-511a0086c0ad","Type":"ContainerStarted","Data":"80aa286505d706adde030d9323a06bd311e74a2f25d8e2621815f6d9ce7845c8"}
Dec 09 12:31:27 crc kubenswrapper[4970]: I1209 12:31:27.505412 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3b94a74d-8219-4cba-b0a1-511a0086c0ad","Type":"ContainerStarted","Data":"2aa4b6fb2e7481cafc85cb7d409de661e9484c0646530b0e304c2bf7419fb484"}
Dec 09 12:31:27 crc kubenswrapper[4970]: I1209 12:31:27.509092 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"01a42978-e19d-4fce-8974-de4926ff5ab8","Type":"ContainerStarted","Data":"7da0be7a31fd8c3bd419409236c312478c6b5b666f3b92607ba433fcd9387031"}
Dec 09 12:31:27 crc kubenswrapper[4970]: I1209 12:31:27.539670 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.5396288030000003 podStartE2EDuration="2.539628803s" podCreationTimestamp="2025-12-09 12:31:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:31:27.528743664 +0000 UTC m=+1500.089224715" watchObservedRunningTime="2025-12-09 12:31:27.539628803 +0000 UTC m=+1500.100109854"
Dec 09 12:31:27 crc kubenswrapper[4970]: I1209 12:31:27.574974 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Dec 09 12:31:27 crc kubenswrapper[4970]: I1209 12:31:27.575665 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Dec 09 12:31:27 crc kubenswrapper[4970]: I1209 12:31:27.575763 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Dec 09 12:31:27 crc kubenswrapper[4970]: I1209 12:31:27.584749 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Dec 09 12:31:28 crc kubenswrapper[4970]: I1209 12:31:28.528677 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"01a42978-e19d-4fce-8974-de4926ff5ab8","Type":"ContainerStarted","Data":"5e107555cea9bc62b23e22e0bc94c2ab2f627eb1a42578a265c133842c49171d"}
Dec 09 12:31:28 crc kubenswrapper[4970]: I1209 12:31:28.535543 4970 generic.go:334] "Generic (PLEG): container finished" podID="a206d099-424a-48ed-9c76-4539a74edcb4" containerID="51d8a0d7180b2113acd62629a460248b7686967b256692a12ab4b8fc4da45b19" exitCode=0
Dec 09 12:31:28 crc kubenswrapper[4970]: I1209 12:31:28.535730 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a206d099-424a-48ed-9c76-4539a74edcb4","Type":"ContainerDied","Data":"51d8a0d7180b2113acd62629a460248b7686967b256692a12ab4b8fc4da45b19"}
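
The startup-latency line for nova-cell1-novncproxy-0 is worth decoding: podStartSLOduration subtracts image-pull time from the end-to-end figure, and both pull timestamps are the zero value here because every image was already cached, so SLO and E2E agree at 2.539628803s, i.e. watchObservedRunningTime (12:31:27.539628803) minus podCreationTimestamp (12:31:25). (The trailing digits in podStartSLOduration=2.5396288030000003 are just float64 noise.) Reproducing the subtraction, with the values copied from the entry above:

    from datetime import datetime

    created = datetime.fromisoformat("2025-12-09 12:31:25+00:00")
    # truncate nanoseconds to microseconds, which is datetime's precision
    observed = datetime.fromisoformat("2025-12-09 12:31:27.539628803"[:26] + "+00:00")

    print((observed - created).total_seconds())  # ~2.539629
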
Dec 09 12:31:28 crc kubenswrapper[4970]: I1209 12:31:28.536310 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Dec 09 12:31:28 crc kubenswrapper[4970]: I1209 12:31:28.544675 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Dec 09 12:31:28 crc kubenswrapper[4970]: I1209 12:31:28.776099 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-z6lgq"]
Dec 09 12:31:28 crc kubenswrapper[4970]: I1209 12:31:28.785907 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-z6lgq"
Dec 09 12:31:28 crc kubenswrapper[4970]: I1209 12:31:28.829913 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-z6lgq"]
Dec 09 12:31:28 crc kubenswrapper[4970]: I1209 12:31:28.831322 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 09 12:31:28 crc kubenswrapper[4970]: I1209 12:31:28.885695 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a206d099-424a-48ed-9c76-4539a74edcb4-sg-core-conf-yaml\") pod \"a206d099-424a-48ed-9c76-4539a74edcb4\" (UID: \"a206d099-424a-48ed-9c76-4539a74edcb4\") "
Dec 09 12:31:28 crc kubenswrapper[4970]: I1209 12:31:28.885811 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a206d099-424a-48ed-9c76-4539a74edcb4-config-data\") pod \"a206d099-424a-48ed-9c76-4539a74edcb4\" (UID: \"a206d099-424a-48ed-9c76-4539a74edcb4\") "
Dec 09 12:31:28 crc kubenswrapper[4970]: I1209 12:31:28.885950 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a206d099-424a-48ed-9c76-4539a74edcb4-scripts\") pod \"a206d099-424a-48ed-9c76-4539a74edcb4\" (UID: \"a206d099-424a-48ed-9c76-4539a74edcb4\") "
Dec 09 12:31:28 crc kubenswrapper[4970]: I1209 12:31:28.886007 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a206d099-424a-48ed-9c76-4539a74edcb4-run-httpd\") pod \"a206d099-424a-48ed-9c76-4539a74edcb4\" (UID: \"a206d099-424a-48ed-9c76-4539a74edcb4\") "
Dec 09 12:31:28 crc kubenswrapper[4970]: I1209 12:31:28.886082 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a206d099-424a-48ed-9c76-4539a74edcb4-combined-ca-bundle\") pod \"a206d099-424a-48ed-9c76-4539a74edcb4\" (UID: \"a206d099-424a-48ed-9c76-4539a74edcb4\") "
Dec 09 12:31:28 crc kubenswrapper[4970]: I1209 12:31:28.886173 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st8rt\" (UniqueName: \"kubernetes.io/projected/a206d099-424a-48ed-9c76-4539a74edcb4-kube-api-access-st8rt\") pod \"a206d099-424a-48ed-9c76-4539a74edcb4\" (UID: \"a206d099-424a-48ed-9c76-4539a74edcb4\") "
Dec 09 12:31:28 crc kubenswrapper[4970]: I1209 12:31:28.886236 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a206d099-424a-48ed-9c76-4539a74edcb4-log-httpd\") pod \"a206d099-424a-48ed-9c76-4539a74edcb4\" (UID: \"a206d099-424a-48ed-9c76-4539a74edcb4\") "
Dec 09 12:31:28 crc kubenswrapper[4970]: I1209 12:31:28.886607 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b9b69347-ce23-455a-9ede-a31b66193240-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-z6lgq\" (UID: \"b9b69347-ce23-455a-9ede-a31b66193240\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-z6lgq"
Dec 09 12:31:28 crc kubenswrapper[4970]: I1209 12:31:28.886806 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9b69347-ce23-455a-9ede-a31b66193240-config\") pod \"dnsmasq-dns-6b7bbf7cf9-z6lgq\" (UID: \"b9b69347-ce23-455a-9ede-a31b66193240\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-z6lgq"
Dec 09 12:31:28 crc kubenswrapper[4970]: I1209 12:31:28.886856 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b9b69347-ce23-455a-9ede-a31b66193240-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-z6lgq\" (UID: \"b9b69347-ce23-455a-9ede-a31b66193240\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-z6lgq"
Dec 09 12:31:28 crc kubenswrapper[4970]: I1209 12:31:28.886939 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9b69347-ce23-455a-9ede-a31b66193240-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-z6lgq\" (UID: \"b9b69347-ce23-455a-9ede-a31b66193240\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-z6lgq"
Dec 09 12:31:28 crc kubenswrapper[4970]: I1209 12:31:28.886994 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cc46\" (UniqueName: \"kubernetes.io/projected/b9b69347-ce23-455a-9ede-a31b66193240-kube-api-access-5cc46\") pod \"dnsmasq-dns-6b7bbf7cf9-z6lgq\" (UID: \"b9b69347-ce23-455a-9ede-a31b66193240\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-z6lgq"
Dec 09 12:31:28 crc kubenswrapper[4970]: I1209 12:31:28.887076 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b9b69347-ce23-455a-9ede-a31b66193240-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-z6lgq\" (UID: \"b9b69347-ce23-455a-9ede-a31b66193240\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-z6lgq"
Dec 09 12:31:28 crc kubenswrapper[4970]: I1209 12:31:28.888222 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a206d099-424a-48ed-9c76-4539a74edcb4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a206d099-424a-48ed-9c76-4539a74edcb4" (UID: "a206d099-424a-48ed-9c76-4539a74edcb4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:31:28 crc kubenswrapper[4970]: I1209 12:31:28.894821 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a206d099-424a-48ed-9c76-4539a74edcb4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a206d099-424a-48ed-9c76-4539a74edcb4" (UID: "a206d099-424a-48ed-9c76-4539a74edcb4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 12:31:28 crc kubenswrapper[4970]: I1209 12:31:28.935502 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a206d099-424a-48ed-9c76-4539a74edcb4-kube-api-access-st8rt" (OuterVolumeSpecName: "kube-api-access-st8rt") pod "a206d099-424a-48ed-9c76-4539a74edcb4" (UID: "a206d099-424a-48ed-9c76-4539a74edcb4"). InnerVolumeSpecName "kube-api-access-st8rt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:31:28 crc kubenswrapper[4970]: I1209 12:31:28.950420 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a206d099-424a-48ed-9c76-4539a74edcb4-scripts" (OuterVolumeSpecName: "scripts") pod "a206d099-424a-48ed-9c76-4539a74edcb4" (UID: "a206d099-424a-48ed-9c76-4539a74edcb4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:31:28 crc kubenswrapper[4970]: I1209 12:31:28.989430 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b9b69347-ce23-455a-9ede-a31b66193240-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-z6lgq\" (UID: \"b9b69347-ce23-455a-9ede-a31b66193240\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-z6lgq"
Dec 09 12:31:28 crc kubenswrapper[4970]: I1209 12:31:28.989590 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9b69347-ce23-455a-9ede-a31b66193240-config\") pod \"dnsmasq-dns-6b7bbf7cf9-z6lgq\" (UID: \"b9b69347-ce23-455a-9ede-a31b66193240\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-z6lgq"
Dec 09 12:31:28 crc kubenswrapper[4970]: I1209 12:31:28.989624 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b9b69347-ce23-455a-9ede-a31b66193240-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-z6lgq\" (UID: \"b9b69347-ce23-455a-9ede-a31b66193240\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-z6lgq"
Dec 09 12:31:28 crc kubenswrapper[4970]: I1209 12:31:28.989669 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9b69347-ce23-455a-9ede-a31b66193240-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-z6lgq\" (UID: \"b9b69347-ce23-455a-9ede-a31b66193240\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-z6lgq"
Dec 09 12:31:28 crc kubenswrapper[4970]: I1209 12:31:28.989707 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cc46\" (UniqueName: \"kubernetes.io/projected/b9b69347-ce23-455a-9ede-a31b66193240-kube-api-access-5cc46\") pod \"dnsmasq-dns-6b7bbf7cf9-z6lgq\" (UID: \"b9b69347-ce23-455a-9ede-a31b66193240\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-z6lgq"
Dec 09 12:31:28 crc kubenswrapper[4970]: I1209 12:31:28.989752 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b9b69347-ce23-455a-9ede-a31b66193240-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-z6lgq\" (UID: \"b9b69347-ce23-455a-9ede-a31b66193240\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-z6lgq"
Dec 09 12:31:28 crc kubenswrapper[4970]: I1209 12:31:28.989833 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-st8rt\" (UniqueName: \"kubernetes.io/projected/a206d099-424a-48ed-9c76-4539a74edcb4-kube-api-access-st8rt\") on node \"crc\" DevicePath \"\""
Dec 09 12:31:28 crc kubenswrapper[4970]: I1209 12:31:28.989847 4970 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a206d099-424a-48ed-9c76-4539a74edcb4-log-httpd\") on node \"crc\" DevicePath \"\""
Dec 09 12:31:28 crc kubenswrapper[4970]: I1209 12:31:28.989856 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a206d099-424a-48ed-9c76-4539a74edcb4-scripts\") on node \"crc\" DevicePath \"\""
Dec 09 12:31:28 crc kubenswrapper[4970]: I1209 12:31:28.990279 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b9b69347-ce23-455a-9ede-a31b66193240-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-z6lgq\" (UID: \"b9b69347-ce23-455a-9ede-a31b66193240\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-z6lgq"
Dec 09 12:31:28 crc kubenswrapper[4970]: I1209 12:31:28.990785 4970 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a206d099-424a-48ed-9c76-4539a74edcb4-run-httpd\") on node \"crc\" DevicePath \"\""
Dec 09 12:31:28 crc kubenswrapper[4970]: I1209 12:31:28.991291 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b9b69347-ce23-455a-9ede-a31b66193240-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-z6lgq\" (UID: \"b9b69347-ce23-455a-9ede-a31b66193240\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-z6lgq"
Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:28.991649 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b9b69347-ce23-455a-9ede-a31b66193240-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-z6lgq\" (UID: \"b9b69347-ce23-455a-9ede-a31b66193240\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-z6lgq"
Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:28.992287 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9b69347-ce23-455a-9ede-a31b66193240-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-z6lgq\" (UID: \"b9b69347-ce23-455a-9ede-a31b66193240\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-z6lgq"
Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:28.992619 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9b69347-ce23-455a-9ede-a31b66193240-config\") pod \"dnsmasq-dns-6b7bbf7cf9-z6lgq\" (UID: \"b9b69347-ce23-455a-9ede-a31b66193240\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-z6lgq"
Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.004974 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a206d099-424a-48ed-9c76-4539a74edcb4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a206d099-424a-48ed-9c76-4539a74edcb4" (UID: "a206d099-424a-48ed-9c76-4539a74edcb4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.020054 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cc46\" (UniqueName: \"kubernetes.io/projected/b9b69347-ce23-455a-9ede-a31b66193240-kube-api-access-5cc46\") pod \"dnsmasq-dns-6b7bbf7cf9-z6lgq\" (UID: \"b9b69347-ce23-455a-9ede-a31b66193240\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-z6lgq" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.069583 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a206d099-424a-48ed-9c76-4539a74edcb4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a206d099-424a-48ed-9c76-4539a74edcb4" (UID: "a206d099-424a-48ed-9c76-4539a74edcb4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.092811 4970 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a206d099-424a-48ed-9c76-4539a74edcb4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.092839 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a206d099-424a-48ed-9c76-4539a74edcb4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.127767 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a206d099-424a-48ed-9c76-4539a74edcb4-config-data" (OuterVolumeSpecName: "config-data") pod "a206d099-424a-48ed-9c76-4539a74edcb4" (UID: "a206d099-424a-48ed-9c76-4539a74edcb4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.150940 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-z6lgq" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.196515 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a206d099-424a-48ed-9c76-4539a74edcb4-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.560237 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.564075 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a206d099-424a-48ed-9c76-4539a74edcb4","Type":"ContainerDied","Data":"53e3165e193434d05bdd8cd12993c15735e633d0af9092350281f68e23fbd7dc"} Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.564165 4970 scope.go:117] "RemoveContainer" containerID="6411a3b6959ac24c8fd14fe9665fb0a55eb50b3172bf68a890e6fdce11579799" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.712594 4970 scope.go:117] "RemoveContainer" containerID="b3c861b858654334bcaf151bffcecbfe8ceebbbbc419bac9598952af625b2e05" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.723512 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.739231 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.755385 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:31:29 crc kubenswrapper[4970]: E1209 12:31:29.756223 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a206d099-424a-48ed-9c76-4539a74edcb4" containerName="proxy-httpd" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.756265 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="a206d099-424a-48ed-9c76-4539a74edcb4" containerName="proxy-httpd" Dec 09 12:31:29 crc kubenswrapper[4970]: E1209 12:31:29.756298 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a206d099-424a-48ed-9c76-4539a74edcb4" containerName="ceilometer-notification-agent" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.756305 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="a206d099-424a-48ed-9c76-4539a74edcb4" containerName="ceilometer-notification-agent" Dec 09 12:31:29 crc kubenswrapper[4970]: E1209 12:31:29.756315 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a206d099-424a-48ed-9c76-4539a74edcb4" containerName="sg-core" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.756323 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="a206d099-424a-48ed-9c76-4539a74edcb4" containerName="sg-core" Dec 09 12:31:29 crc kubenswrapper[4970]: E1209 12:31:29.756340 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a206d099-424a-48ed-9c76-4539a74edcb4" containerName="ceilometer-central-agent" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.756346 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="a206d099-424a-48ed-9c76-4539a74edcb4" containerName="ceilometer-central-agent" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.756585 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="a206d099-424a-48ed-9c76-4539a74edcb4" containerName="proxy-httpd" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.756604 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="a206d099-424a-48ed-9c76-4539a74edcb4" containerName="sg-core" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.756619 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="a206d099-424a-48ed-9c76-4539a74edcb4" containerName="ceilometer-central-agent" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.756637 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="a206d099-424a-48ed-9c76-4539a74edcb4" 
containerName="ceilometer-notification-agent" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.758520 4970 scope.go:117] "RemoveContainer" containerID="51d8a0d7180b2113acd62629a460248b7686967b256692a12ab4b8fc4da45b19" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.761352 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.764499 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.764739 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.787734 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.827974 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e7aef90-339c-401e-bd95-23cbe44d09b0-run-httpd\") pod \"ceilometer-0\" (UID: \"7e7aef90-339c-401e-bd95-23cbe44d09b0\") " pod="openstack/ceilometer-0" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.828022 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7e7aef90-339c-401e-bd95-23cbe44d09b0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7e7aef90-339c-401e-bd95-23cbe44d09b0\") " pod="openstack/ceilometer-0" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.828083 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45rdn\" (UniqueName: \"kubernetes.io/projected/7e7aef90-339c-401e-bd95-23cbe44d09b0-kube-api-access-45rdn\") pod \"ceilometer-0\" (UID: \"7e7aef90-339c-401e-bd95-23cbe44d09b0\") " pod="openstack/ceilometer-0" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.828190 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e7aef90-339c-401e-bd95-23cbe44d09b0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7e7aef90-339c-401e-bd95-23cbe44d09b0\") " pod="openstack/ceilometer-0" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.828237 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e7aef90-339c-401e-bd95-23cbe44d09b0-scripts\") pod \"ceilometer-0\" (UID: \"7e7aef90-339c-401e-bd95-23cbe44d09b0\") " pod="openstack/ceilometer-0" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.828456 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e7aef90-339c-401e-bd95-23cbe44d09b0-config-data\") pod \"ceilometer-0\" (UID: \"7e7aef90-339c-401e-bd95-23cbe44d09b0\") " pod="openstack/ceilometer-0" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.828518 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e7aef90-339c-401e-bd95-23cbe44d09b0-log-httpd\") pod \"ceilometer-0\" (UID: \"7e7aef90-339c-401e-bd95-23cbe44d09b0\") " pod="openstack/ceilometer-0" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.852412 4970 
scope.go:117] "RemoveContainer" containerID="95b7e0696491eb8f6d1c15310341365732618e841127edf29d27fbc1a5392012" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.862706 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a206d099-424a-48ed-9c76-4539a74edcb4" path="/var/lib/kubelet/pods/a206d099-424a-48ed-9c76-4539a74edcb4/volumes" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.873412 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-z6lgq"] Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.942783 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e7aef90-339c-401e-bd95-23cbe44d09b0-scripts\") pod \"ceilometer-0\" (UID: \"7e7aef90-339c-401e-bd95-23cbe44d09b0\") " pod="openstack/ceilometer-0" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.943238 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e7aef90-339c-401e-bd95-23cbe44d09b0-config-data\") pod \"ceilometer-0\" (UID: \"7e7aef90-339c-401e-bd95-23cbe44d09b0\") " pod="openstack/ceilometer-0" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.943309 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e7aef90-339c-401e-bd95-23cbe44d09b0-log-httpd\") pod \"ceilometer-0\" (UID: \"7e7aef90-339c-401e-bd95-23cbe44d09b0\") " pod="openstack/ceilometer-0" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.943378 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e7aef90-339c-401e-bd95-23cbe44d09b0-run-httpd\") pod \"ceilometer-0\" (UID: \"7e7aef90-339c-401e-bd95-23cbe44d09b0\") " pod="openstack/ceilometer-0" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.943417 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7e7aef90-339c-401e-bd95-23cbe44d09b0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7e7aef90-339c-401e-bd95-23cbe44d09b0\") " pod="openstack/ceilometer-0" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.943519 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45rdn\" (UniqueName: \"kubernetes.io/projected/7e7aef90-339c-401e-bd95-23cbe44d09b0-kube-api-access-45rdn\") pod \"ceilometer-0\" (UID: \"7e7aef90-339c-401e-bd95-23cbe44d09b0\") " pod="openstack/ceilometer-0" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.943739 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e7aef90-339c-401e-bd95-23cbe44d09b0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7e7aef90-339c-401e-bd95-23cbe44d09b0\") " pod="openstack/ceilometer-0" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.944882 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e7aef90-339c-401e-bd95-23cbe44d09b0-run-httpd\") pod \"ceilometer-0\" (UID: \"7e7aef90-339c-401e-bd95-23cbe44d09b0\") " pod="openstack/ceilometer-0" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.956894 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/7e7aef90-339c-401e-bd95-23cbe44d09b0-log-httpd\") pod \"ceilometer-0\" (UID: \"7e7aef90-339c-401e-bd95-23cbe44d09b0\") " pod="openstack/ceilometer-0" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.957168 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e7aef90-339c-401e-bd95-23cbe44d09b0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7e7aef90-339c-401e-bd95-23cbe44d09b0\") " pod="openstack/ceilometer-0" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.958580 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7e7aef90-339c-401e-bd95-23cbe44d09b0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7e7aef90-339c-401e-bd95-23cbe44d09b0\") " pod="openstack/ceilometer-0" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.959729 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e7aef90-339c-401e-bd95-23cbe44d09b0-scripts\") pod \"ceilometer-0\" (UID: \"7e7aef90-339c-401e-bd95-23cbe44d09b0\") " pod="openstack/ceilometer-0" Dec 09 12:31:29 crc kubenswrapper[4970]: I1209 12:31:29.974535 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e7aef90-339c-401e-bd95-23cbe44d09b0-config-data\") pod \"ceilometer-0\" (UID: \"7e7aef90-339c-401e-bd95-23cbe44d09b0\") " pod="openstack/ceilometer-0" Dec 09 12:31:30 crc kubenswrapper[4970]: I1209 12:31:30.013707 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45rdn\" (UniqueName: \"kubernetes.io/projected/7e7aef90-339c-401e-bd95-23cbe44d09b0-kube-api-access-45rdn\") pod \"ceilometer-0\" (UID: \"7e7aef90-339c-401e-bd95-23cbe44d09b0\") " pod="openstack/ceilometer-0" Dec 09 12:31:30 crc kubenswrapper[4970]: I1209 12:31:30.117373 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:31:30 crc kubenswrapper[4970]: I1209 12:31:30.204236 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zglb2"] Dec 09 12:31:30 crc kubenswrapper[4970]: I1209 12:31:30.207202 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zglb2" Dec 09 12:31:30 crc kubenswrapper[4970]: I1209 12:31:30.228497 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zglb2"] Dec 09 12:31:30 crc kubenswrapper[4970]: I1209 12:31:30.359701 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c45bz\" (UniqueName: \"kubernetes.io/projected/c48899b9-165a-4054-a1eb-47b69a0fc3c2-kube-api-access-c45bz\") pod \"redhat-marketplace-zglb2\" (UID: \"c48899b9-165a-4054-a1eb-47b69a0fc3c2\") " pod="openshift-marketplace/redhat-marketplace-zglb2" Dec 09 12:31:30 crc kubenswrapper[4970]: I1209 12:31:30.359918 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c48899b9-165a-4054-a1eb-47b69a0fc3c2-utilities\") pod \"redhat-marketplace-zglb2\" (UID: \"c48899b9-165a-4054-a1eb-47b69a0fc3c2\") " pod="openshift-marketplace/redhat-marketplace-zglb2" Dec 09 12:31:30 crc kubenswrapper[4970]: I1209 12:31:30.360011 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c48899b9-165a-4054-a1eb-47b69a0fc3c2-catalog-content\") pod \"redhat-marketplace-zglb2\" (UID: \"c48899b9-165a-4054-a1eb-47b69a0fc3c2\") " pod="openshift-marketplace/redhat-marketplace-zglb2" Dec 09 12:31:30 crc kubenswrapper[4970]: I1209 12:31:30.462047 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c48899b9-165a-4054-a1eb-47b69a0fc3c2-utilities\") pod \"redhat-marketplace-zglb2\" (UID: \"c48899b9-165a-4054-a1eb-47b69a0fc3c2\") " pod="openshift-marketplace/redhat-marketplace-zglb2" Dec 09 12:31:30 crc kubenswrapper[4970]: I1209 12:31:30.462156 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c48899b9-165a-4054-a1eb-47b69a0fc3c2-catalog-content\") pod \"redhat-marketplace-zglb2\" (UID: \"c48899b9-165a-4054-a1eb-47b69a0fc3c2\") " pod="openshift-marketplace/redhat-marketplace-zglb2" Dec 09 12:31:30 crc kubenswrapper[4970]: I1209 12:31:30.462204 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c45bz\" (UniqueName: \"kubernetes.io/projected/c48899b9-165a-4054-a1eb-47b69a0fc3c2-kube-api-access-c45bz\") pod \"redhat-marketplace-zglb2\" (UID: \"c48899b9-165a-4054-a1eb-47b69a0fc3c2\") " pod="openshift-marketplace/redhat-marketplace-zglb2" Dec 09 12:31:30 crc kubenswrapper[4970]: I1209 12:31:30.463034 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c48899b9-165a-4054-a1eb-47b69a0fc3c2-utilities\") pod \"redhat-marketplace-zglb2\" (UID: \"c48899b9-165a-4054-a1eb-47b69a0fc3c2\") " pod="openshift-marketplace/redhat-marketplace-zglb2" Dec 09 12:31:30 crc kubenswrapper[4970]: I1209 12:31:30.463266 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c48899b9-165a-4054-a1eb-47b69a0fc3c2-catalog-content\") pod \"redhat-marketplace-zglb2\" (UID: \"c48899b9-165a-4054-a1eb-47b69a0fc3c2\") " pod="openshift-marketplace/redhat-marketplace-zglb2" Dec 09 12:31:30 crc kubenswrapper[4970]: I1209 12:31:30.480194 4970 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-c45bz\" (UniqueName: \"kubernetes.io/projected/c48899b9-165a-4054-a1eb-47b69a0fc3c2-kube-api-access-c45bz\") pod \"redhat-marketplace-zglb2\" (UID: \"c48899b9-165a-4054-a1eb-47b69a0fc3c2\") " pod="openshift-marketplace/redhat-marketplace-zglb2" Dec 09 12:31:30 crc kubenswrapper[4970]: I1209 12:31:30.533797 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zglb2" Dec 09 12:31:30 crc kubenswrapper[4970]: I1209 12:31:30.606324 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-z6lgq" event={"ID":"b9b69347-ce23-455a-9ede-a31b66193240","Type":"ContainerStarted","Data":"18107f7693d9da8ea51fbf90f26f1085eeb741d63b1376c9df52549d0ca34eff"} Dec 09 12:31:31 crc kubenswrapper[4970]: I1209 12:31:31.160752 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:31:31 crc kubenswrapper[4970]: I1209 12:31:31.623781 4970 generic.go:334] "Generic (PLEG): container finished" podID="b9b69347-ce23-455a-9ede-a31b66193240" containerID="709ee67de1c4fb22f7b272aa6b456b056599dd19086174271e9b5b7ace49607c" exitCode=0 Dec 09 12:31:31 crc kubenswrapper[4970]: I1209 12:31:31.624461 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-z6lgq" event={"ID":"b9b69347-ce23-455a-9ede-a31b66193240","Type":"ContainerDied","Data":"709ee67de1c4fb22f7b272aa6b456b056599dd19086174271e9b5b7ace49607c"} Dec 09 12:31:31 crc kubenswrapper[4970]: I1209 12:31:31.629242 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="01a42978-e19d-4fce-8974-de4926ff5ab8" containerName="aodh-api" containerID="cri-o://f69d64e0f8ac25f484f61971f1815ddb90fb341b9488ee1ef17095d4c60b39c6" gracePeriod=30 Dec 09 12:31:31 crc kubenswrapper[4970]: I1209 12:31:31.629349 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="01a42978-e19d-4fce-8974-de4926ff5ab8" containerName="aodh-listener" containerID="cri-o://c3dd4cafe8ded6821605100415bbd907372bebcb4dc387d255699781ad7ca0b2" gracePeriod=30 Dec 09 12:31:31 crc kubenswrapper[4970]: I1209 12:31:31.629385 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="01a42978-e19d-4fce-8974-de4926ff5ab8" containerName="aodh-notifier" containerID="cri-o://5e107555cea9bc62b23e22e0bc94c2ab2f627eb1a42578a265c133842c49171d" gracePeriod=30 Dec 09 12:31:31 crc kubenswrapper[4970]: I1209 12:31:31.629422 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="01a42978-e19d-4fce-8974-de4926ff5ab8" containerName="aodh-evaluator" containerID="cri-o://7da0be7a31fd8c3bd419409236c312478c6b5b666f3b92607ba433fcd9387031" gracePeriod=30 Dec 09 12:31:31 crc kubenswrapper[4970]: I1209 12:31:31.728540 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.286972259 podStartE2EDuration="9.728495463s" podCreationTimestamp="2025-12-09 12:31:22 +0000 UTC" firstStartedPulling="2025-12-09 12:31:23.80055199 +0000 UTC m=+1496.361033041" lastFinishedPulling="2025-12-09 12:31:31.242075194 +0000 UTC m=+1503.802556245" observedRunningTime="2025-12-09 12:31:31.703706596 +0000 UTC m=+1504.264187647" watchObservedRunningTime="2025-12-09 12:31:31.728495463 +0000 UTC m=+1504.288976514" Dec 09 12:31:31 crc kubenswrapper[4970]: I1209 12:31:31.874480 4970 
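
Unlike the novncproxy case above, aodh-0 did pull images: podStartE2EDuration is 9.728495463s (creation at 12:31:22 to watchObservedRunningTime 12:31:31.728495463), the pull window runs from firstStartedPulling 12:31:23.80055199 to lastFinishedPulling 12:31:31.242075194 (7.441523204s), and podStartSLOduration is the difference, 2.286972259s. Checking the arithmetic with the seconds-past-the-minute values from the entry:

    e2e = (31 + 0.728495463) - 22                   # watchObservedRunningTime - creation
    pull = (31 + 0.242075194) - (23 + 0.80055199)   # lastFinishedPulling - firstStartedPulling
    print(round(e2e, 9), round(pull, 9), round(e2e - pull, 9))
    # 9.728495463 7.441523204 2.286972259
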
Dec 09 12:31:31 crc kubenswrapper[4970]: I1209 12:31:31.874718 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="141ca3a5-7aad-4ab7-b4b7-613a531f696f" containerName="nova-api-log" containerID="cri-o://f82b3e3136d4d90f4ceef31c2446ce0602d64cf579c57156d462b8af46455c7b" gracePeriod=30
Dec 09 12:31:31 crc kubenswrapper[4970]: I1209 12:31:31.875153 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="141ca3a5-7aad-4ab7-b4b7-613a531f696f" containerName="nova-api-api" containerID="cri-o://7ed6a21643ee12e2ee43085af9f08ab9be4b29a9964b0cb29051510417b464c1" gracePeriod=30
Dec 09 12:31:31 crc kubenswrapper[4970]: I1209 12:31:31.887201 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zglb2"]
Dec 09 12:31:32 crc kubenswrapper[4970]: I1209 12:31:32.049381 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 09 12:31:32 crc kubenswrapper[4970]: I1209 12:31:32.702965 4970 generic.go:334] "Generic (PLEG): container finished" podID="141ca3a5-7aad-4ab7-b4b7-613a531f696f" containerID="f82b3e3136d4d90f4ceef31c2446ce0602d64cf579c57156d462b8af46455c7b" exitCode=143
Dec 09 12:31:32 crc kubenswrapper[4970]: I1209 12:31:32.703050 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"141ca3a5-7aad-4ab7-b4b7-613a531f696f","Type":"ContainerDied","Data":"f82b3e3136d4d90f4ceef31c2446ce0602d64cf579c57156d462b8af46455c7b"}
Dec 09 12:31:32 crc kubenswrapper[4970]: I1209 12:31:32.707698 4970 generic.go:334] "Generic (PLEG): container finished" podID="01a42978-e19d-4fce-8974-de4926ff5ab8" containerID="5e107555cea9bc62b23e22e0bc94c2ab2f627eb1a42578a265c133842c49171d" exitCode=0
Dec 09 12:31:32 crc kubenswrapper[4970]: I1209 12:31:32.707731 4970 generic.go:334] "Generic (PLEG): container finished" podID="01a42978-e19d-4fce-8974-de4926ff5ab8" containerID="7da0be7a31fd8c3bd419409236c312478c6b5b666f3b92607ba433fcd9387031" exitCode=0
Dec 09 12:31:32 crc kubenswrapper[4970]: I1209 12:31:32.707742 4970 generic.go:334] "Generic (PLEG): container finished" podID="01a42978-e19d-4fce-8974-de4926ff5ab8" containerID="f69d64e0f8ac25f484f61971f1815ddb90fb341b9488ee1ef17095d4c60b39c6" exitCode=0
Dec 09 12:31:32 crc kubenswrapper[4970]: I1209 12:31:32.707786 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"01a42978-e19d-4fce-8974-de4926ff5ab8","Type":"ContainerStarted","Data":"c3dd4cafe8ded6821605100415bbd907372bebcb4dc387d255699781ad7ca0b2"}
Dec 09 12:31:32 crc kubenswrapper[4970]: I1209 12:31:32.707810 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"01a42978-e19d-4fce-8974-de4926ff5ab8","Type":"ContainerDied","Data":"5e107555cea9bc62b23e22e0bc94c2ab2f627eb1a42578a265c133842c49171d"}
Dec 09 12:31:32 crc kubenswrapper[4970]: I1209 12:31:32.707819 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"01a42978-e19d-4fce-8974-de4926ff5ab8","Type":"ContainerDied","Data":"7da0be7a31fd8c3bd419409236c312478c6b5b666f3b92607ba433fcd9387031"}
Dec 09 12:31:32 crc kubenswrapper[4970]: I1209 12:31:32.707827 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"01a42978-e19d-4fce-8974-de4926ff5ab8","Type":"ContainerDied","Data":"f69d64e0f8ac25f484f61971f1815ddb90fb341b9488ee1ef17095d4c60b39c6"}
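One detail in the entries above: exitCode=143 for nova-api-log is the 128+signal convention, meaning the container died to the SIGTERM delivered by the graceful kill, while the aodh containers' exitCode=0 means they shut down cleanly within their grace period. A quick check of the convention (this is the POSIX shell/runtime encoding, nothing kubelet-specific; the snippet assumes a Unix platform):

```go
package main

import (
	"fmt"
	"syscall"
)

func main() {
	// Container runtimes report signal deaths as 128 + signal number.
	exitCode := 143
	sig := syscall.Signal(exitCode - 128)
	fmt.Printf("exit code %d => signal %d (%s)\n", exitCode, int(sig), sig) // 15, "terminated" (SIGTERM)
}
```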
event={"ID":"01a42978-e19d-4fce-8974-de4926ff5ab8","Type":"ContainerDied","Data":"f69d64e0f8ac25f484f61971f1815ddb90fb341b9488ee1ef17095d4c60b39c6"} Dec 09 12:31:32 crc kubenswrapper[4970]: I1209 12:31:32.712187 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e7aef90-339c-401e-bd95-23cbe44d09b0","Type":"ContainerStarted","Data":"596de810f5ea01a4a798864d45741fabf3ab293fec13546d661602c5b9f3118b"} Dec 09 12:31:32 crc kubenswrapper[4970]: I1209 12:31:32.718343 4970 generic.go:334] "Generic (PLEG): container finished" podID="c48899b9-165a-4054-a1eb-47b69a0fc3c2" containerID="5021a154484b861476a1ba411f5046455b548df7cd6d082d925debb7508577af" exitCode=0 Dec 09 12:31:32 crc kubenswrapper[4970]: I1209 12:31:32.718422 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zglb2" event={"ID":"c48899b9-165a-4054-a1eb-47b69a0fc3c2","Type":"ContainerDied","Data":"5021a154484b861476a1ba411f5046455b548df7cd6d082d925debb7508577af"} Dec 09 12:31:32 crc kubenswrapper[4970]: I1209 12:31:32.718450 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zglb2" event={"ID":"c48899b9-165a-4054-a1eb-47b69a0fc3c2","Type":"ContainerStarted","Data":"a33801614c9e6086c5918d4b4eeb8ee481cc5c559b48a67f902b2acdddf73470"} Dec 09 12:31:32 crc kubenswrapper[4970]: I1209 12:31:32.721585 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-z6lgq" event={"ID":"b9b69347-ce23-455a-9ede-a31b66193240","Type":"ContainerStarted","Data":"fb1fc12b58e30a8931a806a91b12b8aa8d1abd90b89a03b9c60837767e4249a1"} Dec 09 12:31:32 crc kubenswrapper[4970]: I1209 12:31:32.722833 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7bbf7cf9-z6lgq" Dec 09 12:31:32 crc kubenswrapper[4970]: I1209 12:31:32.738375 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:31:32 crc kubenswrapper[4970]: I1209 12:31:32.775033 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7bbf7cf9-z6lgq" podStartSLOduration=4.77501296 podStartE2EDuration="4.77501296s" podCreationTimestamp="2025-12-09 12:31:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:31:32.766094534 +0000 UTC m=+1505.326575585" watchObservedRunningTime="2025-12-09 12:31:32.77501296 +0000 UTC m=+1505.335494011" Dec 09 12:31:33 crc kubenswrapper[4970]: I1209 12:31:33.738232 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e7aef90-339c-401e-bd95-23cbe44d09b0","Type":"ContainerStarted","Data":"298882aca20897d15ec32b53ecc2e9b9a5b7c690811e7466064b4697561fb024"} Dec 09 12:31:34 crc kubenswrapper[4970]: I1209 12:31:34.752593 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e7aef90-339c-401e-bd95-23cbe44d09b0","Type":"ContainerStarted","Data":"7893c4594cb7bcc94fdc4667c7f740634c6f1c92fd19ffaa2bff9e67db69cb38"} Dec 09 12:31:34 crc kubenswrapper[4970]: I1209 12:31:34.752904 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e7aef90-339c-401e-bd95-23cbe44d09b0","Type":"ContainerStarted","Data":"54d19a9988f2ee26b7850929934751b4ca15b943d00647dc5400d4fca4bea6fb"} Dec 09 12:31:34 crc kubenswrapper[4970]: I1209 12:31:34.759341 4970 generic.go:334] "Generic (PLEG): 
container finished" podID="c48899b9-165a-4054-a1eb-47b69a0fc3c2" containerID="6bf317f4a1054e3f00e07e4ce5351a673cbab84c89b1cd9a47128eb2ee4d84d3" exitCode=0 Dec 09 12:31:34 crc kubenswrapper[4970]: I1209 12:31:34.759399 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zglb2" event={"ID":"c48899b9-165a-4054-a1eb-47b69a0fc3c2","Type":"ContainerDied","Data":"6bf317f4a1054e3f00e07e4ce5351a673cbab84c89b1cd9a47128eb2ee4d84d3"} Dec 09 12:31:35 crc kubenswrapper[4970]: I1209 12:31:35.780847 4970 generic.go:334] "Generic (PLEG): container finished" podID="141ca3a5-7aad-4ab7-b4b7-613a531f696f" containerID="7ed6a21643ee12e2ee43085af9f08ab9be4b29a9964b0cb29051510417b464c1" exitCode=0 Dec 09 12:31:35 crc kubenswrapper[4970]: I1209 12:31:35.780897 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"141ca3a5-7aad-4ab7-b4b7-613a531f696f","Type":"ContainerDied","Data":"7ed6a21643ee12e2ee43085af9f08ab9be4b29a9964b0cb29051510417b464c1"} Dec 09 12:31:36 crc kubenswrapper[4970]: I1209 12:31:36.161061 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:31:36 crc kubenswrapper[4970]: I1209 12:31:36.208701 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:31:36 crc kubenswrapper[4970]: I1209 12:31:36.419084 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 09 12:31:36 crc kubenswrapper[4970]: I1209 12:31:36.562083 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzlwm\" (UniqueName: \"kubernetes.io/projected/141ca3a5-7aad-4ab7-b4b7-613a531f696f-kube-api-access-zzlwm\") pod \"141ca3a5-7aad-4ab7-b4b7-613a531f696f\" (UID: \"141ca3a5-7aad-4ab7-b4b7-613a531f696f\") " Dec 09 12:31:36 crc kubenswrapper[4970]: I1209 12:31:36.562643 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141ca3a5-7aad-4ab7-b4b7-613a531f696f-combined-ca-bundle\") pod \"141ca3a5-7aad-4ab7-b4b7-613a531f696f\" (UID: \"141ca3a5-7aad-4ab7-b4b7-613a531f696f\") " Dec 09 12:31:36 crc kubenswrapper[4970]: I1209 12:31:36.562843 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/141ca3a5-7aad-4ab7-b4b7-613a531f696f-config-data\") pod \"141ca3a5-7aad-4ab7-b4b7-613a531f696f\" (UID: \"141ca3a5-7aad-4ab7-b4b7-613a531f696f\") " Dec 09 12:31:36 crc kubenswrapper[4970]: I1209 12:31:36.562932 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/141ca3a5-7aad-4ab7-b4b7-613a531f696f-logs\") pod \"141ca3a5-7aad-4ab7-b4b7-613a531f696f\" (UID: \"141ca3a5-7aad-4ab7-b4b7-613a531f696f\") " Dec 09 12:31:36 crc kubenswrapper[4970]: I1209 12:31:36.563878 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/141ca3a5-7aad-4ab7-b4b7-613a531f696f-logs" (OuterVolumeSpecName: "logs") pod "141ca3a5-7aad-4ab7-b4b7-613a531f696f" (UID: "141ca3a5-7aad-4ab7-b4b7-613a531f696f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:31:36 crc kubenswrapper[4970]: I1209 12:31:36.568440 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/141ca3a5-7aad-4ab7-b4b7-613a531f696f-kube-api-access-zzlwm" (OuterVolumeSpecName: "kube-api-access-zzlwm") pod "141ca3a5-7aad-4ab7-b4b7-613a531f696f" (UID: "141ca3a5-7aad-4ab7-b4b7-613a531f696f"). InnerVolumeSpecName "kube-api-access-zzlwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:31:36 crc kubenswrapper[4970]: I1209 12:31:36.613344 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/141ca3a5-7aad-4ab7-b4b7-613a531f696f-config-data" (OuterVolumeSpecName: "config-data") pod "141ca3a5-7aad-4ab7-b4b7-613a531f696f" (UID: "141ca3a5-7aad-4ab7-b4b7-613a531f696f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:36 crc kubenswrapper[4970]: I1209 12:31:36.614232 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/141ca3a5-7aad-4ab7-b4b7-613a531f696f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "141ca3a5-7aad-4ab7-b4b7-613a531f696f" (UID: "141ca3a5-7aad-4ab7-b4b7-613a531f696f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:36 crc kubenswrapper[4970]: I1209 12:31:36.665510 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141ca3a5-7aad-4ab7-b4b7-613a531f696f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:36 crc kubenswrapper[4970]: I1209 12:31:36.665547 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/141ca3a5-7aad-4ab7-b4b7-613a531f696f-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:36 crc kubenswrapper[4970]: I1209 12:31:36.665558 4970 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/141ca3a5-7aad-4ab7-b4b7-613a531f696f-logs\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:36 crc kubenswrapper[4970]: I1209 12:31:36.665566 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzlwm\" (UniqueName: \"kubernetes.io/projected/141ca3a5-7aad-4ab7-b4b7-613a531f696f-kube-api-access-zzlwm\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:36 crc kubenswrapper[4970]: I1209 12:31:36.796079 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e7aef90-339c-401e-bd95-23cbe44d09b0","Type":"ContainerStarted","Data":"27bd8d84a0b30f5abf2ac517c90361c0bfd2dc166b1be289d398dfba072a86f3"} Dec 09 12:31:36 crc kubenswrapper[4970]: I1209 12:31:36.796215 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7e7aef90-339c-401e-bd95-23cbe44d09b0" containerName="ceilometer-central-agent" containerID="cri-o://298882aca20897d15ec32b53ecc2e9b9a5b7c690811e7466064b4697561fb024" gracePeriod=30 Dec 09 12:31:36 crc kubenswrapper[4970]: I1209 12:31:36.796264 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7e7aef90-339c-401e-bd95-23cbe44d09b0" containerName="sg-core" containerID="cri-o://7893c4594cb7bcc94fdc4667c7f740634c6f1c92fd19ffaa2bff9e67db69cb38" gracePeriod=30 Dec 09 12:31:36 crc kubenswrapper[4970]: I1209 12:31:36.796277 4970 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7e7aef90-339c-401e-bd95-23cbe44d09b0" containerName="proxy-httpd" containerID="cri-o://27bd8d84a0b30f5abf2ac517c90361c0bfd2dc166b1be289d398dfba072a86f3" gracePeriod=30 Dec 09 12:31:36 crc kubenswrapper[4970]: I1209 12:31:36.796283 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7e7aef90-339c-401e-bd95-23cbe44d09b0" containerName="ceilometer-notification-agent" containerID="cri-o://54d19a9988f2ee26b7850929934751b4ca15b943d00647dc5400d4fca4bea6fb" gracePeriod=30 Dec 09 12:31:36 crc kubenswrapper[4970]: I1209 12:31:36.796517 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 12:31:36 crc kubenswrapper[4970]: I1209 12:31:36.804754 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zglb2" event={"ID":"c48899b9-165a-4054-a1eb-47b69a0fc3c2","Type":"ContainerStarted","Data":"7525c0cce1a52bcb451fd120d909322332dee376ef182183be62b40aa577b73d"} Dec 09 12:31:36 crc kubenswrapper[4970]: I1209 12:31:36.809424 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 09 12:31:36 crc kubenswrapper[4970]: I1209 12:31:36.809707 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"141ca3a5-7aad-4ab7-b4b7-613a531f696f","Type":"ContainerDied","Data":"add0332b0e42f4fbc0c43d9f0fcac2fe3fd987f47f78e6ced19588bd7f91e086"} Dec 09 12:31:36 crc kubenswrapper[4970]: I1209 12:31:36.809755 4970 scope.go:117] "RemoveContainer" containerID="7ed6a21643ee12e2ee43085af9f08ab9be4b29a9964b0cb29051510417b464c1" Dec 09 12:31:36 crc kubenswrapper[4970]: I1209 12:31:36.837953 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.759357645 podStartE2EDuration="7.837929199s" podCreationTimestamp="2025-12-09 12:31:29 +0000 UTC" firstStartedPulling="2025-12-09 12:31:32.072111034 +0000 UTC m=+1504.632592085" lastFinishedPulling="2025-12-09 12:31:36.150682588 +0000 UTC m=+1508.711163639" observedRunningTime="2025-12-09 12:31:36.822713445 +0000 UTC m=+1509.383194496" watchObservedRunningTime="2025-12-09 12:31:36.837929199 +0000 UTC m=+1509.398410250" Dec 09 12:31:36 crc kubenswrapper[4970]: I1209 12:31:36.844459 4970 scope.go:117] "RemoveContainer" containerID="f82b3e3136d4d90f4ceef31c2446ce0602d64cf579c57156d462b8af46455c7b" Dec 09 12:31:36 crc kubenswrapper[4970]: I1209 12:31:36.858621 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 09 12:31:36 crc kubenswrapper[4970]: I1209 12:31:36.862114 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zglb2" podStartSLOduration=3.607431114 podStartE2EDuration="6.86209203s" podCreationTimestamp="2025-12-09 12:31:30 +0000 UTC" firstStartedPulling="2025-12-09 12:31:32.721156971 +0000 UTC m=+1505.281638022" lastFinishedPulling="2025-12-09 12:31:35.975817897 +0000 UTC m=+1508.536298938" observedRunningTime="2025-12-09 12:31:36.848611273 +0000 UTC m=+1509.409092324" watchObservedRunningTime="2025-12-09 12:31:36.86209203 +0000 UTC m=+1509.422573091" Dec 09 12:31:36 crc kubenswrapper[4970]: I1209 12:31:36.908086 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 09 12:31:36 crc kubenswrapper[4970]: I1209 12:31:36.953176 4970 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 09 12:31:36 crc kubenswrapper[4970]: I1209 12:31:36.953462 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 09 12:31:36 crc kubenswrapper[4970]: E1209 12:31:36.954317 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="141ca3a5-7aad-4ab7-b4b7-613a531f696f" containerName="nova-api-api" Dec 09 12:31:36 crc kubenswrapper[4970]: I1209 12:31:36.954353 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="141ca3a5-7aad-4ab7-b4b7-613a531f696f" containerName="nova-api-api" Dec 09 12:31:36 crc kubenswrapper[4970]: E1209 12:31:36.954381 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="141ca3a5-7aad-4ab7-b4b7-613a531f696f" containerName="nova-api-log" Dec 09 12:31:36 crc kubenswrapper[4970]: I1209 12:31:36.954390 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="141ca3a5-7aad-4ab7-b4b7-613a531f696f" containerName="nova-api-log" Dec 09 12:31:36 crc kubenswrapper[4970]: I1209 12:31:36.954761 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="141ca3a5-7aad-4ab7-b4b7-613a531f696f" containerName="nova-api-log" Dec 09 12:31:36 crc kubenswrapper[4970]: I1209 12:31:36.954784 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="141ca3a5-7aad-4ab7-b4b7-613a531f696f" containerName="nova-api-api" Dec 09 12:31:36 crc kubenswrapper[4970]: I1209 12:31:36.956463 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 09 12:31:36 crc kubenswrapper[4970]: I1209 12:31:36.975295 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 09 12:31:36 crc kubenswrapper[4970]: I1209 12:31:36.975493 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 09 12:31:36 crc kubenswrapper[4970]: I1209 12:31:36.975601 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 09 12:31:36 crc kubenswrapper[4970]: I1209 12:31:36.982199 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.077372 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f32da17a-39b3-4e8e-82f5-d786aeb16266-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f32da17a-39b3-4e8e-82f5-d786aeb16266\") " pod="openstack/nova-api-0" Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.077531 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f32da17a-39b3-4e8e-82f5-d786aeb16266-logs\") pod \"nova-api-0\" (UID: \"f32da17a-39b3-4e8e-82f5-d786aeb16266\") " pod="openstack/nova-api-0" Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.077548 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f32da17a-39b3-4e8e-82f5-d786aeb16266-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f32da17a-39b3-4e8e-82f5-d786aeb16266\") " pod="openstack/nova-api-0" Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.077613 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
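Note that the pod name nova-api-0 is reused while the UID changes from 141ca3a5-… to f32da17a-…: the DELETE/REMOVE/ADD triple above is the API server replacing the pod object, and RemoveStaleState purges the CPU and memory manager entries keyed by the old UID before the replacement is admitted. A toy illustration of why the purge is UID-keyed (not the cpu_manager's actual data structures; the cpuset value is invented):

```go
package main

import "fmt"

// Resource-manager state is keyed by pod UID + container name, which is why
// the old nova-api-0 (UID 141ca3a5-…) must be purged even though a pod with
// the same name is being re-admitted under a new UID (f32da17a-…).
type key struct{ podUID, container string }

func main() {
	cpuSets := map[key]string{
		{"141ca3a5-7aad-4ab7-b4b7-613a531f696f", "nova-api-api"}: "0-3",
		{"141ca3a5-7aad-4ab7-b4b7-613a531f696f", "nova-api-log"}: "0-3",
	}
	for k := range cpuSets {
		fmt.Printf("RemoveStaleState: removing container %q\n", k.container)
		delete(cpuSets, k) // deleting during range is safe in Go
	}
	fmt.Println(len(cpuSets), "assignments left")
}
```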
\"kubernetes.io/secret/f32da17a-39b3-4e8e-82f5-d786aeb16266-config-data\") pod \"nova-api-0\" (UID: \"f32da17a-39b3-4e8e-82f5-d786aeb16266\") " pod="openstack/nova-api-0" Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.077705 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f32da17a-39b3-4e8e-82f5-d786aeb16266-public-tls-certs\") pod \"nova-api-0\" (UID: \"f32da17a-39b3-4e8e-82f5-d786aeb16266\") " pod="openstack/nova-api-0" Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.077760 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2k7t\" (UniqueName: \"kubernetes.io/projected/f32da17a-39b3-4e8e-82f5-d786aeb16266-kube-api-access-w2k7t\") pod \"nova-api-0\" (UID: \"f32da17a-39b3-4e8e-82f5-d786aeb16266\") " pod="openstack/nova-api-0" Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.150353 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-jxqtz"] Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.151836 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jxqtz" Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.154204 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.159109 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.177848 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-jxqtz"] Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.179279 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f32da17a-39b3-4e8e-82f5-d786aeb16266-logs\") pod \"nova-api-0\" (UID: \"f32da17a-39b3-4e8e-82f5-d786aeb16266\") " pod="openstack/nova-api-0" Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.179307 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f32da17a-39b3-4e8e-82f5-d786aeb16266-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f32da17a-39b3-4e8e-82f5-d786aeb16266\") " pod="openstack/nova-api-0" Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.179350 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f32da17a-39b3-4e8e-82f5-d786aeb16266-config-data\") pod \"nova-api-0\" (UID: \"f32da17a-39b3-4e8e-82f5-d786aeb16266\") " pod="openstack/nova-api-0" Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.179400 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f32da17a-39b3-4e8e-82f5-d786aeb16266-public-tls-certs\") pod \"nova-api-0\" (UID: \"f32da17a-39b3-4e8e-82f5-d786aeb16266\") " pod="openstack/nova-api-0" Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.179437 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2k7t\" (UniqueName: \"kubernetes.io/projected/f32da17a-39b3-4e8e-82f5-d786aeb16266-kube-api-access-w2k7t\") pod \"nova-api-0\" (UID: \"f32da17a-39b3-4e8e-82f5-d786aeb16266\") " pod="openstack/nova-api-0" Dec 09 
12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.179503 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f32da17a-39b3-4e8e-82f5-d786aeb16266-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f32da17a-39b3-4e8e-82f5-d786aeb16266\") " pod="openstack/nova-api-0" Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.179966 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f32da17a-39b3-4e8e-82f5-d786aeb16266-logs\") pod \"nova-api-0\" (UID: \"f32da17a-39b3-4e8e-82f5-d786aeb16266\") " pod="openstack/nova-api-0" Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.183952 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f32da17a-39b3-4e8e-82f5-d786aeb16266-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f32da17a-39b3-4e8e-82f5-d786aeb16266\") " pod="openstack/nova-api-0" Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.184152 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f32da17a-39b3-4e8e-82f5-d786aeb16266-config-data\") pod \"nova-api-0\" (UID: \"f32da17a-39b3-4e8e-82f5-d786aeb16266\") " pod="openstack/nova-api-0" Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.184928 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f32da17a-39b3-4e8e-82f5-d786aeb16266-public-tls-certs\") pod \"nova-api-0\" (UID: \"f32da17a-39b3-4e8e-82f5-d786aeb16266\") " pod="openstack/nova-api-0" Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.186136 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f32da17a-39b3-4e8e-82f5-d786aeb16266-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f32da17a-39b3-4e8e-82f5-d786aeb16266\") " pod="openstack/nova-api-0" Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.202484 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2k7t\" (UniqueName: \"kubernetes.io/projected/f32da17a-39b3-4e8e-82f5-d786aeb16266-kube-api-access-w2k7t\") pod \"nova-api-0\" (UID: \"f32da17a-39b3-4e8e-82f5-d786aeb16266\") " pod="openstack/nova-api-0" Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.281684 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68af4d13-2a15-420f-84b9-a0ebec93ac59-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jxqtz\" (UID: \"68af4d13-2a15-420f-84b9-a0ebec93ac59\") " pod="openstack/nova-cell1-cell-mapping-jxqtz" Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.281740 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2fnr\" (UniqueName: \"kubernetes.io/projected/68af4d13-2a15-420f-84b9-a0ebec93ac59-kube-api-access-s2fnr\") pod \"nova-cell1-cell-mapping-jxqtz\" (UID: \"68af4d13-2a15-420f-84b9-a0ebec93ac59\") " pod="openstack/nova-cell1-cell-mapping-jxqtz" Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.281816 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68af4d13-2a15-420f-84b9-a0ebec93ac59-scripts\") pod \"nova-cell1-cell-mapping-jxqtz\" 
(UID: \"68af4d13-2a15-420f-84b9-a0ebec93ac59\") " pod="openstack/nova-cell1-cell-mapping-jxqtz" Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.281841 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68af4d13-2a15-420f-84b9-a0ebec93ac59-config-data\") pod \"nova-cell1-cell-mapping-jxqtz\" (UID: \"68af4d13-2a15-420f-84b9-a0ebec93ac59\") " pod="openstack/nova-cell1-cell-mapping-jxqtz" Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.305825 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.384184 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68af4d13-2a15-420f-84b9-a0ebec93ac59-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jxqtz\" (UID: \"68af4d13-2a15-420f-84b9-a0ebec93ac59\") " pod="openstack/nova-cell1-cell-mapping-jxqtz" Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.384235 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2fnr\" (UniqueName: \"kubernetes.io/projected/68af4d13-2a15-420f-84b9-a0ebec93ac59-kube-api-access-s2fnr\") pod \"nova-cell1-cell-mapping-jxqtz\" (UID: \"68af4d13-2a15-420f-84b9-a0ebec93ac59\") " pod="openstack/nova-cell1-cell-mapping-jxqtz" Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.384358 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68af4d13-2a15-420f-84b9-a0ebec93ac59-scripts\") pod \"nova-cell1-cell-mapping-jxqtz\" (UID: \"68af4d13-2a15-420f-84b9-a0ebec93ac59\") " pod="openstack/nova-cell1-cell-mapping-jxqtz" Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.384382 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68af4d13-2a15-420f-84b9-a0ebec93ac59-config-data\") pod \"nova-cell1-cell-mapping-jxqtz\" (UID: \"68af4d13-2a15-420f-84b9-a0ebec93ac59\") " pod="openstack/nova-cell1-cell-mapping-jxqtz" Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.390815 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68af4d13-2a15-420f-84b9-a0ebec93ac59-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-jxqtz\" (UID: \"68af4d13-2a15-420f-84b9-a0ebec93ac59\") " pod="openstack/nova-cell1-cell-mapping-jxqtz" Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.393536 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68af4d13-2a15-420f-84b9-a0ebec93ac59-scripts\") pod \"nova-cell1-cell-mapping-jxqtz\" (UID: \"68af4d13-2a15-420f-84b9-a0ebec93ac59\") " pod="openstack/nova-cell1-cell-mapping-jxqtz" Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.397849 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68af4d13-2a15-420f-84b9-a0ebec93ac59-config-data\") pod \"nova-cell1-cell-mapping-jxqtz\" (UID: \"68af4d13-2a15-420f-84b9-a0ebec93ac59\") " pod="openstack/nova-cell1-cell-mapping-jxqtz" Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.403365 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2fnr\" (UniqueName: 
\"kubernetes.io/projected/68af4d13-2a15-420f-84b9-a0ebec93ac59-kube-api-access-s2fnr\") pod \"nova-cell1-cell-mapping-jxqtz\" (UID: \"68af4d13-2a15-420f-84b9-a0ebec93ac59\") " pod="openstack/nova-cell1-cell-mapping-jxqtz" Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.475072 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jxqtz" Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.831139 4970 generic.go:334] "Generic (PLEG): container finished" podID="7e7aef90-339c-401e-bd95-23cbe44d09b0" containerID="27bd8d84a0b30f5abf2ac517c90361c0bfd2dc166b1be289d398dfba072a86f3" exitCode=0 Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.831413 4970 generic.go:334] "Generic (PLEG): container finished" podID="7e7aef90-339c-401e-bd95-23cbe44d09b0" containerID="7893c4594cb7bcc94fdc4667c7f740634c6f1c92fd19ffaa2bff9e67db69cb38" exitCode=2 Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.831423 4970 generic.go:334] "Generic (PLEG): container finished" podID="7e7aef90-339c-401e-bd95-23cbe44d09b0" containerID="54d19a9988f2ee26b7850929934751b4ca15b943d00647dc5400d4fca4bea6fb" exitCode=0 Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.832056 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="141ca3a5-7aad-4ab7-b4b7-613a531f696f" path="/var/lib/kubelet/pods/141ca3a5-7aad-4ab7-b4b7-613a531f696f/volumes" Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.832820 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e7aef90-339c-401e-bd95-23cbe44d09b0","Type":"ContainerDied","Data":"27bd8d84a0b30f5abf2ac517c90361c0bfd2dc166b1be289d398dfba072a86f3"} Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.832858 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e7aef90-339c-401e-bd95-23cbe44d09b0","Type":"ContainerDied","Data":"7893c4594cb7bcc94fdc4667c7f740634c6f1c92fd19ffaa2bff9e67db69cb38"} Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.832875 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e7aef90-339c-401e-bd95-23cbe44d09b0","Type":"ContainerDied","Data":"54d19a9988f2ee26b7850929934751b4ca15b943d00647dc5400d4fca4bea6fb"} Dec 09 12:31:37 crc kubenswrapper[4970]: I1209 12:31:37.883290 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 09 12:31:37 crc kubenswrapper[4970]: W1209 12:31:37.888155 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf32da17a_39b3_4e8e_82f5_d786aeb16266.slice/crio-f1db694c63ce996be9f64ba24d1b13d4758d75abeacfb19b1b210f5beb78aedc WatchSource:0}: Error finding container f1db694c63ce996be9f64ba24d1b13d4758d75abeacfb19b1b210f5beb78aedc: Status 404 returned error can't find the container with id f1db694c63ce996be9f64ba24d1b13d4758d75abeacfb19b1b210f5beb78aedc Dec 09 12:31:38 crc kubenswrapper[4970]: W1209 12:31:38.015434 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68af4d13_2a15_420f_84b9_a0ebec93ac59.slice/crio-a41a7199e70c66597ad3965a53c28714e69e4bc807b7c2b8fe1a828fc965b795 WatchSource:0}: Error finding container a41a7199e70c66597ad3965a53c28714e69e4bc807b7c2b8fe1a828fc965b795: Status 404 returned error can't find the container with id a41a7199e70c66597ad3965a53c28714e69e4bc807b7c2b8fe1a828fc965b795 Dec 09 
12:31:38 crc kubenswrapper[4970]: I1209 12:31:38.021895 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-jxqtz"] Dec 09 12:31:38 crc kubenswrapper[4970]: I1209 12:31:38.845454 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f32da17a-39b3-4e8e-82f5-d786aeb16266","Type":"ContainerStarted","Data":"df27f00daa90137245ad600c88b49219ce2868855b80a4b61f8d8891743089c6"} Dec 09 12:31:38 crc kubenswrapper[4970]: I1209 12:31:38.846311 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f32da17a-39b3-4e8e-82f5-d786aeb16266","Type":"ContainerStarted","Data":"341a63c14d779ef157e64603ee3eeedbbc18ffbe8ecf6092ea1ebd9d9313de6f"} Dec 09 12:31:38 crc kubenswrapper[4970]: I1209 12:31:38.846330 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f32da17a-39b3-4e8e-82f5-d786aeb16266","Type":"ContainerStarted","Data":"f1db694c63ce996be9f64ba24d1b13d4758d75abeacfb19b1b210f5beb78aedc"} Dec 09 12:31:38 crc kubenswrapper[4970]: I1209 12:31:38.847390 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jxqtz" event={"ID":"68af4d13-2a15-420f-84b9-a0ebec93ac59","Type":"ContainerStarted","Data":"05eb626e45b3b8c8b61958e26d5c4aa8693cf6bb7833af2a2e7aa2fb9fc72b29"} Dec 09 12:31:38 crc kubenswrapper[4970]: I1209 12:31:38.847412 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jxqtz" event={"ID":"68af4d13-2a15-420f-84b9-a0ebec93ac59","Type":"ContainerStarted","Data":"a41a7199e70c66597ad3965a53c28714e69e4bc807b7c2b8fe1a828fc965b795"} Dec 09 12:31:38 crc kubenswrapper[4970]: I1209 12:31:38.886279 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.886237166 podStartE2EDuration="2.886237166s" podCreationTimestamp="2025-12-09 12:31:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:31:38.871499384 +0000 UTC m=+1511.431980435" watchObservedRunningTime="2025-12-09 12:31:38.886237166 +0000 UTC m=+1511.446718217" Dec 09 12:31:38 crc kubenswrapper[4970]: I1209 12:31:38.906232 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-jxqtz" podStartSLOduration=1.906206966 podStartE2EDuration="1.906206966s" podCreationTimestamp="2025-12-09 12:31:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:31:38.890679334 +0000 UTC m=+1511.451160395" watchObservedRunningTime="2025-12-09 12:31:38.906206966 +0000 UTC m=+1511.466688037" Dec 09 12:31:39 crc kubenswrapper[4970]: I1209 12:31:39.153304 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7bbf7cf9-z6lgq" Dec 09 12:31:39 crc kubenswrapper[4970]: I1209 12:31:39.226440 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-rg4sn"] Dec 09 12:31:39 crc kubenswrapper[4970]: I1209 12:31:39.226674 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9b86998b5-rg4sn" podUID="d531e31f-a903-40b8-b91f-0579c272cb87" containerName="dnsmasq-dns" containerID="cri-o://cc5397017d026aa15653bab3c0a6d6dfe3d0f4f721a579d13634247136198f1e" gracePeriod=10 Dec 09 12:31:40 crc kubenswrapper[4970]: I1209 
12:31:40.073866 4970 generic.go:334] "Generic (PLEG): container finished" podID="d531e31f-a903-40b8-b91f-0579c272cb87" containerID="cc5397017d026aa15653bab3c0a6d6dfe3d0f4f721a579d13634247136198f1e" exitCode=0 Dec 09 12:31:40 crc kubenswrapper[4970]: I1209 12:31:40.137244 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-rg4sn" event={"ID":"d531e31f-a903-40b8-b91f-0579c272cb87","Type":"ContainerDied","Data":"cc5397017d026aa15653bab3c0a6d6dfe3d0f4f721a579d13634247136198f1e"} Dec 09 12:31:40 crc kubenswrapper[4970]: I1209 12:31:40.363195 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-rg4sn" Dec 09 12:31:40 crc kubenswrapper[4970]: I1209 12:31:40.538362 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zglb2" Dec 09 12:31:40 crc kubenswrapper[4970]: I1209 12:31:40.538414 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zglb2" Dec 09 12:31:40 crc kubenswrapper[4970]: I1209 12:31:40.565333 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d531e31f-a903-40b8-b91f-0579c272cb87-ovsdbserver-sb\") pod \"d531e31f-a903-40b8-b91f-0579c272cb87\" (UID: \"d531e31f-a903-40b8-b91f-0579c272cb87\") " Dec 09 12:31:40 crc kubenswrapper[4970]: I1209 12:31:40.565622 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d531e31f-a903-40b8-b91f-0579c272cb87-dns-svc\") pod \"d531e31f-a903-40b8-b91f-0579c272cb87\" (UID: \"d531e31f-a903-40b8-b91f-0579c272cb87\") " Dec 09 12:31:40 crc kubenswrapper[4970]: I1209 12:31:40.565651 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d531e31f-a903-40b8-b91f-0579c272cb87-ovsdbserver-nb\") pod \"d531e31f-a903-40b8-b91f-0579c272cb87\" (UID: \"d531e31f-a903-40b8-b91f-0579c272cb87\") " Dec 09 12:31:40 crc kubenswrapper[4970]: I1209 12:31:40.565686 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lfq9\" (UniqueName: \"kubernetes.io/projected/d531e31f-a903-40b8-b91f-0579c272cb87-kube-api-access-5lfq9\") pod \"d531e31f-a903-40b8-b91f-0579c272cb87\" (UID: \"d531e31f-a903-40b8-b91f-0579c272cb87\") " Dec 09 12:31:40 crc kubenswrapper[4970]: I1209 12:31:40.565800 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d531e31f-a903-40b8-b91f-0579c272cb87-dns-swift-storage-0\") pod \"d531e31f-a903-40b8-b91f-0579c272cb87\" (UID: \"d531e31f-a903-40b8-b91f-0579c272cb87\") " Dec 09 12:31:40 crc kubenswrapper[4970]: I1209 12:31:40.565873 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d531e31f-a903-40b8-b91f-0579c272cb87-config\") pod \"d531e31f-a903-40b8-b91f-0579c272cb87\" (UID: \"d531e31f-a903-40b8-b91f-0579c272cb87\") " Dec 09 12:31:40 crc kubenswrapper[4970]: I1209 12:31:40.586343 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d531e31f-a903-40b8-b91f-0579c272cb87-kube-api-access-5lfq9" (OuterVolumeSpecName: "kube-api-access-5lfq9") pod "d531e31f-a903-40b8-b91f-0579c272cb87" (UID: 
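Two asides on the entries above. The manager.go warnings ("Failed to process watch event … Status 404") are cAdvisor noticing the new crio-… cgroups before CRI-O has finished registering the containers; the same IDs (f1db694c…, a41a7199…) show up moments later in ContainerStarted events, so the warnings are transient. And the "Killing container with a grace period" entries (gracePeriod=30 for ceilometer, 10 for dnsmasq) follow the standard two-step termination contract: SIGTERM first, SIGKILL once the grace period lapses. A process-level sketch of that contract; the real enforcement lives in the CRI runtime, not in kubelet-side code like this, and the snippet assumes a Unix platform:

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// killWithGrace sends SIGTERM, waits up to grace, then falls back to SIGKILL.
func killWithGrace(cmd *exec.Cmd, grace time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case <-done:
		return nil // exited within the grace period (exit code 143 if SIGTERM was fatal)
	case <-time.After(grace):
		return cmd.Process.Kill() // grace period exhausted: SIGKILL
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	fmt.Println(killWithGrace(cmd, 2*time.Second))
}
```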
"d531e31f-a903-40b8-b91f-0579c272cb87"). InnerVolumeSpecName "kube-api-access-5lfq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:31:40 crc kubenswrapper[4970]: I1209 12:31:40.595949 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zglb2" Dec 09 12:31:40 crc kubenswrapper[4970]: I1209 12:31:40.640240 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d531e31f-a903-40b8-b91f-0579c272cb87-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d531e31f-a903-40b8-b91f-0579c272cb87" (UID: "d531e31f-a903-40b8-b91f-0579c272cb87"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:31:40 crc kubenswrapper[4970]: I1209 12:31:40.650095 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d531e31f-a903-40b8-b91f-0579c272cb87-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d531e31f-a903-40b8-b91f-0579c272cb87" (UID: "d531e31f-a903-40b8-b91f-0579c272cb87"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:31:40 crc kubenswrapper[4970]: I1209 12:31:40.659866 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d531e31f-a903-40b8-b91f-0579c272cb87-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d531e31f-a903-40b8-b91f-0579c272cb87" (UID: "d531e31f-a903-40b8-b91f-0579c272cb87"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:31:40 crc kubenswrapper[4970]: I1209 12:31:40.667123 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d531e31f-a903-40b8-b91f-0579c272cb87-config" (OuterVolumeSpecName: "config") pod "d531e31f-a903-40b8-b91f-0579c272cb87" (UID: "d531e31f-a903-40b8-b91f-0579c272cb87"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:31:40 crc kubenswrapper[4970]: I1209 12:31:40.667236 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d531e31f-a903-40b8-b91f-0579c272cb87-config\") pod \"d531e31f-a903-40b8-b91f-0579c272cb87\" (UID: \"d531e31f-a903-40b8-b91f-0579c272cb87\") " Dec 09 12:31:40 crc kubenswrapper[4970]: W1209 12:31:40.667346 4970 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/d531e31f-a903-40b8-b91f-0579c272cb87/volumes/kubernetes.io~configmap/config Dec 09 12:31:40 crc kubenswrapper[4970]: I1209 12:31:40.667368 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d531e31f-a903-40b8-b91f-0579c272cb87-config" (OuterVolumeSpecName: "config") pod "d531e31f-a903-40b8-b91f-0579c272cb87" (UID: "d531e31f-a903-40b8-b91f-0579c272cb87"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:31:40 crc kubenswrapper[4970]: I1209 12:31:40.668130 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d531e31f-a903-40b8-b91f-0579c272cb87-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:40 crc kubenswrapper[4970]: I1209 12:31:40.668151 4970 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d531e31f-a903-40b8-b91f-0579c272cb87-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:40 crc kubenswrapper[4970]: I1209 12:31:40.668161 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d531e31f-a903-40b8-b91f-0579c272cb87-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:40 crc kubenswrapper[4970]: I1209 12:31:40.668170 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5lfq9\" (UniqueName: \"kubernetes.io/projected/d531e31f-a903-40b8-b91f-0579c272cb87-kube-api-access-5lfq9\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:40 crc kubenswrapper[4970]: I1209 12:31:40.668180 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d531e31f-a903-40b8-b91f-0579c272cb87-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:40 crc kubenswrapper[4970]: I1209 12:31:40.670802 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d531e31f-a903-40b8-b91f-0579c272cb87-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d531e31f-a903-40b8-b91f-0579c272cb87" (UID: "d531e31f-a903-40b8-b91f-0579c272cb87"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:31:40 crc kubenswrapper[4970]: I1209 12:31:40.770307 4970 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d531e31f-a903-40b8-b91f-0579c272cb87-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:40 crc kubenswrapper[4970]: I1209 12:31:40.798142 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:31:40 crc kubenswrapper[4970]: I1209 12:31:40.973080 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e7aef90-339c-401e-bd95-23cbe44d09b0-run-httpd\") pod \"7e7aef90-339c-401e-bd95-23cbe44d09b0\" (UID: \"7e7aef90-339c-401e-bd95-23cbe44d09b0\") " Dec 09 12:31:40 crc kubenswrapper[4970]: I1209 12:31:40.973652 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e7aef90-339c-401e-bd95-23cbe44d09b0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7e7aef90-339c-401e-bd95-23cbe44d09b0" (UID: "7e7aef90-339c-401e-bd95-23cbe44d09b0"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:31:40 crc kubenswrapper[4970]: I1209 12:31:40.973820 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e7aef90-339c-401e-bd95-23cbe44d09b0-combined-ca-bundle\") pod \"7e7aef90-339c-401e-bd95-23cbe44d09b0\" (UID: \"7e7aef90-339c-401e-bd95-23cbe44d09b0\") " Dec 09 12:31:40 crc kubenswrapper[4970]: I1209 12:31:40.974191 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e7aef90-339c-401e-bd95-23cbe44d09b0-config-data\") pod \"7e7aef90-339c-401e-bd95-23cbe44d09b0\" (UID: \"7e7aef90-339c-401e-bd95-23cbe44d09b0\") " Dec 09 12:31:40 crc kubenswrapper[4970]: I1209 12:31:40.974405 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45rdn\" (UniqueName: \"kubernetes.io/projected/7e7aef90-339c-401e-bd95-23cbe44d09b0-kube-api-access-45rdn\") pod \"7e7aef90-339c-401e-bd95-23cbe44d09b0\" (UID: \"7e7aef90-339c-401e-bd95-23cbe44d09b0\") " Dec 09 12:31:40 crc kubenswrapper[4970]: I1209 12:31:40.974506 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e7aef90-339c-401e-bd95-23cbe44d09b0-scripts\") pod \"7e7aef90-339c-401e-bd95-23cbe44d09b0\" (UID: \"7e7aef90-339c-401e-bd95-23cbe44d09b0\") " Dec 09 12:31:40 crc kubenswrapper[4970]: I1209 12:31:40.974698 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7e7aef90-339c-401e-bd95-23cbe44d09b0-sg-core-conf-yaml\") pod \"7e7aef90-339c-401e-bd95-23cbe44d09b0\" (UID: \"7e7aef90-339c-401e-bd95-23cbe44d09b0\") " Dec 09 12:31:40 crc kubenswrapper[4970]: I1209 12:31:40.974732 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e7aef90-339c-401e-bd95-23cbe44d09b0-log-httpd\") pod \"7e7aef90-339c-401e-bd95-23cbe44d09b0\" (UID: \"7e7aef90-339c-401e-bd95-23cbe44d09b0\") " Dec 09 12:31:40 crc kubenswrapper[4970]: I1209 12:31:40.975136 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e7aef90-339c-401e-bd95-23cbe44d09b0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7e7aef90-339c-401e-bd95-23cbe44d09b0" (UID: "7e7aef90-339c-401e-bd95-23cbe44d09b0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:31:40 crc kubenswrapper[4970]: I1209 12:31:40.975324 4970 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e7aef90-339c-401e-bd95-23cbe44d09b0-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:40 crc kubenswrapper[4970]: I1209 12:31:40.975345 4970 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7e7aef90-339c-401e-bd95-23cbe44d09b0-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:40 crc kubenswrapper[4970]: I1209 12:31:40.979799 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e7aef90-339c-401e-bd95-23cbe44d09b0-kube-api-access-45rdn" (OuterVolumeSpecName: "kube-api-access-45rdn") pod "7e7aef90-339c-401e-bd95-23cbe44d09b0" (UID: "7e7aef90-339c-401e-bd95-23cbe44d09b0"). InnerVolumeSpecName "kube-api-access-45rdn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:31:40 crc kubenswrapper[4970]: I1209 12:31:40.981454 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e7aef90-339c-401e-bd95-23cbe44d09b0-scripts" (OuterVolumeSpecName: "scripts") pod "7e7aef90-339c-401e-bd95-23cbe44d09b0" (UID: "7e7aef90-339c-401e-bd95-23cbe44d09b0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.009623 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e7aef90-339c-401e-bd95-23cbe44d09b0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7e7aef90-339c-401e-bd95-23cbe44d09b0" (UID: "7e7aef90-339c-401e-bd95-23cbe44d09b0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.074192 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e7aef90-339c-401e-bd95-23cbe44d09b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e7aef90-339c-401e-bd95-23cbe44d09b0" (UID: "7e7aef90-339c-401e-bd95-23cbe44d09b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.076838 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e7aef90-339c-401e-bd95-23cbe44d09b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.076867 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45rdn\" (UniqueName: \"kubernetes.io/projected/7e7aef90-339c-401e-bd95-23cbe44d09b0-kube-api-access-45rdn\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.076882 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e7aef90-339c-401e-bd95-23cbe44d09b0-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.076895 4970 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7e7aef90-339c-401e-bd95-23cbe44d09b0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.088398 4970 util.go:48] "No ready sandbox for pod can be found. 
Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.088395 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-rg4sn" event={"ID":"d531e31f-a903-40b8-b91f-0579c272cb87","Type":"ContainerDied","Data":"cafbd1623b3e8bf221e8a7bb7d6850170dfd1490fa1e672540321dce481ff806"}
Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.089069 4970 scope.go:117] "RemoveContainer" containerID="cc5397017d026aa15653bab3c0a6d6dfe3d0f4f721a579d13634247136198f1e"
Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.097539 4970 generic.go:334] "Generic (PLEG): container finished" podID="7e7aef90-339c-401e-bd95-23cbe44d09b0" containerID="298882aca20897d15ec32b53ecc2e9b9a5b7c690811e7466064b4697561fb024" exitCode=0
Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.097707 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e7aef90-339c-401e-bd95-23cbe44d09b0","Type":"ContainerDied","Data":"298882aca20897d15ec32b53ecc2e9b9a5b7c690811e7466064b4697561fb024"}
Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.097747 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7e7aef90-339c-401e-bd95-23cbe44d09b0","Type":"ContainerDied","Data":"596de810f5ea01a4a798864d45741fabf3ab293fec13546d661602c5b9f3118b"}
Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.097815 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.104882 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e7aef90-339c-401e-bd95-23cbe44d09b0-config-data" (OuterVolumeSpecName: "config-data") pod "7e7aef90-339c-401e-bd95-23cbe44d09b0" (UID: "7e7aef90-339c-401e-bd95-23cbe44d09b0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.148454 4970 scope.go:117] "RemoveContainer" containerID="a70d18816834334e17601ae6c1f171e30ccf792a075a7a3c0dbdbfa58989b97d"
Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.148872 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zglb2"
Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.152092 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-rg4sn"]
Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.169391 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-rg4sn"]
Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.176656 4970 scope.go:117] "RemoveContainer" containerID="27bd8d84a0b30f5abf2ac517c90361c0bfd2dc166b1be289d398dfba072a86f3"
Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.179388 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e7aef90-339c-401e-bd95-23cbe44d09b0-config-data\") on node \"crc\" DevicePath \"\""
Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.206683 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zglb2"]
Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.211915 4970 scope.go:117] "RemoveContainer" containerID="7893c4594cb7bcc94fdc4667c7f740634c6f1c92fd19ffaa2bff9e67db69cb38"
Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.233068 4970 scope.go:117] "RemoveContainer" containerID="54d19a9988f2ee26b7850929934751b4ca15b943d00647dc5400d4fca4bea6fb"
Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.257482 4970 scope.go:117] "RemoveContainer" containerID="298882aca20897d15ec32b53ecc2e9b9a5b7c690811e7466064b4697561fb024"
Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.282838 4970 scope.go:117] "RemoveContainer" containerID="27bd8d84a0b30f5abf2ac517c90361c0bfd2dc166b1be289d398dfba072a86f3"
Dec 09 12:31:41 crc kubenswrapper[4970]: E1209 12:31:41.283307 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27bd8d84a0b30f5abf2ac517c90361c0bfd2dc166b1be289d398dfba072a86f3\": container with ID starting with 27bd8d84a0b30f5abf2ac517c90361c0bfd2dc166b1be289d398dfba072a86f3 not found: ID does not exist" containerID="27bd8d84a0b30f5abf2ac517c90361c0bfd2dc166b1be289d398dfba072a86f3"
Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.283359 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27bd8d84a0b30f5abf2ac517c90361c0bfd2dc166b1be289d398dfba072a86f3"} err="failed to get container status \"27bd8d84a0b30f5abf2ac517c90361c0bfd2dc166b1be289d398dfba072a86f3\": rpc error: code = NotFound desc = could not find container \"27bd8d84a0b30f5abf2ac517c90361c0bfd2dc166b1be289d398dfba072a86f3\": container with ID starting with 27bd8d84a0b30f5abf2ac517c90361c0bfd2dc166b1be289d398dfba072a86f3 not found: ID does not exist"
Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.283389 4970 scope.go:117] "RemoveContainer" containerID="7893c4594cb7bcc94fdc4667c7f740634c6f1c92fd19ffaa2bff9e67db69cb38"
Dec 09 12:31:41 crc kubenswrapper[4970]: E1209 12:31:41.283730 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7893c4594cb7bcc94fdc4667c7f740634c6f1c92fd19ffaa2bff9e67db69cb38\": container with ID starting with 7893c4594cb7bcc94fdc4667c7f740634c6f1c92fd19ffaa2bff9e67db69cb38 not found: ID does not exist" containerID="7893c4594cb7bcc94fdc4667c7f740634c6f1c92fd19ffaa2bff9e67db69cb38"
Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.283770 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7893c4594cb7bcc94fdc4667c7f740634c6f1c92fd19ffaa2bff9e67db69cb38"} err="failed to get container status \"7893c4594cb7bcc94fdc4667c7f740634c6f1c92fd19ffaa2bff9e67db69cb38\": rpc error: code = NotFound desc = could not find container \"7893c4594cb7bcc94fdc4667c7f740634c6f1c92fd19ffaa2bff9e67db69cb38\": container with ID starting with 7893c4594cb7bcc94fdc4667c7f740634c6f1c92fd19ffaa2bff9e67db69cb38 not found: ID does not exist"
Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.283790 4970 scope.go:117] "RemoveContainer" containerID="54d19a9988f2ee26b7850929934751b4ca15b943d00647dc5400d4fca4bea6fb"
Dec 09 12:31:41 crc kubenswrapper[4970]: E1209 12:31:41.284025 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54d19a9988f2ee26b7850929934751b4ca15b943d00647dc5400d4fca4bea6fb\": container with ID starting with 54d19a9988f2ee26b7850929934751b4ca15b943d00647dc5400d4fca4bea6fb not found: ID does not exist" containerID="54d19a9988f2ee26b7850929934751b4ca15b943d00647dc5400d4fca4bea6fb"
Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.284054 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54d19a9988f2ee26b7850929934751b4ca15b943d00647dc5400d4fca4bea6fb"} err="failed to get container status \"54d19a9988f2ee26b7850929934751b4ca15b943d00647dc5400d4fca4bea6fb\": rpc error: code = NotFound desc = could not find container \"54d19a9988f2ee26b7850929934751b4ca15b943d00647dc5400d4fca4bea6fb\": container with ID starting with 54d19a9988f2ee26b7850929934751b4ca15b943d00647dc5400d4fca4bea6fb not found: ID does not exist"
Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.284074 4970 scope.go:117] "RemoveContainer" containerID="298882aca20897d15ec32b53ecc2e9b9a5b7c690811e7466064b4697561fb024"
Dec 09 12:31:41 crc kubenswrapper[4970]: E1209 12:31:41.284305 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"298882aca20897d15ec32b53ecc2e9b9a5b7c690811e7466064b4697561fb024\": container with ID starting with 298882aca20897d15ec32b53ecc2e9b9a5b7c690811e7466064b4697561fb024 not found: ID does not exist" containerID="298882aca20897d15ec32b53ecc2e9b9a5b7c690811e7466064b4697561fb024"
Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.284334 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"298882aca20897d15ec32b53ecc2e9b9a5b7c690811e7466064b4697561fb024"} err="failed to get container status \"298882aca20897d15ec32b53ecc2e9b9a5b7c690811e7466064b4697561fb024\": rpc error: code = NotFound desc = could not find container \"298882aca20897d15ec32b53ecc2e9b9a5b7c690811e7466064b4697561fb024\": container with ID starting with 298882aca20897d15ec32b53ecc2e9b9a5b7c690811e7466064b4697561fb024 not found: ID does not exist"
Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.442025 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.456616
4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.492695 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:31:41 crc kubenswrapper[4970]: E1209 12:31:41.493385 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e7aef90-339c-401e-bd95-23cbe44d09b0" containerName="ceilometer-notification-agent" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.493406 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e7aef90-339c-401e-bd95-23cbe44d09b0" containerName="ceilometer-notification-agent" Dec 09 12:31:41 crc kubenswrapper[4970]: E1209 12:31:41.493425 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e7aef90-339c-401e-bd95-23cbe44d09b0" containerName="proxy-httpd" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.493432 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e7aef90-339c-401e-bd95-23cbe44d09b0" containerName="proxy-httpd" Dec 09 12:31:41 crc kubenswrapper[4970]: E1209 12:31:41.493447 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e7aef90-339c-401e-bd95-23cbe44d09b0" containerName="ceilometer-central-agent" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.493455 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e7aef90-339c-401e-bd95-23cbe44d09b0" containerName="ceilometer-central-agent" Dec 09 12:31:41 crc kubenswrapper[4970]: E1209 12:31:41.493481 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d531e31f-a903-40b8-b91f-0579c272cb87" containerName="dnsmasq-dns" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.493490 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="d531e31f-a903-40b8-b91f-0579c272cb87" containerName="dnsmasq-dns" Dec 09 12:31:41 crc kubenswrapper[4970]: E1209 12:31:41.493500 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e7aef90-339c-401e-bd95-23cbe44d09b0" containerName="sg-core" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.493508 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e7aef90-339c-401e-bd95-23cbe44d09b0" containerName="sg-core" Dec 09 12:31:41 crc kubenswrapper[4970]: E1209 12:31:41.493533 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d531e31f-a903-40b8-b91f-0579c272cb87" containerName="init" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.493541 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="d531e31f-a903-40b8-b91f-0579c272cb87" containerName="init" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.493849 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e7aef90-339c-401e-bd95-23cbe44d09b0" containerName="sg-core" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.493875 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e7aef90-339c-401e-bd95-23cbe44d09b0" containerName="ceilometer-notification-agent" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.493904 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="d531e31f-a903-40b8-b91f-0579c272cb87" containerName="dnsmasq-dns" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.493919 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e7aef90-339c-401e-bd95-23cbe44d09b0" containerName="proxy-httpd" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.493937 4970 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7e7aef90-339c-401e-bd95-23cbe44d09b0" containerName="ceilometer-central-agent" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.496702 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.500800 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.501663 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.560849 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.592152 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59dx8\" (UniqueName: \"kubernetes.io/projected/ef6fc532-3039-4d07-9c30-c4466de25b41-kube-api-access-59dx8\") pod \"ceilometer-0\" (UID: \"ef6fc532-3039-4d07-9c30-c4466de25b41\") " pod="openstack/ceilometer-0" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.592277 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef6fc532-3039-4d07-9c30-c4466de25b41-log-httpd\") pod \"ceilometer-0\" (UID: \"ef6fc532-3039-4d07-9c30-c4466de25b41\") " pod="openstack/ceilometer-0" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.592314 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef6fc532-3039-4d07-9c30-c4466de25b41-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ef6fc532-3039-4d07-9c30-c4466de25b41\") " pod="openstack/ceilometer-0" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.592458 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef6fc532-3039-4d07-9c30-c4466de25b41-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ef6fc532-3039-4d07-9c30-c4466de25b41\") " pod="openstack/ceilometer-0" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.592482 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef6fc532-3039-4d07-9c30-c4466de25b41-config-data\") pod \"ceilometer-0\" (UID: \"ef6fc532-3039-4d07-9c30-c4466de25b41\") " pod="openstack/ceilometer-0" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.592581 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef6fc532-3039-4d07-9c30-c4466de25b41-run-httpd\") pod \"ceilometer-0\" (UID: \"ef6fc532-3039-4d07-9c30-c4466de25b41\") " pod="openstack/ceilometer-0" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.592627 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef6fc532-3039-4d07-9c30-c4466de25b41-scripts\") pod \"ceilometer-0\" (UID: \"ef6fc532-3039-4d07-9c30-c4466de25b41\") " pod="openstack/ceilometer-0" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.694916 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59dx8\" (UniqueName: 
\"kubernetes.io/projected/ef6fc532-3039-4d07-9c30-c4466de25b41-kube-api-access-59dx8\") pod \"ceilometer-0\" (UID: \"ef6fc532-3039-4d07-9c30-c4466de25b41\") " pod="openstack/ceilometer-0" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.695064 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef6fc532-3039-4d07-9c30-c4466de25b41-log-httpd\") pod \"ceilometer-0\" (UID: \"ef6fc532-3039-4d07-9c30-c4466de25b41\") " pod="openstack/ceilometer-0" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.695104 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef6fc532-3039-4d07-9c30-c4466de25b41-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ef6fc532-3039-4d07-9c30-c4466de25b41\") " pod="openstack/ceilometer-0" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.695210 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef6fc532-3039-4d07-9c30-c4466de25b41-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ef6fc532-3039-4d07-9c30-c4466de25b41\") " pod="openstack/ceilometer-0" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.695245 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef6fc532-3039-4d07-9c30-c4466de25b41-config-data\") pod \"ceilometer-0\" (UID: \"ef6fc532-3039-4d07-9c30-c4466de25b41\") " pod="openstack/ceilometer-0" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.695337 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef6fc532-3039-4d07-9c30-c4466de25b41-run-httpd\") pod \"ceilometer-0\" (UID: \"ef6fc532-3039-4d07-9c30-c4466de25b41\") " pod="openstack/ceilometer-0" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.695765 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef6fc532-3039-4d07-9c30-c4466de25b41-log-httpd\") pod \"ceilometer-0\" (UID: \"ef6fc532-3039-4d07-9c30-c4466de25b41\") " pod="openstack/ceilometer-0" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.695883 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef6fc532-3039-4d07-9c30-c4466de25b41-run-httpd\") pod \"ceilometer-0\" (UID: \"ef6fc532-3039-4d07-9c30-c4466de25b41\") " pod="openstack/ceilometer-0" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.695388 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef6fc532-3039-4d07-9c30-c4466de25b41-scripts\") pod \"ceilometer-0\" (UID: \"ef6fc532-3039-4d07-9c30-c4466de25b41\") " pod="openstack/ceilometer-0" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.700741 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef6fc532-3039-4d07-9c30-c4466de25b41-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ef6fc532-3039-4d07-9c30-c4466de25b41\") " pod="openstack/ceilometer-0" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.700963 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef6fc532-3039-4d07-9c30-c4466de25b41-scripts\") 
pod \"ceilometer-0\" (UID: \"ef6fc532-3039-4d07-9c30-c4466de25b41\") " pod="openstack/ceilometer-0" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.701166 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef6fc532-3039-4d07-9c30-c4466de25b41-config-data\") pod \"ceilometer-0\" (UID: \"ef6fc532-3039-4d07-9c30-c4466de25b41\") " pod="openstack/ceilometer-0" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.705063 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef6fc532-3039-4d07-9c30-c4466de25b41-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ef6fc532-3039-4d07-9c30-c4466de25b41\") " pod="openstack/ceilometer-0" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.712823 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59dx8\" (UniqueName: \"kubernetes.io/projected/ef6fc532-3039-4d07-9c30-c4466de25b41-kube-api-access-59dx8\") pod \"ceilometer-0\" (UID: \"ef6fc532-3039-4d07-9c30-c4466de25b41\") " pod="openstack/ceilometer-0" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.832403 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e7aef90-339c-401e-bd95-23cbe44d09b0" path="/var/lib/kubelet/pods/7e7aef90-339c-401e-bd95-23cbe44d09b0/volumes" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.833511 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d531e31f-a903-40b8-b91f-0579c272cb87" path="/var/lib/kubelet/pods/d531e31f-a903-40b8-b91f-0579c272cb87/volumes" Dec 09 12:31:41 crc kubenswrapper[4970]: I1209 12:31:41.842951 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:31:42 crc kubenswrapper[4970]: W1209 12:31:42.330090 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef6fc532_3039_4d07_9c30_c4466de25b41.slice/crio-66fdd374ad9774c1a74dec6b04ee312d5d1a6faf8dda4f63dddf92a560859331 WatchSource:0}: Error finding container 66fdd374ad9774c1a74dec6b04ee312d5d1a6faf8dda4f63dddf92a560859331: Status 404 returned error can't find the container with id 66fdd374ad9774c1a74dec6b04ee312d5d1a6faf8dda4f63dddf92a560859331 Dec 09 12:31:42 crc kubenswrapper[4970]: I1209 12:31:42.333957 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:31:43 crc kubenswrapper[4970]: I1209 12:31:43.139635 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef6fc532-3039-4d07-9c30-c4466de25b41","Type":"ContainerStarted","Data":"66fdd374ad9774c1a74dec6b04ee312d5d1a6faf8dda4f63dddf92a560859331"} Dec 09 12:31:43 crc kubenswrapper[4970]: I1209 12:31:43.139681 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zglb2" podUID="c48899b9-165a-4054-a1eb-47b69a0fc3c2" containerName="registry-server" containerID="cri-o://7525c0cce1a52bcb451fd120d909322332dee376ef182183be62b40aa577b73d" gracePeriod=2 Dec 09 12:31:43 crc kubenswrapper[4970]: I1209 12:31:43.747911 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zglb2" Dec 09 12:31:43 crc kubenswrapper[4970]: I1209 12:31:43.944418 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c48899b9-165a-4054-a1eb-47b69a0fc3c2-catalog-content\") pod \"c48899b9-165a-4054-a1eb-47b69a0fc3c2\" (UID: \"c48899b9-165a-4054-a1eb-47b69a0fc3c2\") " Dec 09 12:31:43 crc kubenswrapper[4970]: I1209 12:31:43.944590 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c45bz\" (UniqueName: \"kubernetes.io/projected/c48899b9-165a-4054-a1eb-47b69a0fc3c2-kube-api-access-c45bz\") pod \"c48899b9-165a-4054-a1eb-47b69a0fc3c2\" (UID: \"c48899b9-165a-4054-a1eb-47b69a0fc3c2\") " Dec 09 12:31:43 crc kubenswrapper[4970]: I1209 12:31:43.944770 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c48899b9-165a-4054-a1eb-47b69a0fc3c2-utilities\") pod \"c48899b9-165a-4054-a1eb-47b69a0fc3c2\" (UID: \"c48899b9-165a-4054-a1eb-47b69a0fc3c2\") " Dec 09 12:31:43 crc kubenswrapper[4970]: I1209 12:31:43.945315 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c48899b9-165a-4054-a1eb-47b69a0fc3c2-utilities" (OuterVolumeSpecName: "utilities") pod "c48899b9-165a-4054-a1eb-47b69a0fc3c2" (UID: "c48899b9-165a-4054-a1eb-47b69a0fc3c2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:31:43 crc kubenswrapper[4970]: I1209 12:31:43.945463 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c48899b9-165a-4054-a1eb-47b69a0fc3c2-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:43 crc kubenswrapper[4970]: I1209 12:31:43.948355 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c48899b9-165a-4054-a1eb-47b69a0fc3c2-kube-api-access-c45bz" (OuterVolumeSpecName: "kube-api-access-c45bz") pod "c48899b9-165a-4054-a1eb-47b69a0fc3c2" (UID: "c48899b9-165a-4054-a1eb-47b69a0fc3c2"). InnerVolumeSpecName "kube-api-access-c45bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:31:43 crc kubenswrapper[4970]: I1209 12:31:43.973210 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c48899b9-165a-4054-a1eb-47b69a0fc3c2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c48899b9-165a-4054-a1eb-47b69a0fc3c2" (UID: "c48899b9-165a-4054-a1eb-47b69a0fc3c2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:31:44 crc kubenswrapper[4970]: I1209 12:31:44.047831 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c48899b9-165a-4054-a1eb-47b69a0fc3c2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:44 crc kubenswrapper[4970]: I1209 12:31:44.048042 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c45bz\" (UniqueName: \"kubernetes.io/projected/c48899b9-165a-4054-a1eb-47b69a0fc3c2-kube-api-access-c45bz\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:44 crc kubenswrapper[4970]: I1209 12:31:44.153450 4970 generic.go:334] "Generic (PLEG): container finished" podID="c48899b9-165a-4054-a1eb-47b69a0fc3c2" containerID="7525c0cce1a52bcb451fd120d909322332dee376ef182183be62b40aa577b73d" exitCode=0 Dec 09 12:31:44 crc kubenswrapper[4970]: I1209 12:31:44.153541 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zglb2" event={"ID":"c48899b9-165a-4054-a1eb-47b69a0fc3c2","Type":"ContainerDied","Data":"7525c0cce1a52bcb451fd120d909322332dee376ef182183be62b40aa577b73d"} Dec 09 12:31:44 crc kubenswrapper[4970]: I1209 12:31:44.153863 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zglb2" event={"ID":"c48899b9-165a-4054-a1eb-47b69a0fc3c2","Type":"ContainerDied","Data":"a33801614c9e6086c5918d4b4eeb8ee481cc5c559b48a67f902b2acdddf73470"} Dec 09 12:31:44 crc kubenswrapper[4970]: I1209 12:31:44.153542 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zglb2" Dec 09 12:31:44 crc kubenswrapper[4970]: I1209 12:31:44.153884 4970 scope.go:117] "RemoveContainer" containerID="7525c0cce1a52bcb451fd120d909322332dee376ef182183be62b40aa577b73d" Dec 09 12:31:44 crc kubenswrapper[4970]: I1209 12:31:44.158055 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef6fc532-3039-4d07-9c30-c4466de25b41","Type":"ContainerStarted","Data":"35700f5e0591bcf4491ae3ed28499f0cca93c3bfa6b73e0a50706bd68eb02080"} Dec 09 12:31:44 crc kubenswrapper[4970]: I1209 12:31:44.158100 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef6fc532-3039-4d07-9c30-c4466de25b41","Type":"ContainerStarted","Data":"1a09ca83293350dbbb6948ab9b2205a7b7f94793cfa6a3792656860bc8345bb9"} Dec 09 12:31:44 crc kubenswrapper[4970]: I1209 12:31:44.160380 4970 generic.go:334] "Generic (PLEG): container finished" podID="68af4d13-2a15-420f-84b9-a0ebec93ac59" containerID="05eb626e45b3b8c8b61958e26d5c4aa8693cf6bb7833af2a2e7aa2fb9fc72b29" exitCode=0 Dec 09 12:31:44 crc kubenswrapper[4970]: I1209 12:31:44.161336 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jxqtz" event={"ID":"68af4d13-2a15-420f-84b9-a0ebec93ac59","Type":"ContainerDied","Data":"05eb626e45b3b8c8b61958e26d5c4aa8693cf6bb7833af2a2e7aa2fb9fc72b29"} Dec 09 12:31:44 crc kubenswrapper[4970]: I1209 12:31:44.184700 4970 scope.go:117] "RemoveContainer" containerID="6bf317f4a1054e3f00e07e4ce5351a673cbab84c89b1cd9a47128eb2ee4d84d3" Dec 09 12:31:44 crc kubenswrapper[4970]: I1209 12:31:44.206558 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zglb2"] Dec 09 12:31:44 crc kubenswrapper[4970]: I1209 12:31:44.216215 4970 scope.go:117] "RemoveContainer" 
containerID="5021a154484b861476a1ba411f5046455b548df7cd6d082d925debb7508577af" Dec 09 12:31:44 crc kubenswrapper[4970]: I1209 12:31:44.219625 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zglb2"] Dec 09 12:31:44 crc kubenswrapper[4970]: I1209 12:31:44.239039 4970 scope.go:117] "RemoveContainer" containerID="7525c0cce1a52bcb451fd120d909322332dee376ef182183be62b40aa577b73d" Dec 09 12:31:44 crc kubenswrapper[4970]: E1209 12:31:44.239578 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7525c0cce1a52bcb451fd120d909322332dee376ef182183be62b40aa577b73d\": container with ID starting with 7525c0cce1a52bcb451fd120d909322332dee376ef182183be62b40aa577b73d not found: ID does not exist" containerID="7525c0cce1a52bcb451fd120d909322332dee376ef182183be62b40aa577b73d" Dec 09 12:31:44 crc kubenswrapper[4970]: I1209 12:31:44.239612 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7525c0cce1a52bcb451fd120d909322332dee376ef182183be62b40aa577b73d"} err="failed to get container status \"7525c0cce1a52bcb451fd120d909322332dee376ef182183be62b40aa577b73d\": rpc error: code = NotFound desc = could not find container \"7525c0cce1a52bcb451fd120d909322332dee376ef182183be62b40aa577b73d\": container with ID starting with 7525c0cce1a52bcb451fd120d909322332dee376ef182183be62b40aa577b73d not found: ID does not exist" Dec 09 12:31:44 crc kubenswrapper[4970]: I1209 12:31:44.239634 4970 scope.go:117] "RemoveContainer" containerID="6bf317f4a1054e3f00e07e4ce5351a673cbab84c89b1cd9a47128eb2ee4d84d3" Dec 09 12:31:44 crc kubenswrapper[4970]: E1209 12:31:44.239935 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bf317f4a1054e3f00e07e4ce5351a673cbab84c89b1cd9a47128eb2ee4d84d3\": container with ID starting with 6bf317f4a1054e3f00e07e4ce5351a673cbab84c89b1cd9a47128eb2ee4d84d3 not found: ID does not exist" containerID="6bf317f4a1054e3f00e07e4ce5351a673cbab84c89b1cd9a47128eb2ee4d84d3" Dec 09 12:31:44 crc kubenswrapper[4970]: I1209 12:31:44.239957 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bf317f4a1054e3f00e07e4ce5351a673cbab84c89b1cd9a47128eb2ee4d84d3"} err="failed to get container status \"6bf317f4a1054e3f00e07e4ce5351a673cbab84c89b1cd9a47128eb2ee4d84d3\": rpc error: code = NotFound desc = could not find container \"6bf317f4a1054e3f00e07e4ce5351a673cbab84c89b1cd9a47128eb2ee4d84d3\": container with ID starting with 6bf317f4a1054e3f00e07e4ce5351a673cbab84c89b1cd9a47128eb2ee4d84d3 not found: ID does not exist" Dec 09 12:31:44 crc kubenswrapper[4970]: I1209 12:31:44.239970 4970 scope.go:117] "RemoveContainer" containerID="5021a154484b861476a1ba411f5046455b548df7cd6d082d925debb7508577af" Dec 09 12:31:44 crc kubenswrapper[4970]: E1209 12:31:44.240352 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5021a154484b861476a1ba411f5046455b548df7cd6d082d925debb7508577af\": container with ID starting with 5021a154484b861476a1ba411f5046455b548df7cd6d082d925debb7508577af not found: ID does not exist" containerID="5021a154484b861476a1ba411f5046455b548df7cd6d082d925debb7508577af" Dec 09 12:31:44 crc kubenswrapper[4970]: I1209 12:31:44.240383 4970 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5021a154484b861476a1ba411f5046455b548df7cd6d082d925debb7508577af"} err="failed to get container status \"5021a154484b861476a1ba411f5046455b548df7cd6d082d925debb7508577af\": rpc error: code = NotFound desc = could not find container \"5021a154484b861476a1ba411f5046455b548df7cd6d082d925debb7508577af\": container with ID starting with 5021a154484b861476a1ba411f5046455b548df7cd6d082d925debb7508577af not found: ID does not exist" Dec 09 12:31:45 crc kubenswrapper[4970]: I1209 12:31:45.178165 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef6fc532-3039-4d07-9c30-c4466de25b41","Type":"ContainerStarted","Data":"2d4a3b614a59ae19278260adb6c8af8d6848f342e7a3bfe1ac9fcacf07604d0d"} Dec 09 12:31:45 crc kubenswrapper[4970]: I1209 12:31:45.653557 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jxqtz" Dec 09 12:31:45 crc kubenswrapper[4970]: I1209 12:31:45.803798 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68af4d13-2a15-420f-84b9-a0ebec93ac59-combined-ca-bundle\") pod \"68af4d13-2a15-420f-84b9-a0ebec93ac59\" (UID: \"68af4d13-2a15-420f-84b9-a0ebec93ac59\") " Dec 09 12:31:45 crc kubenswrapper[4970]: I1209 12:31:45.804001 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68af4d13-2a15-420f-84b9-a0ebec93ac59-config-data\") pod \"68af4d13-2a15-420f-84b9-a0ebec93ac59\" (UID: \"68af4d13-2a15-420f-84b9-a0ebec93ac59\") " Dec 09 12:31:45 crc kubenswrapper[4970]: I1209 12:31:45.804046 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2fnr\" (UniqueName: \"kubernetes.io/projected/68af4d13-2a15-420f-84b9-a0ebec93ac59-kube-api-access-s2fnr\") pod \"68af4d13-2a15-420f-84b9-a0ebec93ac59\" (UID: \"68af4d13-2a15-420f-84b9-a0ebec93ac59\") " Dec 09 12:31:45 crc kubenswrapper[4970]: I1209 12:31:45.804208 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68af4d13-2a15-420f-84b9-a0ebec93ac59-scripts\") pod \"68af4d13-2a15-420f-84b9-a0ebec93ac59\" (UID: \"68af4d13-2a15-420f-84b9-a0ebec93ac59\") " Dec 09 12:31:45 crc kubenswrapper[4970]: I1209 12:31:45.820391 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68af4d13-2a15-420f-84b9-a0ebec93ac59-scripts" (OuterVolumeSpecName: "scripts") pod "68af4d13-2a15-420f-84b9-a0ebec93ac59" (UID: "68af4d13-2a15-420f-84b9-a0ebec93ac59"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:45 crc kubenswrapper[4970]: I1209 12:31:45.824632 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68af4d13-2a15-420f-84b9-a0ebec93ac59-kube-api-access-s2fnr" (OuterVolumeSpecName: "kube-api-access-s2fnr") pod "68af4d13-2a15-420f-84b9-a0ebec93ac59" (UID: "68af4d13-2a15-420f-84b9-a0ebec93ac59"). InnerVolumeSpecName "kube-api-access-s2fnr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:31:45 crc kubenswrapper[4970]: I1209 12:31:45.840173 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68af4d13-2a15-420f-84b9-a0ebec93ac59-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "68af4d13-2a15-420f-84b9-a0ebec93ac59" (UID: "68af4d13-2a15-420f-84b9-a0ebec93ac59"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:45 crc kubenswrapper[4970]: I1209 12:31:45.843004 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c48899b9-165a-4054-a1eb-47b69a0fc3c2" path="/var/lib/kubelet/pods/c48899b9-165a-4054-a1eb-47b69a0fc3c2/volumes" Dec 09 12:31:45 crc kubenswrapper[4970]: I1209 12:31:45.845086 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68af4d13-2a15-420f-84b9-a0ebec93ac59-config-data" (OuterVolumeSpecName: "config-data") pod "68af4d13-2a15-420f-84b9-a0ebec93ac59" (UID: "68af4d13-2a15-420f-84b9-a0ebec93ac59"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:45 crc kubenswrapper[4970]: I1209 12:31:45.908545 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68af4d13-2a15-420f-84b9-a0ebec93ac59-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:45 crc kubenswrapper[4970]: I1209 12:31:45.908585 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68af4d13-2a15-420f-84b9-a0ebec93ac59-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:45 crc kubenswrapper[4970]: I1209 12:31:45.908599 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2fnr\" (UniqueName: \"kubernetes.io/projected/68af4d13-2a15-420f-84b9-a0ebec93ac59-kube-api-access-s2fnr\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:45 crc kubenswrapper[4970]: I1209 12:31:45.908610 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/68af4d13-2a15-420f-84b9-a0ebec93ac59-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:46 crc kubenswrapper[4970]: I1209 12:31:46.010458 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:31:46 crc kubenswrapper[4970]: I1209 12:31:46.010505 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:31:46 crc kubenswrapper[4970]: I1209 12:31:46.201887 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-jxqtz" event={"ID":"68af4d13-2a15-420f-84b9-a0ebec93ac59","Type":"ContainerDied","Data":"a41a7199e70c66597ad3965a53c28714e69e4bc807b7c2b8fe1a828fc965b795"} Dec 09 12:31:46 crc kubenswrapper[4970]: I1209 12:31:46.201934 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a41a7199e70c66597ad3965a53c28714e69e4bc807b7c2b8fe1a828fc965b795" Dec 09 12:31:46 crc 
kubenswrapper[4970]: I1209 12:31:46.201943 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-jxqtz" Dec 09 12:31:46 crc kubenswrapper[4970]: I1209 12:31:46.372129 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 09 12:31:46 crc kubenswrapper[4970]: I1209 12:31:46.372493 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f32da17a-39b3-4e8e-82f5-d786aeb16266" containerName="nova-api-log" containerID="cri-o://341a63c14d779ef157e64603ee3eeedbbc18ffbe8ecf6092ea1ebd9d9313de6f" gracePeriod=30 Dec 09 12:31:46 crc kubenswrapper[4970]: I1209 12:31:46.372556 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f32da17a-39b3-4e8e-82f5-d786aeb16266" containerName="nova-api-api" containerID="cri-o://df27f00daa90137245ad600c88b49219ce2868855b80a4b61f8d8891743089c6" gracePeriod=30 Dec 09 12:31:46 crc kubenswrapper[4970]: I1209 12:31:46.400412 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 12:31:46 crc kubenswrapper[4970]: I1209 12:31:46.400705 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="8e215775-0586-4a94-a8b8-faae5dcf279b" containerName="nova-scheduler-scheduler" containerID="cri-o://e005657ebb8c94bb91f81321483dcb5d4bbf04364ec571a7880c9cabd3ac483f" gracePeriod=30 Dec 09 12:31:46 crc kubenswrapper[4970]: I1209 12:31:46.423727 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 12:31:46 crc kubenswrapper[4970]: I1209 12:31:46.423943 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="da82b114-dc90-4454-9b42-711065681a68" containerName="nova-metadata-log" containerID="cri-o://58bbf1ae8555ee8d88a6accbd38fca4c0ea041cde6a7281889e8459dada5760a" gracePeriod=30 Dec 09 12:31:46 crc kubenswrapper[4970]: I1209 12:31:46.424429 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="da82b114-dc90-4454-9b42-711065681a68" containerName="nova-metadata-metadata" containerID="cri-o://9ac84822d908b06a59f3d9ae48c695ab4306dd4b4481fe94881998e10d01b8ae" gracePeriod=30 Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.115142 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.213165 4970 generic.go:334] "Generic (PLEG): container finished" podID="f32da17a-39b3-4e8e-82f5-d786aeb16266" containerID="df27f00daa90137245ad600c88b49219ce2868855b80a4b61f8d8891743089c6" exitCode=0 Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.213192 4970 generic.go:334] "Generic (PLEG): container finished" podID="f32da17a-39b3-4e8e-82f5-d786aeb16266" containerID="341a63c14d779ef157e64603ee3eeedbbc18ffbe8ecf6092ea1ebd9d9313de6f" exitCode=143 Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.213235 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.213240 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f32da17a-39b3-4e8e-82f5-d786aeb16266","Type":"ContainerDied","Data":"df27f00daa90137245ad600c88b49219ce2868855b80a4b61f8d8891743089c6"} Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.213359 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f32da17a-39b3-4e8e-82f5-d786aeb16266","Type":"ContainerDied","Data":"341a63c14d779ef157e64603ee3eeedbbc18ffbe8ecf6092ea1ebd9d9313de6f"} Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.213372 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f32da17a-39b3-4e8e-82f5-d786aeb16266","Type":"ContainerDied","Data":"f1db694c63ce996be9f64ba24d1b13d4758d75abeacfb19b1b210f5beb78aedc"} Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.213390 4970 scope.go:117] "RemoveContainer" containerID="df27f00daa90137245ad600c88b49219ce2868855b80a4b61f8d8891743089c6" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.220562 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef6fc532-3039-4d07-9c30-c4466de25b41","Type":"ContainerStarted","Data":"0eb9c69588661801a3a4c37765ea266629d753154e8f5ad224a582e439e06991"} Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.220783 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.224589 4970 generic.go:334] "Generic (PLEG): container finished" podID="da82b114-dc90-4454-9b42-711065681a68" containerID="58bbf1ae8555ee8d88a6accbd38fca4c0ea041cde6a7281889e8459dada5760a" exitCode=143 Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.224620 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"da82b114-dc90-4454-9b42-711065681a68","Type":"ContainerDied","Data":"58bbf1ae8555ee8d88a6accbd38fca4c0ea041cde6a7281889e8459dada5760a"} Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.238897 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f32da17a-39b3-4e8e-82f5-d786aeb16266-public-tls-certs\") pod \"f32da17a-39b3-4e8e-82f5-d786aeb16266\" (UID: \"f32da17a-39b3-4e8e-82f5-d786aeb16266\") " Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.238986 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f32da17a-39b3-4e8e-82f5-d786aeb16266-logs\") pod \"f32da17a-39b3-4e8e-82f5-d786aeb16266\" (UID: \"f32da17a-39b3-4e8e-82f5-d786aeb16266\") " Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.239206 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f32da17a-39b3-4e8e-82f5-d786aeb16266-config-data\") pod \"f32da17a-39b3-4e8e-82f5-d786aeb16266\" (UID: \"f32da17a-39b3-4e8e-82f5-d786aeb16266\") " Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.239273 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f32da17a-39b3-4e8e-82f5-d786aeb16266-internal-tls-certs\") pod \"f32da17a-39b3-4e8e-82f5-d786aeb16266\" (UID: \"f32da17a-39b3-4e8e-82f5-d786aeb16266\") " Dec 09 12:31:47 crc 
kubenswrapper[4970]: I1209 12:31:47.239298 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f32da17a-39b3-4e8e-82f5-d786aeb16266-logs" (OuterVolumeSpecName: "logs") pod "f32da17a-39b3-4e8e-82f5-d786aeb16266" (UID: "f32da17a-39b3-4e8e-82f5-d786aeb16266"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.239315 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2k7t\" (UniqueName: \"kubernetes.io/projected/f32da17a-39b3-4e8e-82f5-d786aeb16266-kube-api-access-w2k7t\") pod \"f32da17a-39b3-4e8e-82f5-d786aeb16266\" (UID: \"f32da17a-39b3-4e8e-82f5-d786aeb16266\") " Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.239391 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f32da17a-39b3-4e8e-82f5-d786aeb16266-combined-ca-bundle\") pod \"f32da17a-39b3-4e8e-82f5-d786aeb16266\" (UID: \"f32da17a-39b3-4e8e-82f5-d786aeb16266\") " Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.240525 4970 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f32da17a-39b3-4e8e-82f5-d786aeb16266-logs\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.254858 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.064932518 podStartE2EDuration="6.254808995s" podCreationTimestamp="2025-12-09 12:31:41 +0000 UTC" firstStartedPulling="2025-12-09 12:31:42.33335339 +0000 UTC m=+1514.893834451" lastFinishedPulling="2025-12-09 12:31:46.523229877 +0000 UTC m=+1519.083710928" observedRunningTime="2025-12-09 12:31:47.25161619 +0000 UTC m=+1519.812097241" watchObservedRunningTime="2025-12-09 12:31:47.254808995 +0000 UTC m=+1519.815290046" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.259561 4970 scope.go:117] "RemoveContainer" containerID="341a63c14d779ef157e64603ee3eeedbbc18ffbe8ecf6092ea1ebd9d9313de6f" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.262526 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f32da17a-39b3-4e8e-82f5-d786aeb16266-kube-api-access-w2k7t" (OuterVolumeSpecName: "kube-api-access-w2k7t") pod "f32da17a-39b3-4e8e-82f5-d786aeb16266" (UID: "f32da17a-39b3-4e8e-82f5-d786aeb16266"). InnerVolumeSpecName "kube-api-access-w2k7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.272319 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f32da17a-39b3-4e8e-82f5-d786aeb16266-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f32da17a-39b3-4e8e-82f5-d786aeb16266" (UID: "f32da17a-39b3-4e8e-82f5-d786aeb16266"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.299350 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f32da17a-39b3-4e8e-82f5-d786aeb16266-config-data" (OuterVolumeSpecName: "config-data") pod "f32da17a-39b3-4e8e-82f5-d786aeb16266" (UID: "f32da17a-39b3-4e8e-82f5-d786aeb16266"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.320368 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f32da17a-39b3-4e8e-82f5-d786aeb16266-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f32da17a-39b3-4e8e-82f5-d786aeb16266" (UID: "f32da17a-39b3-4e8e-82f5-d786aeb16266"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.339542 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f32da17a-39b3-4e8e-82f5-d786aeb16266-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f32da17a-39b3-4e8e-82f5-d786aeb16266" (UID: "f32da17a-39b3-4e8e-82f5-d786aeb16266"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.350345 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f32da17a-39b3-4e8e-82f5-d786aeb16266-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.350381 4970 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f32da17a-39b3-4e8e-82f5-d786aeb16266-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.350394 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f32da17a-39b3-4e8e-82f5-d786aeb16266-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.350409 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2k7t\" (UniqueName: \"kubernetes.io/projected/f32da17a-39b3-4e8e-82f5-d786aeb16266-kube-api-access-w2k7t\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.350420 4970 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f32da17a-39b3-4e8e-82f5-d786aeb16266-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.414987 4970 scope.go:117] "RemoveContainer" containerID="df27f00daa90137245ad600c88b49219ce2868855b80a4b61f8d8891743089c6" Dec 09 12:31:47 crc kubenswrapper[4970]: E1209 12:31:47.416478 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df27f00daa90137245ad600c88b49219ce2868855b80a4b61f8d8891743089c6\": container with ID starting with df27f00daa90137245ad600c88b49219ce2868855b80a4b61f8d8891743089c6 not found: ID does not exist" containerID="df27f00daa90137245ad600c88b49219ce2868855b80a4b61f8d8891743089c6" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.416518 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df27f00daa90137245ad600c88b49219ce2868855b80a4b61f8d8891743089c6"} err="failed to get container status \"df27f00daa90137245ad600c88b49219ce2868855b80a4b61f8d8891743089c6\": rpc error: code = NotFound desc = could not find container \"df27f00daa90137245ad600c88b49219ce2868855b80a4b61f8d8891743089c6\": container with ID starting with df27f00daa90137245ad600c88b49219ce2868855b80a4b61f8d8891743089c6 not found: ID does not exist" Dec 09 12:31:47 
crc kubenswrapper[4970]: I1209 12:31:47.416544 4970 scope.go:117] "RemoveContainer" containerID="341a63c14d779ef157e64603ee3eeedbbc18ffbe8ecf6092ea1ebd9d9313de6f" Dec 09 12:31:47 crc kubenswrapper[4970]: E1209 12:31:47.418555 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"341a63c14d779ef157e64603ee3eeedbbc18ffbe8ecf6092ea1ebd9d9313de6f\": container with ID starting with 341a63c14d779ef157e64603ee3eeedbbc18ffbe8ecf6092ea1ebd9d9313de6f not found: ID does not exist" containerID="341a63c14d779ef157e64603ee3eeedbbc18ffbe8ecf6092ea1ebd9d9313de6f" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.418600 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"341a63c14d779ef157e64603ee3eeedbbc18ffbe8ecf6092ea1ebd9d9313de6f"} err="failed to get container status \"341a63c14d779ef157e64603ee3eeedbbc18ffbe8ecf6092ea1ebd9d9313de6f\": rpc error: code = NotFound desc = could not find container \"341a63c14d779ef157e64603ee3eeedbbc18ffbe8ecf6092ea1ebd9d9313de6f\": container with ID starting with 341a63c14d779ef157e64603ee3eeedbbc18ffbe8ecf6092ea1ebd9d9313de6f not found: ID does not exist" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.418635 4970 scope.go:117] "RemoveContainer" containerID="df27f00daa90137245ad600c88b49219ce2868855b80a4b61f8d8891743089c6" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.418971 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df27f00daa90137245ad600c88b49219ce2868855b80a4b61f8d8891743089c6"} err="failed to get container status \"df27f00daa90137245ad600c88b49219ce2868855b80a4b61f8d8891743089c6\": rpc error: code = NotFound desc = could not find container \"df27f00daa90137245ad600c88b49219ce2868855b80a4b61f8d8891743089c6\": container with ID starting with df27f00daa90137245ad600c88b49219ce2868855b80a4b61f8d8891743089c6 not found: ID does not exist" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.419004 4970 scope.go:117] "RemoveContainer" containerID="341a63c14d779ef157e64603ee3eeedbbc18ffbe8ecf6092ea1ebd9d9313de6f" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.420054 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"341a63c14d779ef157e64603ee3eeedbbc18ffbe8ecf6092ea1ebd9d9313de6f"} err="failed to get container status \"341a63c14d779ef157e64603ee3eeedbbc18ffbe8ecf6092ea1ebd9d9313de6f\": rpc error: code = NotFound desc = could not find container \"341a63c14d779ef157e64603ee3eeedbbc18ffbe8ecf6092ea1ebd9d9313de6f\": container with ID starting with 341a63c14d779ef157e64603ee3eeedbbc18ffbe8ecf6092ea1ebd9d9313de6f not found: ID does not exist" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.549906 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.565171 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.582929 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 09 12:31:47 crc kubenswrapper[4970]: E1209 12:31:47.586447 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f32da17a-39b3-4e8e-82f5-d786aeb16266" containerName="nova-api-api" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.586488 4970 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f32da17a-39b3-4e8e-82f5-d786aeb16266" containerName="nova-api-api" Dec 09 12:31:47 crc kubenswrapper[4970]: E1209 12:31:47.586524 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c48899b9-165a-4054-a1eb-47b69a0fc3c2" containerName="extract-content" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.586533 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="c48899b9-165a-4054-a1eb-47b69a0fc3c2" containerName="extract-content" Dec 09 12:31:47 crc kubenswrapper[4970]: E1209 12:31:47.586597 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f32da17a-39b3-4e8e-82f5-d786aeb16266" containerName="nova-api-log" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.586604 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="f32da17a-39b3-4e8e-82f5-d786aeb16266" containerName="nova-api-log" Dec 09 12:31:47 crc kubenswrapper[4970]: E1209 12:31:47.586614 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68af4d13-2a15-420f-84b9-a0ebec93ac59" containerName="nova-manage" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.586621 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="68af4d13-2a15-420f-84b9-a0ebec93ac59" containerName="nova-manage" Dec 09 12:31:47 crc kubenswrapper[4970]: E1209 12:31:47.586644 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c48899b9-165a-4054-a1eb-47b69a0fc3c2" containerName="extract-utilities" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.586651 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="c48899b9-165a-4054-a1eb-47b69a0fc3c2" containerName="extract-utilities" Dec 09 12:31:47 crc kubenswrapper[4970]: E1209 12:31:47.586664 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c48899b9-165a-4054-a1eb-47b69a0fc3c2" containerName="registry-server" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.586670 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="c48899b9-165a-4054-a1eb-47b69a0fc3c2" containerName="registry-server" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.586933 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="f32da17a-39b3-4e8e-82f5-d786aeb16266" containerName="nova-api-api" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.586947 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="68af4d13-2a15-420f-84b9-a0ebec93ac59" containerName="nova-manage" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.586968 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="c48899b9-165a-4054-a1eb-47b69a0fc3c2" containerName="registry-server" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.586977 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="f32da17a-39b3-4e8e-82f5-d786aeb16266" containerName="nova-api-log" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.588688 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.595697 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.595697 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.599778 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.602728 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.655612 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7313fcc0-6c4b-4008-a232-d1d8a351fa13-logs\") pod \"nova-api-0\" (UID: \"7313fcc0-6c4b-4008-a232-d1d8a351fa13\") " pod="openstack/nova-api-0" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.655681 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7313fcc0-6c4b-4008-a232-d1d8a351fa13-config-data\") pod \"nova-api-0\" (UID: \"7313fcc0-6c4b-4008-a232-d1d8a351fa13\") " pod="openstack/nova-api-0" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.655736 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7313fcc0-6c4b-4008-a232-d1d8a351fa13-public-tls-certs\") pod \"nova-api-0\" (UID: \"7313fcc0-6c4b-4008-a232-d1d8a351fa13\") " pod="openstack/nova-api-0" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.655826 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfczj\" (UniqueName: \"kubernetes.io/projected/7313fcc0-6c4b-4008-a232-d1d8a351fa13-kube-api-access-hfczj\") pod \"nova-api-0\" (UID: \"7313fcc0-6c4b-4008-a232-d1d8a351fa13\") " pod="openstack/nova-api-0" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.655863 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7313fcc0-6c4b-4008-a232-d1d8a351fa13-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7313fcc0-6c4b-4008-a232-d1d8a351fa13\") " pod="openstack/nova-api-0" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.655883 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7313fcc0-6c4b-4008-a232-d1d8a351fa13-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7313fcc0-6c4b-4008-a232-d1d8a351fa13\") " pod="openstack/nova-api-0" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.756925 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfczj\" (UniqueName: \"kubernetes.io/projected/7313fcc0-6c4b-4008-a232-d1d8a351fa13-kube-api-access-hfczj\") pod \"nova-api-0\" (UID: \"7313fcc0-6c4b-4008-a232-d1d8a351fa13\") " pod="openstack/nova-api-0" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.756988 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7313fcc0-6c4b-4008-a232-d1d8a351fa13-internal-tls-certs\") 
pod \"nova-api-0\" (UID: \"7313fcc0-6c4b-4008-a232-d1d8a351fa13\") " pod="openstack/nova-api-0" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.757008 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7313fcc0-6c4b-4008-a232-d1d8a351fa13-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7313fcc0-6c4b-4008-a232-d1d8a351fa13\") " pod="openstack/nova-api-0" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.757060 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7313fcc0-6c4b-4008-a232-d1d8a351fa13-logs\") pod \"nova-api-0\" (UID: \"7313fcc0-6c4b-4008-a232-d1d8a351fa13\") " pod="openstack/nova-api-0" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.757095 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7313fcc0-6c4b-4008-a232-d1d8a351fa13-config-data\") pod \"nova-api-0\" (UID: \"7313fcc0-6c4b-4008-a232-d1d8a351fa13\") " pod="openstack/nova-api-0" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.757135 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7313fcc0-6c4b-4008-a232-d1d8a351fa13-public-tls-certs\") pod \"nova-api-0\" (UID: \"7313fcc0-6c4b-4008-a232-d1d8a351fa13\") " pod="openstack/nova-api-0" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.760968 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7313fcc0-6c4b-4008-a232-d1d8a351fa13-logs\") pod \"nova-api-0\" (UID: \"7313fcc0-6c4b-4008-a232-d1d8a351fa13\") " pod="openstack/nova-api-0" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.762854 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7313fcc0-6c4b-4008-a232-d1d8a351fa13-public-tls-certs\") pod \"nova-api-0\" (UID: \"7313fcc0-6c4b-4008-a232-d1d8a351fa13\") " pod="openstack/nova-api-0" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.763131 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7313fcc0-6c4b-4008-a232-d1d8a351fa13-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7313fcc0-6c4b-4008-a232-d1d8a351fa13\") " pod="openstack/nova-api-0" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.763346 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7313fcc0-6c4b-4008-a232-d1d8a351fa13-config-data\") pod \"nova-api-0\" (UID: \"7313fcc0-6c4b-4008-a232-d1d8a351fa13\") " pod="openstack/nova-api-0" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.763956 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7313fcc0-6c4b-4008-a232-d1d8a351fa13-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7313fcc0-6c4b-4008-a232-d1d8a351fa13\") " pod="openstack/nova-api-0" Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.774192 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfczj\" (UniqueName: \"kubernetes.io/projected/7313fcc0-6c4b-4008-a232-d1d8a351fa13-kube-api-access-hfczj\") pod \"nova-api-0\" (UID: \"7313fcc0-6c4b-4008-a232-d1d8a351fa13\") " pod="openstack/nova-api-0" Dec 
Dec 09 12:31:47 crc kubenswrapper[4970]: I1209 12:31:47.926071 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 09 12:31:48 crc kubenswrapper[4970]: I1209 12:31:48.243751 4970 generic.go:334] "Generic (PLEG): container finished" podID="8e215775-0586-4a94-a8b8-faae5dcf279b" containerID="e005657ebb8c94bb91f81321483dcb5d4bbf04364ec571a7880c9cabd3ac483f" exitCode=0
Dec 09 12:31:48 crc kubenswrapper[4970]: I1209 12:31:48.243835 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8e215775-0586-4a94-a8b8-faae5dcf279b","Type":"ContainerDied","Data":"e005657ebb8c94bb91f81321483dcb5d4bbf04364ec571a7880c9cabd3ac483f"}
Dec 09 12:31:48 crc kubenswrapper[4970]: I1209 12:31:48.307679 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Dec 09 12:31:48 crc kubenswrapper[4970]: I1209 12:31:48.376842 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j57g7\" (UniqueName: \"kubernetes.io/projected/8e215775-0586-4a94-a8b8-faae5dcf279b-kube-api-access-j57g7\") pod \"8e215775-0586-4a94-a8b8-faae5dcf279b\" (UID: \"8e215775-0586-4a94-a8b8-faae5dcf279b\") "
Dec 09 12:31:48 crc kubenswrapper[4970]: I1209 12:31:48.376906 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e215775-0586-4a94-a8b8-faae5dcf279b-combined-ca-bundle\") pod \"8e215775-0586-4a94-a8b8-faae5dcf279b\" (UID: \"8e215775-0586-4a94-a8b8-faae5dcf279b\") "
Dec 09 12:31:48 crc kubenswrapper[4970]: I1209 12:31:48.377006 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e215775-0586-4a94-a8b8-faae5dcf279b-config-data\") pod \"8e215775-0586-4a94-a8b8-faae5dcf279b\" (UID: \"8e215775-0586-4a94-a8b8-faae5dcf279b\") "
Dec 09 12:31:48 crc kubenswrapper[4970]: I1209 12:31:48.392516 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e215775-0586-4a94-a8b8-faae5dcf279b-kube-api-access-j57g7" (OuterVolumeSpecName: "kube-api-access-j57g7") pod "8e215775-0586-4a94-a8b8-faae5dcf279b" (UID: "8e215775-0586-4a94-a8b8-faae5dcf279b"). InnerVolumeSpecName "kube-api-access-j57g7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:31:48 crc kubenswrapper[4970]: I1209 12:31:48.466450 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e215775-0586-4a94-a8b8-faae5dcf279b-config-data" (OuterVolumeSpecName: "config-data") pod "8e215775-0586-4a94-a8b8-faae5dcf279b" (UID: "8e215775-0586-4a94-a8b8-faae5dcf279b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:31:48 crc kubenswrapper[4970]: I1209 12:31:48.472835 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e215775-0586-4a94-a8b8-faae5dcf279b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e215775-0586-4a94-a8b8-faae5dcf279b" (UID: "8e215775-0586-4a94-a8b8-faae5dcf279b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:48 crc kubenswrapper[4970]: I1209 12:31:48.486592 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j57g7\" (UniqueName: \"kubernetes.io/projected/8e215775-0586-4a94-a8b8-faae5dcf279b-kube-api-access-j57g7\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:48 crc kubenswrapper[4970]: I1209 12:31:48.486630 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e215775-0586-4a94-a8b8-faae5dcf279b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:48 crc kubenswrapper[4970]: I1209 12:31:48.486643 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e215775-0586-4a94-a8b8-faae5dcf279b-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:48 crc kubenswrapper[4970]: I1209 12:31:48.519193 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 09 12:31:49 crc kubenswrapper[4970]: I1209 12:31:49.265877 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8e215775-0586-4a94-a8b8-faae5dcf279b","Type":"ContainerDied","Data":"9aab0f2788fc38ae8f610addc53c2d5d11c55976d635b85c17cff257390f2708"} Dec 09 12:31:49 crc kubenswrapper[4970]: I1209 12:31:49.266143 4970 scope.go:117] "RemoveContainer" containerID="e005657ebb8c94bb91f81321483dcb5d4bbf04364ec571a7880c9cabd3ac483f" Dec 09 12:31:49 crc kubenswrapper[4970]: I1209 12:31:49.265904 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 12:31:49 crc kubenswrapper[4970]: I1209 12:31:49.269099 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7313fcc0-6c4b-4008-a232-d1d8a351fa13","Type":"ContainerStarted","Data":"bbb7a6a433e34c13dbc9bbfaead7b9d907cbbd1be89270baaf0ce0d752013624"} Dec 09 12:31:49 crc kubenswrapper[4970]: I1209 12:31:49.269165 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7313fcc0-6c4b-4008-a232-d1d8a351fa13","Type":"ContainerStarted","Data":"b50d00256371f0bc72679b33d2158284be368f70ab11d71f21c468742e080492"} Dec 09 12:31:49 crc kubenswrapper[4970]: I1209 12:31:49.269185 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7313fcc0-6c4b-4008-a232-d1d8a351fa13","Type":"ContainerStarted","Data":"3d92bfda34dd0658e39a38318b52f8d9647248dbc0b8a614592eb0ded0f00907"} Dec 09 12:31:49 crc kubenswrapper[4970]: I1209 12:31:49.330337 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.330303574 podStartE2EDuration="2.330303574s" podCreationTimestamp="2025-12-09 12:31:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:31:49.316055695 +0000 UTC m=+1521.876536776" watchObservedRunningTime="2025-12-09 12:31:49.330303574 +0000 UTC m=+1521.890784625" Dec 09 12:31:49 crc kubenswrapper[4970]: I1209 12:31:49.359672 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 12:31:49 crc kubenswrapper[4970]: I1209 12:31:49.370398 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 12:31:49 crc kubenswrapper[4970]: I1209 12:31:49.380696 4970 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-scheduler-0"] Dec 09 12:31:49 crc kubenswrapper[4970]: E1209 12:31:49.381338 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e215775-0586-4a94-a8b8-faae5dcf279b" containerName="nova-scheduler-scheduler" Dec 09 12:31:49 crc kubenswrapper[4970]: I1209 12:31:49.381360 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e215775-0586-4a94-a8b8-faae5dcf279b" containerName="nova-scheduler-scheduler" Dec 09 12:31:49 crc kubenswrapper[4970]: I1209 12:31:49.381739 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e215775-0586-4a94-a8b8-faae5dcf279b" containerName="nova-scheduler-scheduler" Dec 09 12:31:49 crc kubenswrapper[4970]: I1209 12:31:49.382820 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 12:31:49 crc kubenswrapper[4970]: I1209 12:31:49.387417 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 09 12:31:49 crc kubenswrapper[4970]: I1209 12:31:49.391355 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 12:31:49 crc kubenswrapper[4970]: I1209 12:31:49.409162 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29b2ee8d-3020-4b93-80f5-43070a0d4384-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"29b2ee8d-3020-4b93-80f5-43070a0d4384\") " pod="openstack/nova-scheduler-0" Dec 09 12:31:49 crc kubenswrapper[4970]: I1209 12:31:49.409236 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29b2ee8d-3020-4b93-80f5-43070a0d4384-config-data\") pod \"nova-scheduler-0\" (UID: \"29b2ee8d-3020-4b93-80f5-43070a0d4384\") " pod="openstack/nova-scheduler-0" Dec 09 12:31:49 crc kubenswrapper[4970]: I1209 12:31:49.409414 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml86g\" (UniqueName: \"kubernetes.io/projected/29b2ee8d-3020-4b93-80f5-43070a0d4384-kube-api-access-ml86g\") pod \"nova-scheduler-0\" (UID: \"29b2ee8d-3020-4b93-80f5-43070a0d4384\") " pod="openstack/nova-scheduler-0" Dec 09 12:31:49 crc kubenswrapper[4970]: I1209 12:31:49.511678 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29b2ee8d-3020-4b93-80f5-43070a0d4384-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"29b2ee8d-3020-4b93-80f5-43070a0d4384\") " pod="openstack/nova-scheduler-0" Dec 09 12:31:49 crc kubenswrapper[4970]: I1209 12:31:49.511734 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29b2ee8d-3020-4b93-80f5-43070a0d4384-config-data\") pod \"nova-scheduler-0\" (UID: \"29b2ee8d-3020-4b93-80f5-43070a0d4384\") " pod="openstack/nova-scheduler-0" Dec 09 12:31:49 crc kubenswrapper[4970]: I1209 12:31:49.511794 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml86g\" (UniqueName: \"kubernetes.io/projected/29b2ee8d-3020-4b93-80f5-43070a0d4384-kube-api-access-ml86g\") pod \"nova-scheduler-0\" (UID: \"29b2ee8d-3020-4b93-80f5-43070a0d4384\") " pod="openstack/nova-scheduler-0" Dec 09 12:31:49 crc kubenswrapper[4970]: I1209 12:31:49.527201 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29b2ee8d-3020-4b93-80f5-43070a0d4384-config-data\") pod \"nova-scheduler-0\" (UID: \"29b2ee8d-3020-4b93-80f5-43070a0d4384\") " pod="openstack/nova-scheduler-0" Dec 09 12:31:49 crc kubenswrapper[4970]: I1209 12:31:49.527291 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29b2ee8d-3020-4b93-80f5-43070a0d4384-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"29b2ee8d-3020-4b93-80f5-43070a0d4384\") " pod="openstack/nova-scheduler-0" Dec 09 12:31:49 crc kubenswrapper[4970]: I1209 12:31:49.530711 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml86g\" (UniqueName: \"kubernetes.io/projected/29b2ee8d-3020-4b93-80f5-43070a0d4384-kube-api-access-ml86g\") pod \"nova-scheduler-0\" (UID: \"29b2ee8d-3020-4b93-80f5-43070a0d4384\") " pod="openstack/nova-scheduler-0" Dec 09 12:31:49 crc kubenswrapper[4970]: I1209 12:31:49.703464 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 12:31:49 crc kubenswrapper[4970]: I1209 12:31:49.841299 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e215775-0586-4a94-a8b8-faae5dcf279b" path="/var/lib/kubelet/pods/8e215775-0586-4a94-a8b8-faae5dcf279b/volumes" Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.164390 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.276684 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.297862 4970 generic.go:334] "Generic (PLEG): container finished" podID="da82b114-dc90-4454-9b42-711065681a68" containerID="9ac84822d908b06a59f3d9ae48c695ab4306dd4b4481fe94881998e10d01b8ae" exitCode=0 Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.298028 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.298012 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"da82b114-dc90-4454-9b42-711065681a68","Type":"ContainerDied","Data":"9ac84822d908b06a59f3d9ae48c695ab4306dd4b4481fe94881998e10d01b8ae"} Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.298628 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"da82b114-dc90-4454-9b42-711065681a68","Type":"ContainerDied","Data":"fec98b4f8c5fc96bf533fc7fb76aa6cf866858fd3fd1b30597264bc11aafecec"} Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.298691 4970 scope.go:117] "RemoveContainer" containerID="9ac84822d908b06a59f3d9ae48c695ab4306dd4b4481fe94881998e10d01b8ae" Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.302532 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"29b2ee8d-3020-4b93-80f5-43070a0d4384","Type":"ContainerStarted","Data":"c6a2fbbed6b30d80d3649e718a2e50082fbe83834a53291193b61302779a4167"} Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.329213 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftvdz\" (UniqueName: \"kubernetes.io/projected/da82b114-dc90-4454-9b42-711065681a68-kube-api-access-ftvdz\") pod \"da82b114-dc90-4454-9b42-711065681a68\" (UID: \"da82b114-dc90-4454-9b42-711065681a68\") " Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.329370 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da82b114-dc90-4454-9b42-711065681a68-logs\") pod \"da82b114-dc90-4454-9b42-711065681a68\" (UID: \"da82b114-dc90-4454-9b42-711065681a68\") " Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.329723 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/da82b114-dc90-4454-9b42-711065681a68-nova-metadata-tls-certs\") pod \"da82b114-dc90-4454-9b42-711065681a68\" (UID: \"da82b114-dc90-4454-9b42-711065681a68\") " Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.330131 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da82b114-dc90-4454-9b42-711065681a68-combined-ca-bundle\") pod \"da82b114-dc90-4454-9b42-711065681a68\" (UID: \"da82b114-dc90-4454-9b42-711065681a68\") " Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.330421 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da82b114-dc90-4454-9b42-711065681a68-config-data\") pod \"da82b114-dc90-4454-9b42-711065681a68\" (UID: \"da82b114-dc90-4454-9b42-711065681a68\") " Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.330716 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da82b114-dc90-4454-9b42-711065681a68-logs" (OuterVolumeSpecName: "logs") pod "da82b114-dc90-4454-9b42-711065681a68" (UID: "da82b114-dc90-4454-9b42-711065681a68"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.331150 4970 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/da82b114-dc90-4454-9b42-711065681a68-logs\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.350967 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da82b114-dc90-4454-9b42-711065681a68-kube-api-access-ftvdz" (OuterVolumeSpecName: "kube-api-access-ftvdz") pod "da82b114-dc90-4454-9b42-711065681a68" (UID: "da82b114-dc90-4454-9b42-711065681a68"). InnerVolumeSpecName "kube-api-access-ftvdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.376444 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da82b114-dc90-4454-9b42-711065681a68-config-data" (OuterVolumeSpecName: "config-data") pod "da82b114-dc90-4454-9b42-711065681a68" (UID: "da82b114-dc90-4454-9b42-711065681a68"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.384172 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da82b114-dc90-4454-9b42-711065681a68-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da82b114-dc90-4454-9b42-711065681a68" (UID: "da82b114-dc90-4454-9b42-711065681a68"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.397949 4970 scope.go:117] "RemoveContainer" containerID="58bbf1ae8555ee8d88a6accbd38fca4c0ea041cde6a7281889e8459dada5760a" Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.414797 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da82b114-dc90-4454-9b42-711065681a68-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "da82b114-dc90-4454-9b42-711065681a68" (UID: "da82b114-dc90-4454-9b42-711065681a68"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.427776 4970 scope.go:117] "RemoveContainer" containerID="9ac84822d908b06a59f3d9ae48c695ab4306dd4b4481fe94881998e10d01b8ae" Dec 09 12:31:50 crc kubenswrapper[4970]: E1209 12:31:50.428181 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ac84822d908b06a59f3d9ae48c695ab4306dd4b4481fe94881998e10d01b8ae\": container with ID starting with 9ac84822d908b06a59f3d9ae48c695ab4306dd4b4481fe94881998e10d01b8ae not found: ID does not exist" containerID="9ac84822d908b06a59f3d9ae48c695ab4306dd4b4481fe94881998e10d01b8ae" Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.428211 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ac84822d908b06a59f3d9ae48c695ab4306dd4b4481fe94881998e10d01b8ae"} err="failed to get container status \"9ac84822d908b06a59f3d9ae48c695ab4306dd4b4481fe94881998e10d01b8ae\": rpc error: code = NotFound desc = could not find container \"9ac84822d908b06a59f3d9ae48c695ab4306dd4b4481fe94881998e10d01b8ae\": container with ID starting with 9ac84822d908b06a59f3d9ae48c695ab4306dd4b4481fe94881998e10d01b8ae not found: ID does not exist" Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.428238 4970 scope.go:117] "RemoveContainer" containerID="58bbf1ae8555ee8d88a6accbd38fca4c0ea041cde6a7281889e8459dada5760a" Dec 09 12:31:50 crc kubenswrapper[4970]: E1209 12:31:50.428732 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58bbf1ae8555ee8d88a6accbd38fca4c0ea041cde6a7281889e8459dada5760a\": container with ID starting with 58bbf1ae8555ee8d88a6accbd38fca4c0ea041cde6a7281889e8459dada5760a not found: ID does not exist" containerID="58bbf1ae8555ee8d88a6accbd38fca4c0ea041cde6a7281889e8459dada5760a" Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.428770 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58bbf1ae8555ee8d88a6accbd38fca4c0ea041cde6a7281889e8459dada5760a"} err="failed to get container status \"58bbf1ae8555ee8d88a6accbd38fca4c0ea041cde6a7281889e8459dada5760a\": rpc error: code = NotFound desc = could not find container \"58bbf1ae8555ee8d88a6accbd38fca4c0ea041cde6a7281889e8459dada5760a\": container with ID starting with 58bbf1ae8555ee8d88a6accbd38fca4c0ea041cde6a7281889e8459dada5760a not found: ID does not exist" Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.436814 4970 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/da82b114-dc90-4454-9b42-711065681a68-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.436840 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da82b114-dc90-4454-9b42-711065681a68-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.436850 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da82b114-dc90-4454-9b42-711065681a68-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.436859 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftvdz\" (UniqueName: 
\"kubernetes.io/projected/da82b114-dc90-4454-9b42-711065681a68-kube-api-access-ftvdz\") on node \"crc\" DevicePath \"\"" Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.646835 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.660958 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.674643 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 09 12:31:50 crc kubenswrapper[4970]: E1209 12:31:50.675293 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da82b114-dc90-4454-9b42-711065681a68" containerName="nova-metadata-log" Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.675319 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="da82b114-dc90-4454-9b42-711065681a68" containerName="nova-metadata-log" Dec 09 12:31:50 crc kubenswrapper[4970]: E1209 12:31:50.675362 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da82b114-dc90-4454-9b42-711065681a68" containerName="nova-metadata-metadata" Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.675371 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="da82b114-dc90-4454-9b42-711065681a68" containerName="nova-metadata-metadata" Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.675643 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="da82b114-dc90-4454-9b42-711065681a68" containerName="nova-metadata-log" Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.675685 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="da82b114-dc90-4454-9b42-711065681a68" containerName="nova-metadata-metadata" Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.677219 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.680787 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.681102 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.687486 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.845466 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8kx6\" (UniqueName: \"kubernetes.io/projected/84a18921-52f6-4481-b8ea-cb0f41219e9e-kube-api-access-x8kx6\") pod \"nova-metadata-0\" (UID: \"84a18921-52f6-4481-b8ea-cb0f41219e9e\") " pod="openstack/nova-metadata-0" Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.845802 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84a18921-52f6-4481-b8ea-cb0f41219e9e-config-data\") pod \"nova-metadata-0\" (UID: \"84a18921-52f6-4481-b8ea-cb0f41219e9e\") " pod="openstack/nova-metadata-0" Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.845858 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84a18921-52f6-4481-b8ea-cb0f41219e9e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"84a18921-52f6-4481-b8ea-cb0f41219e9e\") " pod="openstack/nova-metadata-0" Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.845916 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/84a18921-52f6-4481-b8ea-cb0f41219e9e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"84a18921-52f6-4481-b8ea-cb0f41219e9e\") " pod="openstack/nova-metadata-0" Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.846007 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84a18921-52f6-4481-b8ea-cb0f41219e9e-logs\") pod \"nova-metadata-0\" (UID: \"84a18921-52f6-4481-b8ea-cb0f41219e9e\") " pod="openstack/nova-metadata-0" Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.948759 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84a18921-52f6-4481-b8ea-cb0f41219e9e-logs\") pod \"nova-metadata-0\" (UID: \"84a18921-52f6-4481-b8ea-cb0f41219e9e\") " pod="openstack/nova-metadata-0" Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.948893 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8kx6\" (UniqueName: \"kubernetes.io/projected/84a18921-52f6-4481-b8ea-cb0f41219e9e-kube-api-access-x8kx6\") pod \"nova-metadata-0\" (UID: \"84a18921-52f6-4481-b8ea-cb0f41219e9e\") " pod="openstack/nova-metadata-0" Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.949099 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84a18921-52f6-4481-b8ea-cb0f41219e9e-config-data\") pod \"nova-metadata-0\" (UID: \"84a18921-52f6-4481-b8ea-cb0f41219e9e\") " pod="openstack/nova-metadata-0" Dec 09 
Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.949183 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/84a18921-52f6-4481-b8ea-cb0f41219e9e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"84a18921-52f6-4481-b8ea-cb0f41219e9e\") " pod="openstack/nova-metadata-0"
Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.951407 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84a18921-52f6-4481-b8ea-cb0f41219e9e-logs\") pod \"nova-metadata-0\" (UID: \"84a18921-52f6-4481-b8ea-cb0f41219e9e\") " pod="openstack/nova-metadata-0"
Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.956993 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/84a18921-52f6-4481-b8ea-cb0f41219e9e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"84a18921-52f6-4481-b8ea-cb0f41219e9e\") " pod="openstack/nova-metadata-0"
Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.957156 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84a18921-52f6-4481-b8ea-cb0f41219e9e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"84a18921-52f6-4481-b8ea-cb0f41219e9e\") " pod="openstack/nova-metadata-0"
Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.962118 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84a18921-52f6-4481-b8ea-cb0f41219e9e-config-data\") pod \"nova-metadata-0\" (UID: \"84a18921-52f6-4481-b8ea-cb0f41219e9e\") " pod="openstack/nova-metadata-0"
Dec 09 12:31:50 crc kubenswrapper[4970]: I1209 12:31:50.968337 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8kx6\" (UniqueName: \"kubernetes.io/projected/84a18921-52f6-4481-b8ea-cb0f41219e9e-kube-api-access-x8kx6\") pod \"nova-metadata-0\" (UID: \"84a18921-52f6-4481-b8ea-cb0f41219e9e\") " pod="openstack/nova-metadata-0"
Dec 09 12:31:51 crc kubenswrapper[4970]: I1209 12:31:51.002009 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 12:31:51 crc kubenswrapper[4970]: I1209 12:31:51.321987 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"29b2ee8d-3020-4b93-80f5-43070a0d4384","Type":"ContainerStarted","Data":"df269dcb80507cb968049ffdf4789fef605b9275b5dc43d6bd135a1d05e21a42"} Dec 09 12:31:51 crc kubenswrapper[4970]: I1209 12:31:51.357175 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.35715361 podStartE2EDuration="2.35715361s" podCreationTimestamp="2025-12-09 12:31:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:31:51.342581333 +0000 UTC m=+1523.903062404" watchObservedRunningTime="2025-12-09 12:31:51.35715361 +0000 UTC m=+1523.917634681" Dec 09 12:31:51 crc kubenswrapper[4970]: I1209 12:31:51.484080 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 12:31:51 crc kubenswrapper[4970]: W1209 12:31:51.494423 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84a18921_52f6_4481_b8ea_cb0f41219e9e.slice/crio-7f4568515b3e4c01cffa11c1be94888d89992092991023a74eebbf62c8ae94e3 WatchSource:0}: Error finding container 7f4568515b3e4c01cffa11c1be94888d89992092991023a74eebbf62c8ae94e3: Status 404 returned error can't find the container with id 7f4568515b3e4c01cffa11c1be94888d89992092991023a74eebbf62c8ae94e3 Dec 09 12:31:51 crc kubenswrapper[4970]: I1209 12:31:51.824789 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da82b114-dc90-4454-9b42-711065681a68" path="/var/lib/kubelet/pods/da82b114-dc90-4454-9b42-711065681a68/volumes" Dec 09 12:31:52 crc kubenswrapper[4970]: I1209 12:31:52.334944 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"84a18921-52f6-4481-b8ea-cb0f41219e9e","Type":"ContainerStarted","Data":"ec17b64bc65a7569d93b9e9b2af3963b1d6d67a4e2fb8a2ec08b133573c86c24"} Dec 09 12:31:52 crc kubenswrapper[4970]: I1209 12:31:52.335300 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"84a18921-52f6-4481-b8ea-cb0f41219e9e","Type":"ContainerStarted","Data":"abb8f1ed70331738fe2591ff590afca142b9d452725f5ce4091a2ace93323253"} Dec 09 12:31:52 crc kubenswrapper[4970]: I1209 12:31:52.335317 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"84a18921-52f6-4481-b8ea-cb0f41219e9e","Type":"ContainerStarted","Data":"7f4568515b3e4c01cffa11c1be94888d89992092991023a74eebbf62c8ae94e3"} Dec 09 12:31:52 crc kubenswrapper[4970]: I1209 12:31:52.369736 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.369709076 podStartE2EDuration="2.369709076s" podCreationTimestamp="2025-12-09 12:31:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:31:52.351292617 +0000 UTC m=+1524.911773668" watchObservedRunningTime="2025-12-09 12:31:52.369709076 +0000 UTC m=+1524.930190127" Dec 09 12:31:54 crc kubenswrapper[4970]: I1209 12:31:54.648668 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2q7bl"] Dec 09 12:31:54 crc kubenswrapper[4970]: I1209 12:31:54.651802 4970 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2q7bl" Dec 09 12:31:54 crc kubenswrapper[4970]: I1209 12:31:54.664345 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2q7bl"] Dec 09 12:31:54 crc kubenswrapper[4970]: I1209 12:31:54.704308 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 09 12:31:54 crc kubenswrapper[4970]: I1209 12:31:54.830853 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqsh6\" (UniqueName: \"kubernetes.io/projected/56ac190c-b9e3-4453-9581-58141f4f59cc-kube-api-access-zqsh6\") pod \"redhat-operators-2q7bl\" (UID: \"56ac190c-b9e3-4453-9581-58141f4f59cc\") " pod="openshift-marketplace/redhat-operators-2q7bl" Dec 09 12:31:54 crc kubenswrapper[4970]: I1209 12:31:54.830916 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56ac190c-b9e3-4453-9581-58141f4f59cc-catalog-content\") pod \"redhat-operators-2q7bl\" (UID: \"56ac190c-b9e3-4453-9581-58141f4f59cc\") " pod="openshift-marketplace/redhat-operators-2q7bl" Dec 09 12:31:54 crc kubenswrapper[4970]: I1209 12:31:54.831220 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56ac190c-b9e3-4453-9581-58141f4f59cc-utilities\") pod \"redhat-operators-2q7bl\" (UID: \"56ac190c-b9e3-4453-9581-58141f4f59cc\") " pod="openshift-marketplace/redhat-operators-2q7bl" Dec 09 12:31:54 crc kubenswrapper[4970]: I1209 12:31:54.879578 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="da82b114-dc90-4454-9b42-711065681a68" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.240:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 09 12:31:54 crc kubenswrapper[4970]: I1209 12:31:54.879669 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="da82b114-dc90-4454-9b42-711065681a68" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.240:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 09 12:31:54 crc kubenswrapper[4970]: I1209 12:31:54.933717 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56ac190c-b9e3-4453-9581-58141f4f59cc-utilities\") pod \"redhat-operators-2q7bl\" (UID: \"56ac190c-b9e3-4453-9581-58141f4f59cc\") " pod="openshift-marketplace/redhat-operators-2q7bl" Dec 09 12:31:54 crc kubenswrapper[4970]: I1209 12:31:54.933904 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqsh6\" (UniqueName: \"kubernetes.io/projected/56ac190c-b9e3-4453-9581-58141f4f59cc-kube-api-access-zqsh6\") pod \"redhat-operators-2q7bl\" (UID: \"56ac190c-b9e3-4453-9581-58141f4f59cc\") " pod="openshift-marketplace/redhat-operators-2q7bl" Dec 09 12:31:54 crc kubenswrapper[4970]: I1209 12:31:54.933935 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56ac190c-b9e3-4453-9581-58141f4f59cc-catalog-content\") pod 
\"redhat-operators-2q7bl\" (UID: \"56ac190c-b9e3-4453-9581-58141f4f59cc\") " pod="openshift-marketplace/redhat-operators-2q7bl" Dec 09 12:31:54 crc kubenswrapper[4970]: I1209 12:31:54.934209 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56ac190c-b9e3-4453-9581-58141f4f59cc-utilities\") pod \"redhat-operators-2q7bl\" (UID: \"56ac190c-b9e3-4453-9581-58141f4f59cc\") " pod="openshift-marketplace/redhat-operators-2q7bl" Dec 09 12:31:54 crc kubenswrapper[4970]: I1209 12:31:54.934339 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56ac190c-b9e3-4453-9581-58141f4f59cc-catalog-content\") pod \"redhat-operators-2q7bl\" (UID: \"56ac190c-b9e3-4453-9581-58141f4f59cc\") " pod="openshift-marketplace/redhat-operators-2q7bl" Dec 09 12:31:54 crc kubenswrapper[4970]: I1209 12:31:54.953567 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqsh6\" (UniqueName: \"kubernetes.io/projected/56ac190c-b9e3-4453-9581-58141f4f59cc-kube-api-access-zqsh6\") pod \"redhat-operators-2q7bl\" (UID: \"56ac190c-b9e3-4453-9581-58141f4f59cc\") " pod="openshift-marketplace/redhat-operators-2q7bl" Dec 09 12:31:54 crc kubenswrapper[4970]: I1209 12:31:54.981071 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2q7bl" Dec 09 12:31:55 crc kubenswrapper[4970]: W1209 12:31:55.490391 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56ac190c_b9e3_4453_9581_58141f4f59cc.slice/crio-7988d46ab587f6dc5108584a799e4404bd720618e722f4fdadeadfb949abf864 WatchSource:0}: Error finding container 7988d46ab587f6dc5108584a799e4404bd720618e722f4fdadeadfb949abf864: Status 404 returned error can't find the container with id 7988d46ab587f6dc5108584a799e4404bd720618e722f4fdadeadfb949abf864 Dec 09 12:31:55 crc kubenswrapper[4970]: I1209 12:31:55.491094 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2q7bl"] Dec 09 12:31:56 crc kubenswrapper[4970]: I1209 12:31:56.002611 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 09 12:31:56 crc kubenswrapper[4970]: I1209 12:31:56.003047 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 09 12:31:56 crc kubenswrapper[4970]: I1209 12:31:56.385616 4970 generic.go:334] "Generic (PLEG): container finished" podID="56ac190c-b9e3-4453-9581-58141f4f59cc" containerID="836b124f06d934743643bb576f1ca2f500a57b392fedc81d38b6abfc6ce3e2f9" exitCode=0 Dec 09 12:31:56 crc kubenswrapper[4970]: I1209 12:31:56.385660 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2q7bl" event={"ID":"56ac190c-b9e3-4453-9581-58141f4f59cc","Type":"ContainerDied","Data":"836b124f06d934743643bb576f1ca2f500a57b392fedc81d38b6abfc6ce3e2f9"} Dec 09 12:31:56 crc kubenswrapper[4970]: I1209 12:31:56.385687 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2q7bl" event={"ID":"56ac190c-b9e3-4453-9581-58141f4f59cc","Type":"ContainerStarted","Data":"7988d46ab587f6dc5108584a799e4404bd720618e722f4fdadeadfb949abf864"} Dec 09 12:31:57 crc kubenswrapper[4970]: I1209 12:31:57.926260 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-api-0" Dec 09 12:31:57 crc kubenswrapper[4970]: I1209 12:31:57.926916 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 09 12:31:58 crc kubenswrapper[4970]: I1209 12:31:58.413326 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2q7bl" event={"ID":"56ac190c-b9e3-4453-9581-58141f4f59cc","Type":"ContainerStarted","Data":"618fb26f4529f82b275ee8dc38a65f7108ae72c29a7560f7c7ab61a4f877dc10"} Dec 09 12:31:58 crc kubenswrapper[4970]: I1209 12:31:58.940548 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7313fcc0-6c4b-4008-a232-d1d8a351fa13" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.253:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 09 12:31:58 crc kubenswrapper[4970]: I1209 12:31:58.940797 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7313fcc0-6c4b-4008-a232-d1d8a351fa13" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.253:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 09 12:31:59 crc kubenswrapper[4970]: I1209 12:31:59.703928 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 09 12:31:59 crc kubenswrapper[4970]: I1209 12:31:59.733518 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 09 12:32:00 crc kubenswrapper[4970]: I1209 12:32:00.462711 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 09 12:32:01 crc kubenswrapper[4970]: I1209 12:32:01.002678 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 09 12:32:01 crc kubenswrapper[4970]: I1209 12:32:01.002747 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 09 12:32:02 crc kubenswrapper[4970]: I1209 12:32:02.015433 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="84a18921-52f6-4481-b8ea-cb0f41219e9e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.255:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 09 12:32:02 crc kubenswrapper[4970]: I1209 12:32:02.015489 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="84a18921-52f6-4481-b8ea-cb0f41219e9e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.255:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 09 12:32:02 crc kubenswrapper[4970]: I1209 12:32:02.464452 4970 generic.go:334] "Generic (PLEG): container finished" podID="56ac190c-b9e3-4453-9581-58141f4f59cc" containerID="618fb26f4529f82b275ee8dc38a65f7108ae72c29a7560f7c7ab61a4f877dc10" exitCode=0 Dec 09 12:32:02 crc kubenswrapper[4970]: I1209 12:32:02.464533 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2q7bl" event={"ID":"56ac190c-b9e3-4453-9581-58141f4f59cc","Type":"ContainerDied","Data":"618fb26f4529f82b275ee8dc38a65f7108ae72c29a7560f7c7ab61a4f877dc10"} Dec 09 12:32:02 crc kubenswrapper[4970]: I1209 12:32:02.468824 4970 provider.go:102] Refreshing cache for provider: 
Dec 09 12:32:02 crc kubenswrapper[4970]: I1209 12:32:02.474367 4970 generic.go:334] "Generic (PLEG): container finished" podID="01a42978-e19d-4fce-8974-de4926ff5ab8" containerID="c3dd4cafe8ded6821605100415bbd907372bebcb4dc387d255699781ad7ca0b2" exitCode=137
Dec 09 12:32:02 crc kubenswrapper[4970]: I1209 12:32:02.474407 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"01a42978-e19d-4fce-8974-de4926ff5ab8","Type":"ContainerDied","Data":"c3dd4cafe8ded6821605100415bbd907372bebcb4dc387d255699781ad7ca0b2"}
Dec 09 12:32:02 crc kubenswrapper[4970]: I1209 12:32:02.864032 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.029325 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01a42978-e19d-4fce-8974-de4926ff5ab8-combined-ca-bundle\") pod \"01a42978-e19d-4fce-8974-de4926ff5ab8\" (UID: \"01a42978-e19d-4fce-8974-de4926ff5ab8\") "
Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.029369 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01a42978-e19d-4fce-8974-de4926ff5ab8-scripts\") pod \"01a42978-e19d-4fce-8974-de4926ff5ab8\" (UID: \"01a42978-e19d-4fce-8974-de4926ff5ab8\") "
Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.029636 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01a42978-e19d-4fce-8974-de4926ff5ab8-config-data\") pod \"01a42978-e19d-4fce-8974-de4926ff5ab8\" (UID: \"01a42978-e19d-4fce-8974-de4926ff5ab8\") "
Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.029728 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx4vd\" (UniqueName: \"kubernetes.io/projected/01a42978-e19d-4fce-8974-de4926ff5ab8-kube-api-access-dx4vd\") pod \"01a42978-e19d-4fce-8974-de4926ff5ab8\" (UID: \"01a42978-e19d-4fce-8974-de4926ff5ab8\") "
Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.035383 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01a42978-e19d-4fce-8974-de4926ff5ab8-scripts" (OuterVolumeSpecName: "scripts") pod "01a42978-e19d-4fce-8974-de4926ff5ab8" (UID: "01a42978-e19d-4fce-8974-de4926ff5ab8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.052566 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01a42978-e19d-4fce-8974-de4926ff5ab8-kube-api-access-dx4vd" (OuterVolumeSpecName: "kube-api-access-dx4vd") pod "01a42978-e19d-4fce-8974-de4926ff5ab8" (UID: "01a42978-e19d-4fce-8974-de4926ff5ab8"). InnerVolumeSpecName "kube-api-access-dx4vd". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.132664 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dx4vd\" (UniqueName: \"kubernetes.io/projected/01a42978-e19d-4fce-8974-de4926ff5ab8-kube-api-access-dx4vd\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.132821 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01a42978-e19d-4fce-8974-de4926ff5ab8-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.200097 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01a42978-e19d-4fce-8974-de4926ff5ab8-config-data" (OuterVolumeSpecName: "config-data") pod "01a42978-e19d-4fce-8974-de4926ff5ab8" (UID: "01a42978-e19d-4fce-8974-de4926ff5ab8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.235568 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01a42978-e19d-4fce-8974-de4926ff5ab8-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.237826 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01a42978-e19d-4fce-8974-de4926ff5ab8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01a42978-e19d-4fce-8974-de4926ff5ab8" (UID: "01a42978-e19d-4fce-8974-de4926ff5ab8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.339293 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01a42978-e19d-4fce-8974-de4926ff5ab8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.355819 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zhfsk"] Dec 09 12:32:03 crc kubenswrapper[4970]: E1209 12:32:03.359707 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01a42978-e19d-4fce-8974-de4926ff5ab8" containerName="aodh-evaluator" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.361851 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="01a42978-e19d-4fce-8974-de4926ff5ab8" containerName="aodh-evaluator" Dec 09 12:32:03 crc kubenswrapper[4970]: E1209 12:32:03.362018 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01a42978-e19d-4fce-8974-de4926ff5ab8" containerName="aodh-notifier" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.362115 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="01a42978-e19d-4fce-8974-de4926ff5ab8" containerName="aodh-notifier" Dec 09 12:32:03 crc kubenswrapper[4970]: E1209 12:32:03.362220 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01a42978-e19d-4fce-8974-de4926ff5ab8" containerName="aodh-api" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.362319 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="01a42978-e19d-4fce-8974-de4926ff5ab8" containerName="aodh-api" Dec 09 12:32:03 crc kubenswrapper[4970]: E1209 12:32:03.362436 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01a42978-e19d-4fce-8974-de4926ff5ab8" containerName="aodh-listener" Dec 09 
Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.372609 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="01a42978-e19d-4fce-8974-de4926ff5ab8" containerName="aodh-listener"
Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.372648 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="01a42978-e19d-4fce-8974-de4926ff5ab8" containerName="aodh-evaluator"
Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.372686 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="01a42978-e19d-4fce-8974-de4926ff5ab8" containerName="aodh-notifier"
Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.372716 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="01a42978-e19d-4fce-8974-de4926ff5ab8" containerName="aodh-api"
Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.398872 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zhfsk"
Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.401976 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zhfsk"]
Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.493236 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"01a42978-e19d-4fce-8974-de4926ff5ab8","Type":"ContainerDied","Data":"ea78cee7fbdd5bc6a0afdc2f85f3bd850f23b6900dff41183695a62e8afb2c08"}
Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.493334 4970 scope.go:117] "RemoveContainer" containerID="c3dd4cafe8ded6821605100415bbd907372bebcb4dc387d255699781ad7ca0b2"
Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.493613 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Need to start a new one" pod="openstack/aodh-0" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.543694 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01c6f405-6870-472c-a861-86224e0c7b25-utilities\") pod \"certified-operators-zhfsk\" (UID: \"01c6f405-6870-472c-a861-86224e0c7b25\") " pod="openshift-marketplace/certified-operators-zhfsk" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.544327 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01c6f405-6870-472c-a861-86224e0c7b25-catalog-content\") pod \"certified-operators-zhfsk\" (UID: \"01c6f405-6870-472c-a861-86224e0c7b25\") " pod="openshift-marketplace/certified-operators-zhfsk" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.544390 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cbmz\" (UniqueName: \"kubernetes.io/projected/01c6f405-6870-472c-a861-86224e0c7b25-kube-api-access-2cbmz\") pod \"certified-operators-zhfsk\" (UID: \"01c6f405-6870-472c-a861-86224e0c7b25\") " pod="openshift-marketplace/certified-operators-zhfsk" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.565293 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.570631 4970 scope.go:117] "RemoveContainer" containerID="5e107555cea9bc62b23e22e0bc94c2ab2f627eb1a42578a265c133842c49171d" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.605110 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.621523 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.627686 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.634375 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.634573 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.634690 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-99pll" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.634801 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.634974 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.634998 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.641617 4970 scope.go:117] "RemoveContainer" containerID="7da0be7a31fd8c3bd419409236c312478c6b5b666f3b92607ba433fcd9387031" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.645873 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01c6f405-6870-472c-a861-86224e0c7b25-catalog-content\") pod \"certified-operators-zhfsk\" (UID: \"01c6f405-6870-472c-a861-86224e0c7b25\") " pod="openshift-marketplace/certified-operators-zhfsk" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.645924 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cbmz\" (UniqueName: \"kubernetes.io/projected/01c6f405-6870-472c-a861-86224e0c7b25-kube-api-access-2cbmz\") pod \"certified-operators-zhfsk\" (UID: \"01c6f405-6870-472c-a861-86224e0c7b25\") " pod="openshift-marketplace/certified-operators-zhfsk" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.645963 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01c6f405-6870-472c-a861-86224e0c7b25-utilities\") pod \"certified-operators-zhfsk\" (UID: \"01c6f405-6870-472c-a861-86224e0c7b25\") " pod="openshift-marketplace/certified-operators-zhfsk" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.646507 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01c6f405-6870-472c-a861-86224e0c7b25-utilities\") pod \"certified-operators-zhfsk\" (UID: \"01c6f405-6870-472c-a861-86224e0c7b25\") " pod="openshift-marketplace/certified-operators-zhfsk" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.646733 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01c6f405-6870-472c-a861-86224e0c7b25-catalog-content\") pod \"certified-operators-zhfsk\" (UID: \"01c6f405-6870-472c-a861-86224e0c7b25\") " pod="openshift-marketplace/certified-operators-zhfsk" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.669784 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cbmz\" (UniqueName: \"kubernetes.io/projected/01c6f405-6870-472c-a861-86224e0c7b25-kube-api-access-2cbmz\") pod \"certified-operators-zhfsk\" (UID: \"01c6f405-6870-472c-a861-86224e0c7b25\") " 
pod="openshift-marketplace/certified-operators-zhfsk" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.678507 4970 scope.go:117] "RemoveContainer" containerID="f69d64e0f8ac25f484f61971f1815ddb90fb341b9488ee1ef17095d4c60b39c6" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.730833 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zhfsk" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.748874 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/316942aa-13a6-4e83-aff5-b4f54f43ef20-config-data\") pod \"aodh-0\" (UID: \"316942aa-13a6-4e83-aff5-b4f54f43ef20\") " pod="openstack/aodh-0" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.749010 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/316942aa-13a6-4e83-aff5-b4f54f43ef20-scripts\") pod \"aodh-0\" (UID: \"316942aa-13a6-4e83-aff5-b4f54f43ef20\") " pod="openstack/aodh-0" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.749038 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/316942aa-13a6-4e83-aff5-b4f54f43ef20-combined-ca-bundle\") pod \"aodh-0\" (UID: \"316942aa-13a6-4e83-aff5-b4f54f43ef20\") " pod="openstack/aodh-0" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.749082 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/316942aa-13a6-4e83-aff5-b4f54f43ef20-internal-tls-certs\") pod \"aodh-0\" (UID: \"316942aa-13a6-4e83-aff5-b4f54f43ef20\") " pod="openstack/aodh-0" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.749184 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/316942aa-13a6-4e83-aff5-b4f54f43ef20-public-tls-certs\") pod \"aodh-0\" (UID: \"316942aa-13a6-4e83-aff5-b4f54f43ef20\") " pod="openstack/aodh-0" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.749232 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn6pz\" (UniqueName: \"kubernetes.io/projected/316942aa-13a6-4e83-aff5-b4f54f43ef20-kube-api-access-pn6pz\") pod \"aodh-0\" (UID: \"316942aa-13a6-4e83-aff5-b4f54f43ef20\") " pod="openstack/aodh-0" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.852122 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01a42978-e19d-4fce-8974-de4926ff5ab8" path="/var/lib/kubelet/pods/01a42978-e19d-4fce-8974-de4926ff5ab8/volumes" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.854947 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/316942aa-13a6-4e83-aff5-b4f54f43ef20-config-data\") pod \"aodh-0\" (UID: \"316942aa-13a6-4e83-aff5-b4f54f43ef20\") " pod="openstack/aodh-0" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.855059 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/316942aa-13a6-4e83-aff5-b4f54f43ef20-scripts\") pod \"aodh-0\" (UID: \"316942aa-13a6-4e83-aff5-b4f54f43ef20\") " pod="openstack/aodh-0" Dec 09 12:32:03 crc 
kubenswrapper[4970]: I1209 12:32:03.855080 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/316942aa-13a6-4e83-aff5-b4f54f43ef20-combined-ca-bundle\") pod \"aodh-0\" (UID: \"316942aa-13a6-4e83-aff5-b4f54f43ef20\") " pod="openstack/aodh-0" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.855115 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/316942aa-13a6-4e83-aff5-b4f54f43ef20-internal-tls-certs\") pod \"aodh-0\" (UID: \"316942aa-13a6-4e83-aff5-b4f54f43ef20\") " pod="openstack/aodh-0" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.855177 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/316942aa-13a6-4e83-aff5-b4f54f43ef20-public-tls-certs\") pod \"aodh-0\" (UID: \"316942aa-13a6-4e83-aff5-b4f54f43ef20\") " pod="openstack/aodh-0" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.855216 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn6pz\" (UniqueName: \"kubernetes.io/projected/316942aa-13a6-4e83-aff5-b4f54f43ef20-kube-api-access-pn6pz\") pod \"aodh-0\" (UID: \"316942aa-13a6-4e83-aff5-b4f54f43ef20\") " pod="openstack/aodh-0" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.864672 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/316942aa-13a6-4e83-aff5-b4f54f43ef20-scripts\") pod \"aodh-0\" (UID: \"316942aa-13a6-4e83-aff5-b4f54f43ef20\") " pod="openstack/aodh-0" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.866883 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/316942aa-13a6-4e83-aff5-b4f54f43ef20-config-data\") pod \"aodh-0\" (UID: \"316942aa-13a6-4e83-aff5-b4f54f43ef20\") " pod="openstack/aodh-0" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.868329 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/316942aa-13a6-4e83-aff5-b4f54f43ef20-combined-ca-bundle\") pod \"aodh-0\" (UID: \"316942aa-13a6-4e83-aff5-b4f54f43ef20\") " pod="openstack/aodh-0" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.871675 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/316942aa-13a6-4e83-aff5-b4f54f43ef20-public-tls-certs\") pod \"aodh-0\" (UID: \"316942aa-13a6-4e83-aff5-b4f54f43ef20\") " pod="openstack/aodh-0" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.887175 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/316942aa-13a6-4e83-aff5-b4f54f43ef20-internal-tls-certs\") pod \"aodh-0\" (UID: \"316942aa-13a6-4e83-aff5-b4f54f43ef20\") " pod="openstack/aodh-0" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.907201 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn6pz\" (UniqueName: \"kubernetes.io/projected/316942aa-13a6-4e83-aff5-b4f54f43ef20-kube-api-access-pn6pz\") pod \"aodh-0\" (UID: \"316942aa-13a6-4e83-aff5-b4f54f43ef20\") " pod="openstack/aodh-0" Dec 09 12:32:03 crc kubenswrapper[4970]: I1209 12:32:03.970068 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Dec 09 12:32:04 crc kubenswrapper[4970]: W1209 12:32:04.335762 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01c6f405_6870_472c_a861_86224e0c7b25.slice/crio-78bcfe73c3b3d95efcafb104a7b4cb8f0b01d356570d25e21b76fe3bd2c847d5 WatchSource:0}: Error finding container 78bcfe73c3b3d95efcafb104a7b4cb8f0b01d356570d25e21b76fe3bd2c847d5: Status 404 returned error can't find the container with id 78bcfe73c3b3d95efcafb104a7b4cb8f0b01d356570d25e21b76fe3bd2c847d5 Dec 09 12:32:04 crc kubenswrapper[4970]: I1209 12:32:04.342634 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zhfsk"] Dec 09 12:32:04 crc kubenswrapper[4970]: I1209 12:32:04.518649 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2q7bl" event={"ID":"56ac190c-b9e3-4453-9581-58141f4f59cc","Type":"ContainerStarted","Data":"bd1c896170de733dfa150f69420a4458bddfe753cb935cf906e0d65bb72c4498"} Dec 09 12:32:04 crc kubenswrapper[4970]: I1209 12:32:04.522711 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zhfsk" event={"ID":"01c6f405-6870-472c-a861-86224e0c7b25","Type":"ContainerStarted","Data":"78bcfe73c3b3d95efcafb104a7b4cb8f0b01d356570d25e21b76fe3bd2c847d5"} Dec 09 12:32:04 crc kubenswrapper[4970]: W1209 12:32:04.547197 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod316942aa_13a6_4e83_aff5_b4f54f43ef20.slice/crio-4bd4716321b671f8d31a2906c04071a71d9e57a1a789fb120810e95a3c8b9ea5 WatchSource:0}: Error finding container 4bd4716321b671f8d31a2906c04071a71d9e57a1a789fb120810e95a3c8b9ea5: Status 404 returned error can't find the container with id 4bd4716321b671f8d31a2906c04071a71d9e57a1a789fb120810e95a3c8b9ea5 Dec 09 12:32:04 crc kubenswrapper[4970]: I1209 12:32:04.552377 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Dec 09 12:32:04 crc kubenswrapper[4970]: I1209 12:32:04.553157 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2q7bl" podStartSLOduration=3.81781508 podStartE2EDuration="10.553139869s" podCreationTimestamp="2025-12-09 12:31:54 +0000 UTC" firstStartedPulling="2025-12-09 12:31:56.387687862 +0000 UTC m=+1528.948168913" lastFinishedPulling="2025-12-09 12:32:03.123012651 +0000 UTC m=+1535.683493702" observedRunningTime="2025-12-09 12:32:04.542883687 +0000 UTC m=+1537.103364738" watchObservedRunningTime="2025-12-09 12:32:04.553139869 +0000 UTC m=+1537.113620920" Dec 09 12:32:04 crc kubenswrapper[4970]: I1209 12:32:04.981360 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2q7bl" Dec 09 12:32:04 crc kubenswrapper[4970]: I1209 12:32:04.982157 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2q7bl" Dec 09 12:32:05 crc kubenswrapper[4970]: I1209 12:32:05.541855 4970 generic.go:334] "Generic (PLEG): container finished" podID="01c6f405-6870-472c-a861-86224e0c7b25" containerID="41f6cdc74f9436b632d67160a707bd7f4c7b398e933b2a11b8c6c9e7937f02f6" exitCode=0 Dec 09 12:32:05 crc kubenswrapper[4970]: I1209 12:32:05.541969 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zhfsk" 
event={"ID":"01c6f405-6870-472c-a861-86224e0c7b25","Type":"ContainerDied","Data":"41f6cdc74f9436b632d67160a707bd7f4c7b398e933b2a11b8c6c9e7937f02f6"} Dec 09 12:32:05 crc kubenswrapper[4970]: I1209 12:32:05.544220 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"316942aa-13a6-4e83-aff5-b4f54f43ef20","Type":"ContainerStarted","Data":"b1e52563a26dd3e177ae17b9c0c3930aa5f919dfa43aaa6e83c3cf8b41640804"} Dec 09 12:32:05 crc kubenswrapper[4970]: I1209 12:32:05.544284 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"316942aa-13a6-4e83-aff5-b4f54f43ef20","Type":"ContainerStarted","Data":"4bd4716321b671f8d31a2906c04071a71d9e57a1a789fb120810e95a3c8b9ea5"} Dec 09 12:32:06 crc kubenswrapper[4970]: I1209 12:32:06.049213 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2q7bl" podUID="56ac190c-b9e3-4453-9581-58141f4f59cc" containerName="registry-server" probeResult="failure" output=< Dec 09 12:32:06 crc kubenswrapper[4970]: timeout: failed to connect service ":50051" within 1s Dec 09 12:32:06 crc kubenswrapper[4970]: > Dec 09 12:32:06 crc kubenswrapper[4970]: I1209 12:32:06.569390 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zhfsk" event={"ID":"01c6f405-6870-472c-a861-86224e0c7b25","Type":"ContainerStarted","Data":"5ee9113b3f154ca98e4796aeeb3d562bfa261794cc1f90b3cbfb8f1977fa1f3b"} Dec 09 12:32:06 crc kubenswrapper[4970]: I1209 12:32:06.576139 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"316942aa-13a6-4e83-aff5-b4f54f43ef20","Type":"ContainerStarted","Data":"ddff804ad9e1644f5e769d66c01747a9eb4fb5575b33230d633932fefc462a20"} Dec 09 12:32:07 crc kubenswrapper[4970]: I1209 12:32:07.595294 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"316942aa-13a6-4e83-aff5-b4f54f43ef20","Type":"ContainerStarted","Data":"3b3cbe81e5cdfea2d988665a0875af2f91e95339aa387960c255e32639aa3ece"} Dec 09 12:32:07 crc kubenswrapper[4970]: I1209 12:32:07.939124 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 09 12:32:07 crc kubenswrapper[4970]: I1209 12:32:07.939817 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 09 12:32:07 crc kubenswrapper[4970]: I1209 12:32:07.946794 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 09 12:32:07 crc kubenswrapper[4970]: I1209 12:32:07.947227 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 09 12:32:08 crc kubenswrapper[4970]: I1209 12:32:08.613292 4970 generic.go:334] "Generic (PLEG): container finished" podID="01c6f405-6870-472c-a861-86224e0c7b25" containerID="5ee9113b3f154ca98e4796aeeb3d562bfa261794cc1f90b3cbfb8f1977fa1f3b" exitCode=0 Dec 09 12:32:08 crc kubenswrapper[4970]: I1209 12:32:08.613371 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zhfsk" event={"ID":"01c6f405-6870-472c-a861-86224e0c7b25","Type":"ContainerDied","Data":"5ee9113b3f154ca98e4796aeeb3d562bfa261794cc1f90b3cbfb8f1977fa1f3b"} Dec 09 12:32:08 crc kubenswrapper[4970]: I1209 12:32:08.629696 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"316942aa-13a6-4e83-aff5-b4f54f43ef20","Type":"ContainerStarted","Data":"fcf8d68f85eb53d9c4990d0a4637b5ba20e5f39b96ec2874b7734a6077be8062"} Dec 09 12:32:08 crc kubenswrapper[4970]: I1209 12:32:08.630580 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 09 12:32:08 crc kubenswrapper[4970]: I1209 12:32:08.656314 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 09 12:32:08 crc kubenswrapper[4970]: I1209 12:32:08.683179 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.46343743 podStartE2EDuration="5.683161799s" podCreationTimestamp="2025-12-09 12:32:03 +0000 UTC" firstStartedPulling="2025-12-09 12:32:04.550209721 +0000 UTC m=+1537.110690772" lastFinishedPulling="2025-12-09 12:32:07.76993409 +0000 UTC m=+1540.330415141" observedRunningTime="2025-12-09 12:32:08.67678826 +0000 UTC m=+1541.237269311" watchObservedRunningTime="2025-12-09 12:32:08.683161799 +0000 UTC m=+1541.243642850" Dec 09 12:32:09 crc kubenswrapper[4970]: I1209 12:32:09.642411 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zhfsk" event={"ID":"01c6f405-6870-472c-a861-86224e0c7b25","Type":"ContainerStarted","Data":"37b457a84f4417fa358106d4299f790363b904f7bd2e0eb34de6c8dd23494a40"} Dec 09 12:32:09 crc kubenswrapper[4970]: I1209 12:32:09.674270 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zhfsk" podStartSLOduration=3.096504734 podStartE2EDuration="6.674229604s" podCreationTimestamp="2025-12-09 12:32:03 +0000 UTC" firstStartedPulling="2025-12-09 12:32:05.546705171 +0000 UTC m=+1538.107186222" lastFinishedPulling="2025-12-09 12:32:09.124430041 +0000 UTC m=+1541.684911092" observedRunningTime="2025-12-09 12:32:09.662829112 +0000 UTC m=+1542.223310173" watchObservedRunningTime="2025-12-09 12:32:09.674229604 +0000 UTC m=+1542.234710655" Dec 09 12:32:11 crc kubenswrapper[4970]: I1209 12:32:11.011821 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 09 12:32:11 crc kubenswrapper[4970]: I1209 12:32:11.012492 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 09 12:32:11 crc kubenswrapper[4970]: I1209 12:32:11.019894 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 09 12:32:11 crc kubenswrapper[4970]: I1209 12:32:11.671287 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 09 12:32:11 crc kubenswrapper[4970]: I1209 12:32:11.854886 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 09 12:32:13 crc kubenswrapper[4970]: I1209 12:32:13.732420 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zhfsk" Dec 09 12:32:13 crc kubenswrapper[4970]: I1209 12:32:13.732773 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zhfsk" Dec 09 12:32:14 crc kubenswrapper[4970]: I1209 12:32:14.789190 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-zhfsk" podUID="01c6f405-6870-472c-a861-86224e0c7b25" containerName="registry-server" probeResult="failure" output=< Dec 09 12:32:14 
crc kubenswrapper[4970]: timeout: failed to connect service ":50051" within 1s Dec 09 12:32:14 crc kubenswrapper[4970]: > Dec 09 12:32:16 crc kubenswrapper[4970]: I1209 12:32:16.010557 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:32:16 crc kubenswrapper[4970]: I1209 12:32:16.010808 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:32:16 crc kubenswrapper[4970]: I1209 12:32:16.010846 4970 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" Dec 09 12:32:16 crc kubenswrapper[4970]: I1209 12:32:16.011804 4970 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7fd6f109e392e855fb1d78959e8fc5739de01f58f1c3cbb8ffead2654c7f16b5"} pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 12:32:16 crc kubenswrapper[4970]: I1209 12:32:16.011868 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" containerID="cri-o://7fd6f109e392e855fb1d78959e8fc5739de01f58f1c3cbb8ffead2654c7f16b5" gracePeriod=600 Dec 09 12:32:16 crc kubenswrapper[4970]: I1209 12:32:16.038701 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2q7bl" podUID="56ac190c-b9e3-4453-9581-58141f4f59cc" containerName="registry-server" probeResult="failure" output=< Dec 09 12:32:16 crc kubenswrapper[4970]: timeout: failed to connect service ":50051" within 1s Dec 09 12:32:16 crc kubenswrapper[4970]: > Dec 09 12:32:16 crc kubenswrapper[4970]: E1209 12:32:16.143872 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:32:16 crc kubenswrapper[4970]: I1209 12:32:16.371986 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 12:32:16 crc kubenswrapper[4970]: I1209 12:32:16.372222 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="80219567-cf9b-45cf-9e69-21c871e190dc" containerName="kube-state-metrics" containerID="cri-o://46724580f5c56460935a155d24833ca1e89fa937a361f816facc5ee27ab4aa5a" gracePeriod=30 Dec 09 12:32:16 crc kubenswrapper[4970]: I1209 12:32:16.532468 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Dec 09 12:32:16 crc kubenswrapper[4970]: I1209 12:32:16.533044 4970 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="581d4f17-168a-463d-8de2-10e3e31a590d" containerName="mysqld-exporter" containerID="cri-o://26c5e039a23b7c1cdc956da09bd71a6bf631668f021abe0fea2f1eb90c7a5c79" gracePeriod=30 Dec 09 12:32:16 crc kubenswrapper[4970]: I1209 12:32:16.737060 4970 generic.go:334] "Generic (PLEG): container finished" podID="581d4f17-168a-463d-8de2-10e3e31a590d" containerID="26c5e039a23b7c1cdc956da09bd71a6bf631668f021abe0fea2f1eb90c7a5c79" exitCode=2 Dec 09 12:32:16 crc kubenswrapper[4970]: I1209 12:32:16.737197 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"581d4f17-168a-463d-8de2-10e3e31a590d","Type":"ContainerDied","Data":"26c5e039a23b7c1cdc956da09bd71a6bf631668f021abe0fea2f1eb90c7a5c79"} Dec 09 12:32:16 crc kubenswrapper[4970]: I1209 12:32:16.740052 4970 generic.go:334] "Generic (PLEG): container finished" podID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerID="7fd6f109e392e855fb1d78959e8fc5739de01f58f1c3cbb8ffead2654c7f16b5" exitCode=0 Dec 09 12:32:16 crc kubenswrapper[4970]: I1209 12:32:16.740089 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerDied","Data":"7fd6f109e392e855fb1d78959e8fc5739de01f58f1c3cbb8ffead2654c7f16b5"} Dec 09 12:32:16 crc kubenswrapper[4970]: I1209 12:32:16.740117 4970 scope.go:117] "RemoveContainer" containerID="cb0f9b4763d3228bb2f722a577d4a09c1556ae0fa1f243c7931b87527311654a" Dec 09 12:32:16 crc kubenswrapper[4970]: I1209 12:32:16.740975 4970 scope.go:117] "RemoveContainer" containerID="7fd6f109e392e855fb1d78959e8fc5739de01f58f1c3cbb8ffead2654c7f16b5" Dec 09 12:32:16 crc kubenswrapper[4970]: E1209 12:32:16.741329 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:32:16 crc kubenswrapper[4970]: I1209 12:32:16.748907 4970 generic.go:334] "Generic (PLEG): container finished" podID="80219567-cf9b-45cf-9e69-21c871e190dc" containerID="46724580f5c56460935a155d24833ca1e89fa937a361f816facc5ee27ab4aa5a" exitCode=2 Dec 09 12:32:16 crc kubenswrapper[4970]: I1209 12:32:16.748950 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"80219567-cf9b-45cf-9e69-21c871e190dc","Type":"ContainerDied","Data":"46724580f5c56460935a155d24833ca1e89fa937a361f816facc5ee27ab4aa5a"} Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.100239 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.118577 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.205671 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/581d4f17-168a-463d-8de2-10e3e31a590d-combined-ca-bundle\") pod \"581d4f17-168a-463d-8de2-10e3e31a590d\" (UID: \"581d4f17-168a-463d-8de2-10e3e31a590d\") " Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.205738 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xq685\" (UniqueName: \"kubernetes.io/projected/581d4f17-168a-463d-8de2-10e3e31a590d-kube-api-access-xq685\") pod \"581d4f17-168a-463d-8de2-10e3e31a590d\" (UID: \"581d4f17-168a-463d-8de2-10e3e31a590d\") " Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.205880 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpxhw\" (UniqueName: \"kubernetes.io/projected/80219567-cf9b-45cf-9e69-21c871e190dc-kube-api-access-zpxhw\") pod \"80219567-cf9b-45cf-9e69-21c871e190dc\" (UID: \"80219567-cf9b-45cf-9e69-21c871e190dc\") " Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.205949 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/581d4f17-168a-463d-8de2-10e3e31a590d-config-data\") pod \"581d4f17-168a-463d-8de2-10e3e31a590d\" (UID: \"581d4f17-168a-463d-8de2-10e3e31a590d\") " Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.239064 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80219567-cf9b-45cf-9e69-21c871e190dc-kube-api-access-zpxhw" (OuterVolumeSpecName: "kube-api-access-zpxhw") pod "80219567-cf9b-45cf-9e69-21c871e190dc" (UID: "80219567-cf9b-45cf-9e69-21c871e190dc"). InnerVolumeSpecName "kube-api-access-zpxhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.239396 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/581d4f17-168a-463d-8de2-10e3e31a590d-kube-api-access-xq685" (OuterVolumeSpecName: "kube-api-access-xq685") pod "581d4f17-168a-463d-8de2-10e3e31a590d" (UID: "581d4f17-168a-463d-8de2-10e3e31a590d"). InnerVolumeSpecName "kube-api-access-xq685". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.251566 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/581d4f17-168a-463d-8de2-10e3e31a590d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "581d4f17-168a-463d-8de2-10e3e31a590d" (UID: "581d4f17-168a-463d-8de2-10e3e31a590d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.309805 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpxhw\" (UniqueName: \"kubernetes.io/projected/80219567-cf9b-45cf-9e69-21c871e190dc-kube-api-access-zpxhw\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.309837 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/581d4f17-168a-463d-8de2-10e3e31a590d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.309858 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xq685\" (UniqueName: \"kubernetes.io/projected/581d4f17-168a-463d-8de2-10e3e31a590d-kube-api-access-xq685\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.326679 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/581d4f17-168a-463d-8de2-10e3e31a590d-config-data" (OuterVolumeSpecName: "config-data") pod "581d4f17-168a-463d-8de2-10e3e31a590d" (UID: "581d4f17-168a-463d-8de2-10e3e31a590d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.411710 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/581d4f17-168a-463d-8de2-10e3e31a590d-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.765833 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"80219567-cf9b-45cf-9e69-21c871e190dc","Type":"ContainerDied","Data":"be02f8b772ae7039cbe961e7457021681a0408aec3dd0a3c35d35181263b950b"} Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.765884 4970 scope.go:117] "RemoveContainer" containerID="46724580f5c56460935a155d24833ca1e89fa937a361f816facc5ee27ab4aa5a" Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.766022 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.771373 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"581d4f17-168a-463d-8de2-10e3e31a590d","Type":"ContainerDied","Data":"b64fb2046325da78976b710058ecf5c27f89aa0f18e47f86bb5dcb0cccc4e126"} Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.771454 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.805020 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.825203 4970 scope.go:117] "RemoveContainer" containerID="26c5e039a23b7c1cdc956da09bd71a6bf631668f021abe0fea2f1eb90c7a5c79" Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.837825 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.865157 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.895306 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.908440 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 12:32:17 crc kubenswrapper[4970]: E1209 12:32:17.909555 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80219567-cf9b-45cf-9e69-21c871e190dc" containerName="kube-state-metrics" Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.910683 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="80219567-cf9b-45cf-9e69-21c871e190dc" containerName="kube-state-metrics" Dec 09 12:32:17 crc kubenswrapper[4970]: E1209 12:32:17.911004 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="581d4f17-168a-463d-8de2-10e3e31a590d" containerName="mysqld-exporter" Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.911016 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="581d4f17-168a-463d-8de2-10e3e31a590d" containerName="mysqld-exporter" Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.911548 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="80219567-cf9b-45cf-9e69-21c871e190dc" containerName="kube-state-metrics" Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.911581 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="581d4f17-168a-463d-8de2-10e3e31a590d" containerName="mysqld-exporter" Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.912658 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.915265 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.920560 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.922304 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.926512 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/e57cb0dc-fe18-46df-8d56-61ac26bed69d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"e57cb0dc-fe18-46df-8d56-61ac26bed69d\") " pod="openstack/kube-state-metrics-0" Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.928070 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57cb0dc-fe18-46df-8d56-61ac26bed69d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"e57cb0dc-fe18-46df-8d56-61ac26bed69d\") " pod="openstack/kube-state-metrics-0" Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.928157 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/e57cb0dc-fe18-46df-8d56-61ac26bed69d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"e57cb0dc-fe18-46df-8d56-61ac26bed69d\") " pod="openstack/kube-state-metrics-0" Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.928582 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd5xv\" (UniqueName: \"kubernetes.io/projected/e57cb0dc-fe18-46df-8d56-61ac26bed69d-kube-api-access-zd5xv\") pod \"kube-state-metrics-0\" (UID: \"e57cb0dc-fe18-46df-8d56-61ac26bed69d\") " pod="openstack/kube-state-metrics-0" Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.938146 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.944226 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.947500 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.952642 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Dec 09 12:32:17 crc kubenswrapper[4970]: I1209 12:32:17.952805 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Dec 09 12:32:18 crc kubenswrapper[4970]: I1209 12:32:18.031037 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd5xv\" (UniqueName: \"kubernetes.io/projected/e57cb0dc-fe18-46df-8d56-61ac26bed69d-kube-api-access-zd5xv\") pod \"kube-state-metrics-0\" (UID: \"e57cb0dc-fe18-46df-8d56-61ac26bed69d\") " pod="openstack/kube-state-metrics-0" Dec 09 12:32:18 crc kubenswrapper[4970]: I1209 12:32:18.031141 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e49c009-8457-419f-aca0-a0288e55ec6d-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"2e49c009-8457-419f-aca0-a0288e55ec6d\") " pod="openstack/mysqld-exporter-0" Dec 09 12:32:18 crc kubenswrapper[4970]: I1209 12:32:18.031234 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/e57cb0dc-fe18-46df-8d56-61ac26bed69d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"e57cb0dc-fe18-46df-8d56-61ac26bed69d\") " pod="openstack/kube-state-metrics-0" Dec 09 12:32:18 crc kubenswrapper[4970]: I1209 12:32:18.031317 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e49c009-8457-419f-aca0-a0288e55ec6d-config-data\") pod \"mysqld-exporter-0\" (UID: \"2e49c009-8457-419f-aca0-a0288e55ec6d\") " pod="openstack/mysqld-exporter-0" Dec 09 12:32:18 crc kubenswrapper[4970]: I1209 12:32:18.031410 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57cb0dc-fe18-46df-8d56-61ac26bed69d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"e57cb0dc-fe18-46df-8d56-61ac26bed69d\") " pod="openstack/kube-state-metrics-0" Dec 09 12:32:18 crc kubenswrapper[4970]: I1209 12:32:18.031449 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/e57cb0dc-fe18-46df-8d56-61ac26bed69d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"e57cb0dc-fe18-46df-8d56-61ac26bed69d\") " pod="openstack/kube-state-metrics-0" Dec 09 12:32:18 crc kubenswrapper[4970]: I1209 12:32:18.031484 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e49c009-8457-419f-aca0-a0288e55ec6d-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"2e49c009-8457-419f-aca0-a0288e55ec6d\") " pod="openstack/mysqld-exporter-0" Dec 09 12:32:18 crc kubenswrapper[4970]: I1209 12:32:18.031536 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56wfz\" (UniqueName: 
\"kubernetes.io/projected/2e49c009-8457-419f-aca0-a0288e55ec6d-kube-api-access-56wfz\") pod \"mysqld-exporter-0\" (UID: \"2e49c009-8457-419f-aca0-a0288e55ec6d\") " pod="openstack/mysqld-exporter-0" Dec 09 12:32:18 crc kubenswrapper[4970]: I1209 12:32:18.039019 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e57cb0dc-fe18-46df-8d56-61ac26bed69d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"e57cb0dc-fe18-46df-8d56-61ac26bed69d\") " pod="openstack/kube-state-metrics-0" Dec 09 12:32:18 crc kubenswrapper[4970]: I1209 12:32:18.045522 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/e57cb0dc-fe18-46df-8d56-61ac26bed69d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"e57cb0dc-fe18-46df-8d56-61ac26bed69d\") " pod="openstack/kube-state-metrics-0" Dec 09 12:32:18 crc kubenswrapper[4970]: I1209 12:32:18.046220 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/e57cb0dc-fe18-46df-8d56-61ac26bed69d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"e57cb0dc-fe18-46df-8d56-61ac26bed69d\") " pod="openstack/kube-state-metrics-0" Dec 09 12:32:18 crc kubenswrapper[4970]: I1209 12:32:18.052312 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd5xv\" (UniqueName: \"kubernetes.io/projected/e57cb0dc-fe18-46df-8d56-61ac26bed69d-kube-api-access-zd5xv\") pod \"kube-state-metrics-0\" (UID: \"e57cb0dc-fe18-46df-8d56-61ac26bed69d\") " pod="openstack/kube-state-metrics-0" Dec 09 12:32:18 crc kubenswrapper[4970]: I1209 12:32:18.132972 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e49c009-8457-419f-aca0-a0288e55ec6d-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"2e49c009-8457-419f-aca0-a0288e55ec6d\") " pod="openstack/mysqld-exporter-0" Dec 09 12:32:18 crc kubenswrapper[4970]: I1209 12:32:18.133093 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e49c009-8457-419f-aca0-a0288e55ec6d-config-data\") pod \"mysqld-exporter-0\" (UID: \"2e49c009-8457-419f-aca0-a0288e55ec6d\") " pod="openstack/mysqld-exporter-0" Dec 09 12:32:18 crc kubenswrapper[4970]: I1209 12:32:18.133192 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e49c009-8457-419f-aca0-a0288e55ec6d-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"2e49c009-8457-419f-aca0-a0288e55ec6d\") " pod="openstack/mysqld-exporter-0" Dec 09 12:32:18 crc kubenswrapper[4970]: I1209 12:32:18.133232 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56wfz\" (UniqueName: \"kubernetes.io/projected/2e49c009-8457-419f-aca0-a0288e55ec6d-kube-api-access-56wfz\") pod \"mysqld-exporter-0\" (UID: \"2e49c009-8457-419f-aca0-a0288e55ec6d\") " pod="openstack/mysqld-exporter-0" Dec 09 12:32:18 crc kubenswrapper[4970]: I1209 12:32:18.137837 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e49c009-8457-419f-aca0-a0288e55ec6d-config-data\") pod \"mysqld-exporter-0\" (UID: \"2e49c009-8457-419f-aca0-a0288e55ec6d\") " 
pod="openstack/mysqld-exporter-0" Dec 09 12:32:18 crc kubenswrapper[4970]: I1209 12:32:18.137844 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e49c009-8457-419f-aca0-a0288e55ec6d-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"2e49c009-8457-419f-aca0-a0288e55ec6d\") " pod="openstack/mysqld-exporter-0" Dec 09 12:32:18 crc kubenswrapper[4970]: I1209 12:32:18.138536 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/2e49c009-8457-419f-aca0-a0288e55ec6d-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"2e49c009-8457-419f-aca0-a0288e55ec6d\") " pod="openstack/mysqld-exporter-0" Dec 09 12:32:18 crc kubenswrapper[4970]: I1209 12:32:18.153408 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56wfz\" (UniqueName: \"kubernetes.io/projected/2e49c009-8457-419f-aca0-a0288e55ec6d-kube-api-access-56wfz\") pod \"mysqld-exporter-0\" (UID: \"2e49c009-8457-419f-aca0-a0288e55ec6d\") " pod="openstack/mysqld-exporter-0" Dec 09 12:32:18 crc kubenswrapper[4970]: I1209 12:32:18.245424 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 09 12:32:18 crc kubenswrapper[4970]: I1209 12:32:18.267745 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Dec 09 12:32:18 crc kubenswrapper[4970]: I1209 12:32:18.820608 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 12:32:18 crc kubenswrapper[4970]: I1209 12:32:18.852634 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Dec 09 12:32:18 crc kubenswrapper[4970]: I1209 12:32:18.898844 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:32:18 crc kubenswrapper[4970]: I1209 12:32:18.899119 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ef6fc532-3039-4d07-9c30-c4466de25b41" containerName="ceilometer-central-agent" containerID="cri-o://1a09ca83293350dbbb6948ab9b2205a7b7f94793cfa6a3792656860bc8345bb9" gracePeriod=30 Dec 09 12:32:18 crc kubenswrapper[4970]: I1209 12:32:18.899329 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ef6fc532-3039-4d07-9c30-c4466de25b41" containerName="proxy-httpd" containerID="cri-o://0eb9c69588661801a3a4c37765ea266629d753154e8f5ad224a582e439e06991" gracePeriod=30 Dec 09 12:32:18 crc kubenswrapper[4970]: I1209 12:32:18.899492 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ef6fc532-3039-4d07-9c30-c4466de25b41" containerName="sg-core" containerID="cri-o://2d4a3b614a59ae19278260adb6c8af8d6848f342e7a3bfe1ac9fcacf07604d0d" gracePeriod=30 Dec 09 12:32:18 crc kubenswrapper[4970]: I1209 12:32:18.899537 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ef6fc532-3039-4d07-9c30-c4466de25b41" containerName="ceilometer-notification-agent" containerID="cri-o://35700f5e0591bcf4491ae3ed28499f0cca93c3bfa6b73e0a50706bd68eb02080" gracePeriod=30 Dec 09 12:32:19 crc kubenswrapper[4970]: E1209 12:32:19.161986 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef6fc532_3039_4d07_9c30_c4466de25b41.slice/crio-conmon-2d4a3b614a59ae19278260adb6c8af8d6848f342e7a3bfe1ac9fcacf07604d0d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef6fc532_3039_4d07_9c30_c4466de25b41.slice/crio-0eb9c69588661801a3a4c37765ea266629d753154e8f5ad224a582e439e06991.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef6fc532_3039_4d07_9c30_c4466de25b41.slice/crio-conmon-0eb9c69588661801a3a4c37765ea266629d753154e8f5ad224a582e439e06991.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef6fc532_3039_4d07_9c30_c4466de25b41.slice/crio-2d4a3b614a59ae19278260adb6c8af8d6848f342e7a3bfe1ac9fcacf07604d0d.scope\": RecentStats: unable to find data in memory cache]" Dec 09 12:32:19 crc kubenswrapper[4970]: I1209 12:32:19.831151 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="581d4f17-168a-463d-8de2-10e3e31a590d" path="/var/lib/kubelet/pods/581d4f17-168a-463d-8de2-10e3e31a590d/volumes" Dec 09 12:32:19 crc kubenswrapper[4970]: I1209 12:32:19.832332 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80219567-cf9b-45cf-9e69-21c871e190dc" path="/var/lib/kubelet/pods/80219567-cf9b-45cf-9e69-21c871e190dc/volumes" Dec 09 12:32:19 crc kubenswrapper[4970]: I1209 12:32:19.834513 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e57cb0dc-fe18-46df-8d56-61ac26bed69d","Type":"ContainerStarted","Data":"909b0d58e5d3dd4aab1b37d65874cce5e14fc07cd7848bcd9cfb4233b35e5c74"} Dec 09 12:32:19 crc kubenswrapper[4970]: I1209 12:32:19.834553 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e57cb0dc-fe18-46df-8d56-61ac26bed69d","Type":"ContainerStarted","Data":"1f9e74855234c830c10a6cd04a58eeb65f4fc9cccfb748bd999c9c2465a8f847"} Dec 09 12:32:19 crc kubenswrapper[4970]: I1209 12:32:19.834599 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 09 12:32:19 crc kubenswrapper[4970]: I1209 12:32:19.839442 4970 generic.go:334] "Generic (PLEG): container finished" podID="ef6fc532-3039-4d07-9c30-c4466de25b41" containerID="0eb9c69588661801a3a4c37765ea266629d753154e8f5ad224a582e439e06991" exitCode=0 Dec 09 12:32:19 crc kubenswrapper[4970]: I1209 12:32:19.839469 4970 generic.go:334] "Generic (PLEG): container finished" podID="ef6fc532-3039-4d07-9c30-c4466de25b41" containerID="2d4a3b614a59ae19278260adb6c8af8d6848f342e7a3bfe1ac9fcacf07604d0d" exitCode=2 Dec 09 12:32:19 crc kubenswrapper[4970]: I1209 12:32:19.839476 4970 generic.go:334] "Generic (PLEG): container finished" podID="ef6fc532-3039-4d07-9c30-c4466de25b41" containerID="1a09ca83293350dbbb6948ab9b2205a7b7f94793cfa6a3792656860bc8345bb9" exitCode=0 Dec 09 12:32:19 crc kubenswrapper[4970]: I1209 12:32:19.839527 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef6fc532-3039-4d07-9c30-c4466de25b41","Type":"ContainerDied","Data":"0eb9c69588661801a3a4c37765ea266629d753154e8f5ad224a582e439e06991"} Dec 09 12:32:19 crc kubenswrapper[4970]: I1209 12:32:19.839571 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ef6fc532-3039-4d07-9c30-c4466de25b41","Type":"ContainerDied","Data":"2d4a3b614a59ae19278260adb6c8af8d6848f342e7a3bfe1ac9fcacf07604d0d"} Dec 09 12:32:19 crc kubenswrapper[4970]: I1209 12:32:19.839588 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef6fc532-3039-4d07-9c30-c4466de25b41","Type":"ContainerDied","Data":"1a09ca83293350dbbb6948ab9b2205a7b7f94793cfa6a3792656860bc8345bb9"} Dec 09 12:32:19 crc kubenswrapper[4970]: I1209 12:32:19.841209 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"2e49c009-8457-419f-aca0-a0288e55ec6d","Type":"ContainerStarted","Data":"a533cce8791d81114fb48194f6b5c8eb4a224d18d53bac5cf52b1249e0d9c11d"} Dec 09 12:32:19 crc kubenswrapper[4970]: I1209 12:32:19.841234 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"2e49c009-8457-419f-aca0-a0288e55ec6d","Type":"ContainerStarted","Data":"8738d06a6c5a346a267e1e6c479e8d03dcacc5aeaa831665301dcb8994eebe13"} Dec 09 12:32:19 crc kubenswrapper[4970]: I1209 12:32:19.870630 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.342060168 podStartE2EDuration="2.870608837s" podCreationTimestamp="2025-12-09 12:32:17 +0000 UTC" firstStartedPulling="2025-12-09 12:32:18.821128691 +0000 UTC m=+1551.381609742" lastFinishedPulling="2025-12-09 12:32:19.34967736 +0000 UTC m=+1551.910158411" observedRunningTime="2025-12-09 12:32:19.851570161 +0000 UTC m=+1552.412051212" watchObservedRunningTime="2025-12-09 12:32:19.870608837 +0000 UTC m=+1552.431089888" Dec 09 12:32:19 crc kubenswrapper[4970]: I1209 12:32:19.880332 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.319048366 podStartE2EDuration="2.880314824s" podCreationTimestamp="2025-12-09 12:32:17 +0000 UTC" firstStartedPulling="2025-12-09 12:32:18.814363831 +0000 UTC m=+1551.374844882" lastFinishedPulling="2025-12-09 12:32:19.375630289 +0000 UTC m=+1551.936111340" observedRunningTime="2025-12-09 12:32:19.86583612 +0000 UTC m=+1552.426317171" watchObservedRunningTime="2025-12-09 12:32:19.880314824 +0000 UTC m=+1552.440795875" Dec 09 12:32:20 crc kubenswrapper[4970]: I1209 12:32:20.858614 4970 generic.go:334] "Generic (PLEG): container finished" podID="ef6fc532-3039-4d07-9c30-c4466de25b41" containerID="35700f5e0591bcf4491ae3ed28499f0cca93c3bfa6b73e0a50706bd68eb02080" exitCode=0 Dec 09 12:32:20 crc kubenswrapper[4970]: I1209 12:32:20.858785 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef6fc532-3039-4d07-9c30-c4466de25b41","Type":"ContainerDied","Data":"35700f5e0591bcf4491ae3ed28499f0cca93c3bfa6b73e0a50706bd68eb02080"} Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.182646 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.206475 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef6fc532-3039-4d07-9c30-c4466de25b41-sg-core-conf-yaml\") pod \"ef6fc532-3039-4d07-9c30-c4466de25b41\" (UID: \"ef6fc532-3039-4d07-9c30-c4466de25b41\") " Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.206558 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef6fc532-3039-4d07-9c30-c4466de25b41-scripts\") pod \"ef6fc532-3039-4d07-9c30-c4466de25b41\" (UID: \"ef6fc532-3039-4d07-9c30-c4466de25b41\") " Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.206583 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef6fc532-3039-4d07-9c30-c4466de25b41-combined-ca-bundle\") pod \"ef6fc532-3039-4d07-9c30-c4466de25b41\" (UID: \"ef6fc532-3039-4d07-9c30-c4466de25b41\") " Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.206625 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef6fc532-3039-4d07-9c30-c4466de25b41-log-httpd\") pod \"ef6fc532-3039-4d07-9c30-c4466de25b41\" (UID: \"ef6fc532-3039-4d07-9c30-c4466de25b41\") " Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.206672 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59dx8\" (UniqueName: \"kubernetes.io/projected/ef6fc532-3039-4d07-9c30-c4466de25b41-kube-api-access-59dx8\") pod \"ef6fc532-3039-4d07-9c30-c4466de25b41\" (UID: \"ef6fc532-3039-4d07-9c30-c4466de25b41\") " Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.206741 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef6fc532-3039-4d07-9c30-c4466de25b41-config-data\") pod \"ef6fc532-3039-4d07-9c30-c4466de25b41\" (UID: \"ef6fc532-3039-4d07-9c30-c4466de25b41\") " Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.206935 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef6fc532-3039-4d07-9c30-c4466de25b41-run-httpd\") pod \"ef6fc532-3039-4d07-9c30-c4466de25b41\" (UID: \"ef6fc532-3039-4d07-9c30-c4466de25b41\") " Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.207142 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef6fc532-3039-4d07-9c30-c4466de25b41-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ef6fc532-3039-4d07-9c30-c4466de25b41" (UID: "ef6fc532-3039-4d07-9c30-c4466de25b41"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.207154 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef6fc532-3039-4d07-9c30-c4466de25b41-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ef6fc532-3039-4d07-9c30-c4466de25b41" (UID: "ef6fc532-3039-4d07-9c30-c4466de25b41"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.207723 4970 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef6fc532-3039-4d07-9c30-c4466de25b41-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.207741 4970 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ef6fc532-3039-4d07-9c30-c4466de25b41-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.217446 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef6fc532-3039-4d07-9c30-c4466de25b41-scripts" (OuterVolumeSpecName: "scripts") pod "ef6fc532-3039-4d07-9c30-c4466de25b41" (UID: "ef6fc532-3039-4d07-9c30-c4466de25b41"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.258569 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef6fc532-3039-4d07-9c30-c4466de25b41-kube-api-access-59dx8" (OuterVolumeSpecName: "kube-api-access-59dx8") pod "ef6fc532-3039-4d07-9c30-c4466de25b41" (UID: "ef6fc532-3039-4d07-9c30-c4466de25b41"). InnerVolumeSpecName "kube-api-access-59dx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.297500 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef6fc532-3039-4d07-9c30-c4466de25b41-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ef6fc532-3039-4d07-9c30-c4466de25b41" (UID: "ef6fc532-3039-4d07-9c30-c4466de25b41"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.312994 4970 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ef6fc532-3039-4d07-9c30-c4466de25b41-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.313038 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef6fc532-3039-4d07-9c30-c4466de25b41-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.313049 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59dx8\" (UniqueName: \"kubernetes.io/projected/ef6fc532-3039-4d07-9c30-c4466de25b41-kube-api-access-59dx8\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.373588 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef6fc532-3039-4d07-9c30-c4466de25b41-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef6fc532-3039-4d07-9c30-c4466de25b41" (UID: "ef6fc532-3039-4d07-9c30-c4466de25b41"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.412733 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef6fc532-3039-4d07-9c30-c4466de25b41-config-data" (OuterVolumeSpecName: "config-data") pod "ef6fc532-3039-4d07-9c30-c4466de25b41" (UID: "ef6fc532-3039-4d07-9c30-c4466de25b41"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.415756 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef6fc532-3039-4d07-9c30-c4466de25b41-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.415793 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef6fc532-3039-4d07-9c30-c4466de25b41-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.878699 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ef6fc532-3039-4d07-9c30-c4466de25b41","Type":"ContainerDied","Data":"66fdd374ad9774c1a74dec6b04ee312d5d1a6faf8dda4f63dddf92a560859331"} Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.879024 4970 scope.go:117] "RemoveContainer" containerID="0eb9c69588661801a3a4c37765ea266629d753154e8f5ad224a582e439e06991" Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.878958 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.910599 4970 scope.go:117] "RemoveContainer" containerID="2d4a3b614a59ae19278260adb6c8af8d6848f342e7a3bfe1ac9fcacf07604d0d" Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.919184 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.939106 4970 scope.go:117] "RemoveContainer" containerID="35700f5e0591bcf4491ae3ed28499f0cca93c3bfa6b73e0a50706bd68eb02080" Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.940361 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.962964 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:32:21 crc kubenswrapper[4970]: E1209 12:32:21.963475 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef6fc532-3039-4d07-9c30-c4466de25b41" containerName="ceilometer-central-agent" Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.963494 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef6fc532-3039-4d07-9c30-c4466de25b41" containerName="ceilometer-central-agent" Dec 09 12:32:21 crc kubenswrapper[4970]: E1209 12:32:21.963503 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef6fc532-3039-4d07-9c30-c4466de25b41" containerName="ceilometer-notification-agent" Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.963510 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef6fc532-3039-4d07-9c30-c4466de25b41" containerName="ceilometer-notification-agent" Dec 09 12:32:21 crc kubenswrapper[4970]: E1209 12:32:21.963542 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef6fc532-3039-4d07-9c30-c4466de25b41" containerName="proxy-httpd" Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.963548 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef6fc532-3039-4d07-9c30-c4466de25b41" containerName="proxy-httpd" Dec 09 12:32:21 crc kubenswrapper[4970]: E1209 12:32:21.963555 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef6fc532-3039-4d07-9c30-c4466de25b41" containerName="sg-core" Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.963561 4970 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="ef6fc532-3039-4d07-9c30-c4466de25b41" containerName="sg-core" Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.963769 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef6fc532-3039-4d07-9c30-c4466de25b41" containerName="sg-core" Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.963791 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef6fc532-3039-4d07-9c30-c4466de25b41" containerName="proxy-httpd" Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.963804 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef6fc532-3039-4d07-9c30-c4466de25b41" containerName="ceilometer-central-agent" Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.963823 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef6fc532-3039-4d07-9c30-c4466de25b41" containerName="ceilometer-notification-agent" Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.965828 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.969524 4970 scope.go:117] "RemoveContainer" containerID="1a09ca83293350dbbb6948ab9b2205a7b7f94793cfa6a3792656860bc8345bb9" Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.970507 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.970701 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.970780 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 12:32:21 crc kubenswrapper[4970]: I1209 12:32:21.976540 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:32:22 crc kubenswrapper[4970]: I1209 12:32:22.032124 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/98adf73d-a280-493a-826b-7f947d36f925-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"98adf73d-a280-493a-826b-7f947d36f925\") " pod="openstack/ceilometer-0" Dec 09 12:32:22 crc kubenswrapper[4970]: I1209 12:32:22.032163 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98adf73d-a280-493a-826b-7f947d36f925-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"98adf73d-a280-493a-826b-7f947d36f925\") " pod="openstack/ceilometer-0" Dec 09 12:32:22 crc kubenswrapper[4970]: I1209 12:32:22.032230 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98adf73d-a280-493a-826b-7f947d36f925-run-httpd\") pod \"ceilometer-0\" (UID: \"98adf73d-a280-493a-826b-7f947d36f925\") " pod="openstack/ceilometer-0" Dec 09 12:32:22 crc kubenswrapper[4970]: I1209 12:32:22.032320 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98adf73d-a280-493a-826b-7f947d36f925-config-data\") pod \"ceilometer-0\" (UID: \"98adf73d-a280-493a-826b-7f947d36f925\") " pod="openstack/ceilometer-0" Dec 09 12:32:22 crc kubenswrapper[4970]: I1209 12:32:22.032385 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-hmm7w\" (UniqueName: \"kubernetes.io/projected/98adf73d-a280-493a-826b-7f947d36f925-kube-api-access-hmm7w\") pod \"ceilometer-0\" (UID: \"98adf73d-a280-493a-826b-7f947d36f925\") " pod="openstack/ceilometer-0" Dec 09 12:32:22 crc kubenswrapper[4970]: I1209 12:32:22.032440 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98adf73d-a280-493a-826b-7f947d36f925-scripts\") pod \"ceilometer-0\" (UID: \"98adf73d-a280-493a-826b-7f947d36f925\") " pod="openstack/ceilometer-0" Dec 09 12:32:22 crc kubenswrapper[4970]: I1209 12:32:22.032466 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/98adf73d-a280-493a-826b-7f947d36f925-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"98adf73d-a280-493a-826b-7f947d36f925\") " pod="openstack/ceilometer-0" Dec 09 12:32:22 crc kubenswrapper[4970]: I1209 12:32:22.032486 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98adf73d-a280-493a-826b-7f947d36f925-log-httpd\") pod \"ceilometer-0\" (UID: \"98adf73d-a280-493a-826b-7f947d36f925\") " pod="openstack/ceilometer-0" Dec 09 12:32:22 crc kubenswrapper[4970]: I1209 12:32:22.134822 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98adf73d-a280-493a-826b-7f947d36f925-scripts\") pod \"ceilometer-0\" (UID: \"98adf73d-a280-493a-826b-7f947d36f925\") " pod="openstack/ceilometer-0" Dec 09 12:32:22 crc kubenswrapper[4970]: I1209 12:32:22.134888 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/98adf73d-a280-493a-826b-7f947d36f925-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"98adf73d-a280-493a-826b-7f947d36f925\") " pod="openstack/ceilometer-0" Dec 09 12:32:22 crc kubenswrapper[4970]: I1209 12:32:22.134923 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98adf73d-a280-493a-826b-7f947d36f925-log-httpd\") pod \"ceilometer-0\" (UID: \"98adf73d-a280-493a-826b-7f947d36f925\") " pod="openstack/ceilometer-0" Dec 09 12:32:22 crc kubenswrapper[4970]: I1209 12:32:22.135016 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/98adf73d-a280-493a-826b-7f947d36f925-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"98adf73d-a280-493a-826b-7f947d36f925\") " pod="openstack/ceilometer-0" Dec 09 12:32:22 crc kubenswrapper[4970]: I1209 12:32:22.135037 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98adf73d-a280-493a-826b-7f947d36f925-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"98adf73d-a280-493a-826b-7f947d36f925\") " pod="openstack/ceilometer-0" Dec 09 12:32:22 crc kubenswrapper[4970]: I1209 12:32:22.135071 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98adf73d-a280-493a-826b-7f947d36f925-run-httpd\") pod \"ceilometer-0\" (UID: \"98adf73d-a280-493a-826b-7f947d36f925\") " pod="openstack/ceilometer-0" Dec 09 12:32:22 crc kubenswrapper[4970]: I1209 12:32:22.135141 4970 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98adf73d-a280-493a-826b-7f947d36f925-config-data\") pod \"ceilometer-0\" (UID: \"98adf73d-a280-493a-826b-7f947d36f925\") " pod="openstack/ceilometer-0" Dec 09 12:32:22 crc kubenswrapper[4970]: I1209 12:32:22.135227 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmm7w\" (UniqueName: \"kubernetes.io/projected/98adf73d-a280-493a-826b-7f947d36f925-kube-api-access-hmm7w\") pod \"ceilometer-0\" (UID: \"98adf73d-a280-493a-826b-7f947d36f925\") " pod="openstack/ceilometer-0" Dec 09 12:32:22 crc kubenswrapper[4970]: I1209 12:32:22.135512 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98adf73d-a280-493a-826b-7f947d36f925-log-httpd\") pod \"ceilometer-0\" (UID: \"98adf73d-a280-493a-826b-7f947d36f925\") " pod="openstack/ceilometer-0" Dec 09 12:32:22 crc kubenswrapper[4970]: I1209 12:32:22.135928 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98adf73d-a280-493a-826b-7f947d36f925-run-httpd\") pod \"ceilometer-0\" (UID: \"98adf73d-a280-493a-826b-7f947d36f925\") " pod="openstack/ceilometer-0" Dec 09 12:32:22 crc kubenswrapper[4970]: I1209 12:32:22.139543 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/98adf73d-a280-493a-826b-7f947d36f925-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"98adf73d-a280-493a-826b-7f947d36f925\") " pod="openstack/ceilometer-0" Dec 09 12:32:22 crc kubenswrapper[4970]: I1209 12:32:22.140934 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/98adf73d-a280-493a-826b-7f947d36f925-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"98adf73d-a280-493a-826b-7f947d36f925\") " pod="openstack/ceilometer-0" Dec 09 12:32:22 crc kubenswrapper[4970]: I1209 12:32:22.141831 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98adf73d-a280-493a-826b-7f947d36f925-config-data\") pod \"ceilometer-0\" (UID: \"98adf73d-a280-493a-826b-7f947d36f925\") " pod="openstack/ceilometer-0" Dec 09 12:32:22 crc kubenswrapper[4970]: I1209 12:32:22.142096 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98adf73d-a280-493a-826b-7f947d36f925-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"98adf73d-a280-493a-826b-7f947d36f925\") " pod="openstack/ceilometer-0" Dec 09 12:32:22 crc kubenswrapper[4970]: I1209 12:32:22.143383 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98adf73d-a280-493a-826b-7f947d36f925-scripts\") pod \"ceilometer-0\" (UID: \"98adf73d-a280-493a-826b-7f947d36f925\") " pod="openstack/ceilometer-0" Dec 09 12:32:22 crc kubenswrapper[4970]: I1209 12:32:22.154157 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmm7w\" (UniqueName: \"kubernetes.io/projected/98adf73d-a280-493a-826b-7f947d36f925-kube-api-access-hmm7w\") pod \"ceilometer-0\" (UID: \"98adf73d-a280-493a-826b-7f947d36f925\") " pod="openstack/ceilometer-0" Dec 09 12:32:22 crc kubenswrapper[4970]: I1209 12:32:22.284604 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:32:22 crc kubenswrapper[4970]: I1209 12:32:22.761480 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:32:22 crc kubenswrapper[4970]: I1209 12:32:22.893876 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"98adf73d-a280-493a-826b-7f947d36f925","Type":"ContainerStarted","Data":"050a52f0fd08722948ee3502b4abb56c31ece4da66d7d2de0cd8dfc341c5017b"} Dec 09 12:32:23 crc kubenswrapper[4970]: I1209 12:32:23.796219 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zhfsk" Dec 09 12:32:23 crc kubenswrapper[4970]: I1209 12:32:23.832749 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef6fc532-3039-4d07-9c30-c4466de25b41" path="/var/lib/kubelet/pods/ef6fc532-3039-4d07-9c30-c4466de25b41/volumes" Dec 09 12:32:23 crc kubenswrapper[4970]: I1209 12:32:23.848347 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zhfsk" Dec 09 12:32:23 crc kubenswrapper[4970]: I1209 12:32:23.909117 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"98adf73d-a280-493a-826b-7f947d36f925","Type":"ContainerStarted","Data":"4071e5d5bc01f7d120d8b8ee07a61a987f2ed84624696e9711937084d42111cd"} Dec 09 12:32:24 crc kubenswrapper[4970]: I1209 12:32:24.035156 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zhfsk"] Dec 09 12:32:24 crc kubenswrapper[4970]: I1209 12:32:24.922498 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zhfsk" podUID="01c6f405-6870-472c-a861-86224e0c7b25" containerName="registry-server" containerID="cri-o://37b457a84f4417fa358106d4299f790363b904f7bd2e0eb34de6c8dd23494a40" gracePeriod=2 Dec 09 12:32:24 crc kubenswrapper[4970]: I1209 12:32:24.922950 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"98adf73d-a280-493a-826b-7f947d36f925","Type":"ContainerStarted","Data":"612c9f961148a81ef596ae33eb71f76685aff5d10f13d91d972463183fdf9d02"} Dec 09 12:32:25 crc kubenswrapper[4970]: I1209 12:32:25.041602 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2q7bl" Dec 09 12:32:25 crc kubenswrapper[4970]: I1209 12:32:25.100815 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2q7bl" Dec 09 12:32:25 crc kubenswrapper[4970]: I1209 12:32:25.410657 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zhfsk" Dec 09 12:32:25 crc kubenswrapper[4970]: I1209 12:32:25.521264 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01c6f405-6870-472c-a861-86224e0c7b25-utilities\") pod \"01c6f405-6870-472c-a861-86224e0c7b25\" (UID: \"01c6f405-6870-472c-a861-86224e0c7b25\") " Dec 09 12:32:25 crc kubenswrapper[4970]: I1209 12:32:25.521320 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01c6f405-6870-472c-a861-86224e0c7b25-catalog-content\") pod \"01c6f405-6870-472c-a861-86224e0c7b25\" (UID: \"01c6f405-6870-472c-a861-86224e0c7b25\") " Dec 09 12:32:25 crc kubenswrapper[4970]: I1209 12:32:25.521499 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cbmz\" (UniqueName: \"kubernetes.io/projected/01c6f405-6870-472c-a861-86224e0c7b25-kube-api-access-2cbmz\") pod \"01c6f405-6870-472c-a861-86224e0c7b25\" (UID: \"01c6f405-6870-472c-a861-86224e0c7b25\") " Dec 09 12:32:25 crc kubenswrapper[4970]: I1209 12:32:25.521807 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01c6f405-6870-472c-a861-86224e0c7b25-utilities" (OuterVolumeSpecName: "utilities") pod "01c6f405-6870-472c-a861-86224e0c7b25" (UID: "01c6f405-6870-472c-a861-86224e0c7b25"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:32:25 crc kubenswrapper[4970]: I1209 12:32:25.522558 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01c6f405-6870-472c-a861-86224e0c7b25-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:25 crc kubenswrapper[4970]: I1209 12:32:25.526467 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01c6f405-6870-472c-a861-86224e0c7b25-kube-api-access-2cbmz" (OuterVolumeSpecName: "kube-api-access-2cbmz") pod "01c6f405-6870-472c-a861-86224e0c7b25" (UID: "01c6f405-6870-472c-a861-86224e0c7b25"). InnerVolumeSpecName "kube-api-access-2cbmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:32:25 crc kubenswrapper[4970]: I1209 12:32:25.575466 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01c6f405-6870-472c-a861-86224e0c7b25-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "01c6f405-6870-472c-a861-86224e0c7b25" (UID: "01c6f405-6870-472c-a861-86224e0c7b25"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:32:25 crc kubenswrapper[4970]: I1209 12:32:25.625108 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01c6f405-6870-472c-a861-86224e0c7b25-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:25 crc kubenswrapper[4970]: I1209 12:32:25.625435 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cbmz\" (UniqueName: \"kubernetes.io/projected/01c6f405-6870-472c-a861-86224e0c7b25-kube-api-access-2cbmz\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:25 crc kubenswrapper[4970]: I1209 12:32:25.935280 4970 generic.go:334] "Generic (PLEG): container finished" podID="01c6f405-6870-472c-a861-86224e0c7b25" containerID="37b457a84f4417fa358106d4299f790363b904f7bd2e0eb34de6c8dd23494a40" exitCode=0 Dec 09 12:32:25 crc kubenswrapper[4970]: I1209 12:32:25.935354 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zhfsk" Dec 09 12:32:25 crc kubenswrapper[4970]: I1209 12:32:25.935394 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zhfsk" event={"ID":"01c6f405-6870-472c-a861-86224e0c7b25","Type":"ContainerDied","Data":"37b457a84f4417fa358106d4299f790363b904f7bd2e0eb34de6c8dd23494a40"} Dec 09 12:32:25 crc kubenswrapper[4970]: I1209 12:32:25.935465 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zhfsk" event={"ID":"01c6f405-6870-472c-a861-86224e0c7b25","Type":"ContainerDied","Data":"78bcfe73c3b3d95efcafb104a7b4cb8f0b01d356570d25e21b76fe3bd2c847d5"} Dec 09 12:32:25 crc kubenswrapper[4970]: I1209 12:32:25.935492 4970 scope.go:117] "RemoveContainer" containerID="37b457a84f4417fa358106d4299f790363b904f7bd2e0eb34de6c8dd23494a40" Dec 09 12:32:25 crc kubenswrapper[4970]: I1209 12:32:25.938133 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"98adf73d-a280-493a-826b-7f947d36f925","Type":"ContainerStarted","Data":"74fab4c48b118e7a5365997a0b4a9d9e9370b11b0e15de63987a2ce9809eea95"} Dec 09 12:32:25 crc kubenswrapper[4970]: I1209 12:32:25.966064 4970 scope.go:117] "RemoveContainer" containerID="5ee9113b3f154ca98e4796aeeb3d562bfa261794cc1f90b3cbfb8f1977fa1f3b" Dec 09 12:32:25 crc kubenswrapper[4970]: I1209 12:32:25.980004 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zhfsk"] Dec 09 12:32:25 crc kubenswrapper[4970]: I1209 12:32:25.993834 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zhfsk"] Dec 09 12:32:26 crc kubenswrapper[4970]: I1209 12:32:26.007870 4970 scope.go:117] "RemoveContainer" containerID="41f6cdc74f9436b632d67160a707bd7f4c7b398e933b2a11b8c6c9e7937f02f6" Dec 09 12:32:26 crc kubenswrapper[4970]: I1209 12:32:26.053513 4970 scope.go:117] "RemoveContainer" containerID="37b457a84f4417fa358106d4299f790363b904f7bd2e0eb34de6c8dd23494a40" Dec 09 12:32:26 crc kubenswrapper[4970]: E1209 12:32:26.054059 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37b457a84f4417fa358106d4299f790363b904f7bd2e0eb34de6c8dd23494a40\": container with ID starting with 37b457a84f4417fa358106d4299f790363b904f7bd2e0eb34de6c8dd23494a40 not found: ID does not exist" containerID="37b457a84f4417fa358106d4299f790363b904f7bd2e0eb34de6c8dd23494a40" Dec 09 12:32:26 crc 
kubenswrapper[4970]: I1209 12:32:26.054108 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37b457a84f4417fa358106d4299f790363b904f7bd2e0eb34de6c8dd23494a40"} err="failed to get container status \"37b457a84f4417fa358106d4299f790363b904f7bd2e0eb34de6c8dd23494a40\": rpc error: code = NotFound desc = could not find container \"37b457a84f4417fa358106d4299f790363b904f7bd2e0eb34de6c8dd23494a40\": container with ID starting with 37b457a84f4417fa358106d4299f790363b904f7bd2e0eb34de6c8dd23494a40 not found: ID does not exist" Dec 09 12:32:26 crc kubenswrapper[4970]: I1209 12:32:26.054134 4970 scope.go:117] "RemoveContainer" containerID="5ee9113b3f154ca98e4796aeeb3d562bfa261794cc1f90b3cbfb8f1977fa1f3b" Dec 09 12:32:26 crc kubenswrapper[4970]: E1209 12:32:26.054402 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ee9113b3f154ca98e4796aeeb3d562bfa261794cc1f90b3cbfb8f1977fa1f3b\": container with ID starting with 5ee9113b3f154ca98e4796aeeb3d562bfa261794cc1f90b3cbfb8f1977fa1f3b not found: ID does not exist" containerID="5ee9113b3f154ca98e4796aeeb3d562bfa261794cc1f90b3cbfb8f1977fa1f3b" Dec 09 12:32:26 crc kubenswrapper[4970]: I1209 12:32:26.054429 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ee9113b3f154ca98e4796aeeb3d562bfa261794cc1f90b3cbfb8f1977fa1f3b"} err="failed to get container status \"5ee9113b3f154ca98e4796aeeb3d562bfa261794cc1f90b3cbfb8f1977fa1f3b\": rpc error: code = NotFound desc = could not find container \"5ee9113b3f154ca98e4796aeeb3d562bfa261794cc1f90b3cbfb8f1977fa1f3b\": container with ID starting with 5ee9113b3f154ca98e4796aeeb3d562bfa261794cc1f90b3cbfb8f1977fa1f3b not found: ID does not exist" Dec 09 12:32:26 crc kubenswrapper[4970]: I1209 12:32:26.054449 4970 scope.go:117] "RemoveContainer" containerID="41f6cdc74f9436b632d67160a707bd7f4c7b398e933b2a11b8c6c9e7937f02f6" Dec 09 12:32:26 crc kubenswrapper[4970]: E1209 12:32:26.054775 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41f6cdc74f9436b632d67160a707bd7f4c7b398e933b2a11b8c6c9e7937f02f6\": container with ID starting with 41f6cdc74f9436b632d67160a707bd7f4c7b398e933b2a11b8c6c9e7937f02f6 not found: ID does not exist" containerID="41f6cdc74f9436b632d67160a707bd7f4c7b398e933b2a11b8c6c9e7937f02f6" Dec 09 12:32:26 crc kubenswrapper[4970]: I1209 12:32:26.054805 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41f6cdc74f9436b632d67160a707bd7f4c7b398e933b2a11b8c6c9e7937f02f6"} err="failed to get container status \"41f6cdc74f9436b632d67160a707bd7f4c7b398e933b2a11b8c6c9e7937f02f6\": rpc error: code = NotFound desc = could not find container \"41f6cdc74f9436b632d67160a707bd7f4c7b398e933b2a11b8c6c9e7937f02f6\": container with ID starting with 41f6cdc74f9436b632d67160a707bd7f4c7b398e933b2a11b8c6c9e7937f02f6 not found: ID does not exist" Dec 09 12:32:26 crc kubenswrapper[4970]: I1209 12:32:26.832504 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-kg6zl"] Dec 09 12:32:26 crc kubenswrapper[4970]: I1209 12:32:26.849071 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-kg6zl"] Dec 09 12:32:26 crc kubenswrapper[4970]: I1209 12:32:26.937097 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-snjvx"] Dec 09 12:32:26 crc kubenswrapper[4970]: E1209 
12:32:26.937751 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01c6f405-6870-472c-a861-86224e0c7b25" containerName="extract-utilities" Dec 09 12:32:26 crc kubenswrapper[4970]: I1209 12:32:26.937769 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="01c6f405-6870-472c-a861-86224e0c7b25" containerName="extract-utilities" Dec 09 12:32:26 crc kubenswrapper[4970]: E1209 12:32:26.937805 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01c6f405-6870-472c-a861-86224e0c7b25" containerName="registry-server" Dec 09 12:32:26 crc kubenswrapper[4970]: I1209 12:32:26.937815 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="01c6f405-6870-472c-a861-86224e0c7b25" containerName="registry-server" Dec 09 12:32:26 crc kubenswrapper[4970]: E1209 12:32:26.937840 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01c6f405-6870-472c-a861-86224e0c7b25" containerName="extract-content" Dec 09 12:32:26 crc kubenswrapper[4970]: I1209 12:32:26.937848 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="01c6f405-6870-472c-a861-86224e0c7b25" containerName="extract-content" Dec 09 12:32:26 crc kubenswrapper[4970]: I1209 12:32:26.938113 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="01c6f405-6870-472c-a861-86224e0c7b25" containerName="registry-server" Dec 09 12:32:26 crc kubenswrapper[4970]: I1209 12:32:26.939180 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-snjvx" Dec 09 12:32:26 crc kubenswrapper[4970]: I1209 12:32:26.952471 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5gl2\" (UniqueName: \"kubernetes.io/projected/cda5b8c0-51ad-4a63-bf1d-e3546f098ad3-kube-api-access-w5gl2\") pod \"heat-db-sync-snjvx\" (UID: \"cda5b8c0-51ad-4a63-bf1d-e3546f098ad3\") " pod="openstack/heat-db-sync-snjvx" Dec 09 12:32:26 crc kubenswrapper[4970]: I1209 12:32:26.952571 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cda5b8c0-51ad-4a63-bf1d-e3546f098ad3-config-data\") pod \"heat-db-sync-snjvx\" (UID: \"cda5b8c0-51ad-4a63-bf1d-e3546f098ad3\") " pod="openstack/heat-db-sync-snjvx" Dec 09 12:32:26 crc kubenswrapper[4970]: I1209 12:32:26.952601 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cda5b8c0-51ad-4a63-bf1d-e3546f098ad3-combined-ca-bundle\") pod \"heat-db-sync-snjvx\" (UID: \"cda5b8c0-51ad-4a63-bf1d-e3546f098ad3\") " pod="openstack/heat-db-sync-snjvx" Dec 09 12:32:26 crc kubenswrapper[4970]: I1209 12:32:26.977208 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-snjvx"] Dec 09 12:32:27 crc kubenswrapper[4970]: I1209 12:32:27.049872 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2q7bl"] Dec 09 12:32:27 crc kubenswrapper[4970]: I1209 12:32:27.050110 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2q7bl" podUID="56ac190c-b9e3-4453-9581-58141f4f59cc" containerName="registry-server" containerID="cri-o://bd1c896170de733dfa150f69420a4458bddfe753cb935cf906e0d65bb72c4498" gracePeriod=2 Dec 09 12:32:27 crc kubenswrapper[4970]: I1209 12:32:27.055579 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-w5gl2\" (UniqueName: \"kubernetes.io/projected/cda5b8c0-51ad-4a63-bf1d-e3546f098ad3-kube-api-access-w5gl2\") pod \"heat-db-sync-snjvx\" (UID: \"cda5b8c0-51ad-4a63-bf1d-e3546f098ad3\") " pod="openstack/heat-db-sync-snjvx" Dec 09 12:32:27 crc kubenswrapper[4970]: I1209 12:32:27.055657 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cda5b8c0-51ad-4a63-bf1d-e3546f098ad3-config-data\") pod \"heat-db-sync-snjvx\" (UID: \"cda5b8c0-51ad-4a63-bf1d-e3546f098ad3\") " pod="openstack/heat-db-sync-snjvx" Dec 09 12:32:27 crc kubenswrapper[4970]: I1209 12:32:27.055691 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cda5b8c0-51ad-4a63-bf1d-e3546f098ad3-combined-ca-bundle\") pod \"heat-db-sync-snjvx\" (UID: \"cda5b8c0-51ad-4a63-bf1d-e3546f098ad3\") " pod="openstack/heat-db-sync-snjvx" Dec 09 12:32:27 crc kubenswrapper[4970]: I1209 12:32:27.073809 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cda5b8c0-51ad-4a63-bf1d-e3546f098ad3-config-data\") pod \"heat-db-sync-snjvx\" (UID: \"cda5b8c0-51ad-4a63-bf1d-e3546f098ad3\") " pod="openstack/heat-db-sync-snjvx" Dec 09 12:32:27 crc kubenswrapper[4970]: I1209 12:32:27.074421 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cda5b8c0-51ad-4a63-bf1d-e3546f098ad3-combined-ca-bundle\") pod \"heat-db-sync-snjvx\" (UID: \"cda5b8c0-51ad-4a63-bf1d-e3546f098ad3\") " pod="openstack/heat-db-sync-snjvx" Dec 09 12:32:27 crc kubenswrapper[4970]: I1209 12:32:27.084874 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5gl2\" (UniqueName: \"kubernetes.io/projected/cda5b8c0-51ad-4a63-bf1d-e3546f098ad3-kube-api-access-w5gl2\") pod \"heat-db-sync-snjvx\" (UID: \"cda5b8c0-51ad-4a63-bf1d-e3546f098ad3\") " pod="openstack/heat-db-sync-snjvx" Dec 09 12:32:27 crc kubenswrapper[4970]: I1209 12:32:27.355467 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-snjvx" Dec 09 12:32:27 crc kubenswrapper[4970]: I1209 12:32:27.545381 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2q7bl" Dec 09 12:32:27 crc kubenswrapper[4970]: I1209 12:32:27.668408 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56ac190c-b9e3-4453-9581-58141f4f59cc-catalog-content\") pod \"56ac190c-b9e3-4453-9581-58141f4f59cc\" (UID: \"56ac190c-b9e3-4453-9581-58141f4f59cc\") " Dec 09 12:32:27 crc kubenswrapper[4970]: I1209 12:32:27.668478 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqsh6\" (UniqueName: \"kubernetes.io/projected/56ac190c-b9e3-4453-9581-58141f4f59cc-kube-api-access-zqsh6\") pod \"56ac190c-b9e3-4453-9581-58141f4f59cc\" (UID: \"56ac190c-b9e3-4453-9581-58141f4f59cc\") " Dec 09 12:32:27 crc kubenswrapper[4970]: I1209 12:32:27.668583 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56ac190c-b9e3-4453-9581-58141f4f59cc-utilities\") pod \"56ac190c-b9e3-4453-9581-58141f4f59cc\" (UID: \"56ac190c-b9e3-4453-9581-58141f4f59cc\") " Dec 09 12:32:27 crc kubenswrapper[4970]: I1209 12:32:27.669625 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56ac190c-b9e3-4453-9581-58141f4f59cc-utilities" (OuterVolumeSpecName: "utilities") pod "56ac190c-b9e3-4453-9581-58141f4f59cc" (UID: "56ac190c-b9e3-4453-9581-58141f4f59cc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:32:27 crc kubenswrapper[4970]: I1209 12:32:27.674620 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56ac190c-b9e3-4453-9581-58141f4f59cc-kube-api-access-zqsh6" (OuterVolumeSpecName: "kube-api-access-zqsh6") pod "56ac190c-b9e3-4453-9581-58141f4f59cc" (UID: "56ac190c-b9e3-4453-9581-58141f4f59cc"). InnerVolumeSpecName "kube-api-access-zqsh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:32:27 crc kubenswrapper[4970]: I1209 12:32:27.776228 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56ac190c-b9e3-4453-9581-58141f4f59cc-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:27 crc kubenswrapper[4970]: I1209 12:32:27.776570 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqsh6\" (UniqueName: \"kubernetes.io/projected/56ac190c-b9e3-4453-9581-58141f4f59cc-kube-api-access-zqsh6\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:27 crc kubenswrapper[4970]: I1209 12:32:27.795788 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56ac190c-b9e3-4453-9581-58141f4f59cc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "56ac190c-b9e3-4453-9581-58141f4f59cc" (UID: "56ac190c-b9e3-4453-9581-58141f4f59cc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:32:27 crc kubenswrapper[4970]: I1209 12:32:27.830703 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01c6f405-6870-472c-a861-86224e0c7b25" path="/var/lib/kubelet/pods/01c6f405-6870-472c-a861-86224e0c7b25/volumes" Dec 09 12:32:27 crc kubenswrapper[4970]: I1209 12:32:27.831681 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3015f85e-5d86-4906-9d9a-8389330bcb82" path="/var/lib/kubelet/pods/3015f85e-5d86-4906-9d9a-8389330bcb82/volumes" Dec 09 12:32:27 crc kubenswrapper[4970]: I1209 12:32:27.878787 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56ac190c-b9e3-4453-9581-58141f4f59cc-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:27 crc kubenswrapper[4970]: I1209 12:32:27.903720 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-snjvx"] Dec 09 12:32:27 crc kubenswrapper[4970]: I1209 12:32:27.974323 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-snjvx" event={"ID":"cda5b8c0-51ad-4a63-bf1d-e3546f098ad3","Type":"ContainerStarted","Data":"01aa6de71c7653c261bbeb22b5ac4ddd81b12c620340319d028a3d95ceef17ba"} Dec 09 12:32:27 crc kubenswrapper[4970]: I1209 12:32:27.976844 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"98adf73d-a280-493a-826b-7f947d36f925","Type":"ContainerStarted","Data":"7773e5ef9c06063c29f5d2c7802f191eca7f6b80b4653350f3fec7712626ca43"} Dec 09 12:32:27 crc kubenswrapper[4970]: I1209 12:32:27.977012 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 12:32:27 crc kubenswrapper[4970]: I1209 12:32:27.979716 4970 generic.go:334] "Generic (PLEG): container finished" podID="56ac190c-b9e3-4453-9581-58141f4f59cc" containerID="bd1c896170de733dfa150f69420a4458bddfe753cb935cf906e0d65bb72c4498" exitCode=0 Dec 09 12:32:27 crc kubenswrapper[4970]: I1209 12:32:27.979756 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2q7bl" event={"ID":"56ac190c-b9e3-4453-9581-58141f4f59cc","Type":"ContainerDied","Data":"bd1c896170de733dfa150f69420a4458bddfe753cb935cf906e0d65bb72c4498"} Dec 09 12:32:27 crc kubenswrapper[4970]: I1209 12:32:27.979783 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2q7bl" event={"ID":"56ac190c-b9e3-4453-9581-58141f4f59cc","Type":"ContainerDied","Data":"7988d46ab587f6dc5108584a799e4404bd720618e722f4fdadeadfb949abf864"} Dec 09 12:32:27 crc kubenswrapper[4970]: I1209 12:32:27.979805 4970 scope.go:117] "RemoveContainer" containerID="bd1c896170de733dfa150f69420a4458bddfe753cb935cf906e0d65bb72c4498" Dec 09 12:32:27 crc kubenswrapper[4970]: I1209 12:32:27.980021 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2q7bl" Dec 09 12:32:28 crc kubenswrapper[4970]: E1209 12:32:28.011537 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 12:32:28 crc kubenswrapper[4970]: E1209 12:32:28.011588 4970 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 12:32:28 crc kubenswrapper[4970]: E1209 12:32:28.011692 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w5gl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-snjvx_openstack(cda5b8c0-51ad-4a63-bf1d-e3546f098ad3): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Dec 09 12:32:28 crc kubenswrapper[4970]: I1209 12:32:28.014490 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.858487883 podStartE2EDuration="7.014469872s" podCreationTimestamp="2025-12-09 12:32:21 +0000 UTC" firstStartedPulling="2025-12-09 12:32:22.767169107 +0000 UTC m=+1555.327650158" lastFinishedPulling="2025-12-09 12:32:26.923151086 +0000 UTC m=+1559.483632147" observedRunningTime="2025-12-09 12:32:28.000407799 +0000 UTC m=+1560.560888870" watchObservedRunningTime="2025-12-09 12:32:28.014469872 +0000 UTC m=+1560.574950923" Dec 09 12:32:28 crc kubenswrapper[4970]: E1209 12:32:28.014894 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:32:28 crc kubenswrapper[4970]: I1209 12:32:28.032856 4970 scope.go:117] "RemoveContainer" containerID="618fb26f4529f82b275ee8dc38a65f7108ae72c29a7560f7c7ab61a4f877dc10" Dec 09 12:32:28 crc kubenswrapper[4970]: I1209 12:32:28.042222 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2q7bl"] Dec 09 12:32:28 crc kubenswrapper[4970]: I1209 12:32:28.053918 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2q7bl"] Dec 09 12:32:28 crc kubenswrapper[4970]: I1209 12:32:28.066950 4970 scope.go:117] "RemoveContainer" containerID="836b124f06d934743643bb576f1ca2f500a57b392fedc81d38b6abfc6ce3e2f9" Dec 09 12:32:28 crc kubenswrapper[4970]: I1209 12:32:28.106397 4970 scope.go:117] "RemoveContainer" containerID="bd1c896170de733dfa150f69420a4458bddfe753cb935cf906e0d65bb72c4498" Dec 09 12:32:28 crc kubenswrapper[4970]: E1209 12:32:28.106896 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd1c896170de733dfa150f69420a4458bddfe753cb935cf906e0d65bb72c4498\": container with ID starting with bd1c896170de733dfa150f69420a4458bddfe753cb935cf906e0d65bb72c4498 not found: ID does not exist" containerID="bd1c896170de733dfa150f69420a4458bddfe753cb935cf906e0d65bb72c4498" Dec 09 12:32:28 crc kubenswrapper[4970]: I1209 12:32:28.106989 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd1c896170de733dfa150f69420a4458bddfe753cb935cf906e0d65bb72c4498"} err="failed to get container status \"bd1c896170de733dfa150f69420a4458bddfe753cb935cf906e0d65bb72c4498\": rpc error: code = NotFound desc = could not find container \"bd1c896170de733dfa150f69420a4458bddfe753cb935cf906e0d65bb72c4498\": container with ID starting with bd1c896170de733dfa150f69420a4458bddfe753cb935cf906e0d65bb72c4498 not found: ID does not exist" Dec 09 12:32:28 crc kubenswrapper[4970]: I1209 12:32:28.107061 4970 scope.go:117] "RemoveContainer" containerID="618fb26f4529f82b275ee8dc38a65f7108ae72c29a7560f7c7ab61a4f877dc10" Dec 09 12:32:28 crc kubenswrapper[4970]: E1209 12:32:28.107588 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"618fb26f4529f82b275ee8dc38a65f7108ae72c29a7560f7c7ab61a4f877dc10\": container with ID starting with 618fb26f4529f82b275ee8dc38a65f7108ae72c29a7560f7c7ab61a4f877dc10 not found: ID does not exist" containerID="618fb26f4529f82b275ee8dc38a65f7108ae72c29a7560f7c7ab61a4f877dc10" Dec 09 12:32:28 crc kubenswrapper[4970]: I1209 12:32:28.107665 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"618fb26f4529f82b275ee8dc38a65f7108ae72c29a7560f7c7ab61a4f877dc10"} err="failed to get container status \"618fb26f4529f82b275ee8dc38a65f7108ae72c29a7560f7c7ab61a4f877dc10\": rpc error: code = NotFound desc = could not find container \"618fb26f4529f82b275ee8dc38a65f7108ae72c29a7560f7c7ab61a4f877dc10\": container with ID starting with 618fb26f4529f82b275ee8dc38a65f7108ae72c29a7560f7c7ab61a4f877dc10 not found: ID does not exist" Dec 09 12:32:28 crc kubenswrapper[4970]: I1209 12:32:28.107744 4970 scope.go:117] "RemoveContainer" containerID="836b124f06d934743643bb576f1ca2f500a57b392fedc81d38b6abfc6ce3e2f9" Dec 09 12:32:28 crc kubenswrapper[4970]: E1209 12:32:28.108111 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"836b124f06d934743643bb576f1ca2f500a57b392fedc81d38b6abfc6ce3e2f9\": container with ID starting with 836b124f06d934743643bb576f1ca2f500a57b392fedc81d38b6abfc6ce3e2f9 not found: ID does not exist" containerID="836b124f06d934743643bb576f1ca2f500a57b392fedc81d38b6abfc6ce3e2f9" Dec 09 12:32:28 crc kubenswrapper[4970]: I1209 12:32:28.108171 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"836b124f06d934743643bb576f1ca2f500a57b392fedc81d38b6abfc6ce3e2f9"} err="failed to get container status \"836b124f06d934743643bb576f1ca2f500a57b392fedc81d38b6abfc6ce3e2f9\": rpc error: code = NotFound desc = could not find container \"836b124f06d934743643bb576f1ca2f500a57b392fedc81d38b6abfc6ce3e2f9\": container with ID starting with 836b124f06d934743643bb576f1ca2f500a57b392fedc81d38b6abfc6ce3e2f9 not found: ID does not exist" Dec 09 12:32:28 crc kubenswrapper[4970]: I1209 12:32:28.262542 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 09 12:32:28 crc kubenswrapper[4970]: E1209 12:32:28.995504 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:32:29 crc kubenswrapper[4970]: E1209 12:32:29.220195 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56ac190c_b9e3_4453_9581_58141f4f59cc.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56ac190c_b9e3_4453_9581_58141f4f59cc.slice/crio-7988d46ab587f6dc5108584a799e4404bd720618e722f4fdadeadfb949abf864\": RecentStats: unable to find data in memory cache]" Dec 09 12:32:29 crc kubenswrapper[4970]: I1209 12:32:29.300891 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 09 12:32:29 crc kubenswrapper[4970]: I1209 12:32:29.828300 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="56ac190c-b9e3-4453-9581-58141f4f59cc" path="/var/lib/kubelet/pods/56ac190c-b9e3-4453-9581-58141f4f59cc/volumes" Dec 09 12:32:29 crc kubenswrapper[4970]: I1209 12:32:29.868367 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:32:30 crc kubenswrapper[4970]: I1209 12:32:30.004113 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="98adf73d-a280-493a-826b-7f947d36f925" containerName="ceilometer-central-agent" containerID="cri-o://4071e5d5bc01f7d120d8b8ee07a61a987f2ed84624696e9711937084d42111cd" gracePeriod=30 Dec 09 12:32:30 crc kubenswrapper[4970]: I1209 12:32:30.004664 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="98adf73d-a280-493a-826b-7f947d36f925" containerName="proxy-httpd" containerID="cri-o://7773e5ef9c06063c29f5d2c7802f191eca7f6b80b4653350f3fec7712626ca43" gracePeriod=30 Dec 09 12:32:30 crc kubenswrapper[4970]: I1209 12:32:30.004713 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="98adf73d-a280-493a-826b-7f947d36f925" containerName="sg-core" containerID="cri-o://74fab4c48b118e7a5365997a0b4a9d9e9370b11b0e15de63987a2ce9809eea95" gracePeriod=30 Dec 09 12:32:30 crc kubenswrapper[4970]: I1209 12:32:30.004744 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="98adf73d-a280-493a-826b-7f947d36f925" containerName="ceilometer-notification-agent" containerID="cri-o://612c9f961148a81ef596ae33eb71f76685aff5d10f13d91d972463183fdf9d02" gracePeriod=30 Dec 09 12:32:30 crc kubenswrapper[4970]: I1209 12:32:30.437303 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 12:32:31 crc kubenswrapper[4970]: I1209 12:32:31.017834 4970 generic.go:334] "Generic (PLEG): container finished" podID="98adf73d-a280-493a-826b-7f947d36f925" containerID="7773e5ef9c06063c29f5d2c7802f191eca7f6b80b4653350f3fec7712626ca43" exitCode=0 Dec 09 12:32:31 crc kubenswrapper[4970]: I1209 12:32:31.018652 4970 generic.go:334] "Generic (PLEG): container finished" podID="98adf73d-a280-493a-826b-7f947d36f925" containerID="74fab4c48b118e7a5365997a0b4a9d9e9370b11b0e15de63987a2ce9809eea95" exitCode=2 Dec 09 12:32:31 crc kubenswrapper[4970]: I1209 12:32:31.018733 4970 generic.go:334] "Generic (PLEG): container finished" podID="98adf73d-a280-493a-826b-7f947d36f925" containerID="612c9f961148a81ef596ae33eb71f76685aff5d10f13d91d972463183fdf9d02" exitCode=0 Dec 09 12:32:31 crc kubenswrapper[4970]: I1209 12:32:31.018795 4970 generic.go:334] "Generic (PLEG): container finished" podID="98adf73d-a280-493a-826b-7f947d36f925" containerID="4071e5d5bc01f7d120d8b8ee07a61a987f2ed84624696e9711937084d42111cd" exitCode=0 Dec 09 12:32:31 crc kubenswrapper[4970]: I1209 12:32:31.017914 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"98adf73d-a280-493a-826b-7f947d36f925","Type":"ContainerDied","Data":"7773e5ef9c06063c29f5d2c7802f191eca7f6b80b4653350f3fec7712626ca43"} Dec 09 12:32:31 crc kubenswrapper[4970]: I1209 12:32:31.018936 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"98adf73d-a280-493a-826b-7f947d36f925","Type":"ContainerDied","Data":"74fab4c48b118e7a5365997a0b4a9d9e9370b11b0e15de63987a2ce9809eea95"} Dec 09 12:32:31 crc kubenswrapper[4970]: I1209 12:32:31.018997 4970 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ceilometer-0" event={"ID":"98adf73d-a280-493a-826b-7f947d36f925","Type":"ContainerDied","Data":"612c9f961148a81ef596ae33eb71f76685aff5d10f13d91d972463183fdf9d02"} Dec 09 12:32:31 crc kubenswrapper[4970]: I1209 12:32:31.019054 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"98adf73d-a280-493a-826b-7f947d36f925","Type":"ContainerDied","Data":"4071e5d5bc01f7d120d8b8ee07a61a987f2ed84624696e9711937084d42111cd"} Dec 09 12:32:31 crc kubenswrapper[4970]: I1209 12:32:31.553284 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:32:31 crc kubenswrapper[4970]: I1209 12:32:31.575714 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98adf73d-a280-493a-826b-7f947d36f925-log-httpd\") pod \"98adf73d-a280-493a-826b-7f947d36f925\" (UID: \"98adf73d-a280-493a-826b-7f947d36f925\") " Dec 09 12:32:31 crc kubenswrapper[4970]: I1209 12:32:31.575794 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/98adf73d-a280-493a-826b-7f947d36f925-ceilometer-tls-certs\") pod \"98adf73d-a280-493a-826b-7f947d36f925\" (UID: \"98adf73d-a280-493a-826b-7f947d36f925\") " Dec 09 12:32:31 crc kubenswrapper[4970]: I1209 12:32:31.575833 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98adf73d-a280-493a-826b-7f947d36f925-run-httpd\") pod \"98adf73d-a280-493a-826b-7f947d36f925\" (UID: \"98adf73d-a280-493a-826b-7f947d36f925\") " Dec 09 12:32:31 crc kubenswrapper[4970]: I1209 12:32:31.575895 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/98adf73d-a280-493a-826b-7f947d36f925-sg-core-conf-yaml\") pod \"98adf73d-a280-493a-826b-7f947d36f925\" (UID: \"98adf73d-a280-493a-826b-7f947d36f925\") " Dec 09 12:32:31 crc kubenswrapper[4970]: I1209 12:32:31.575952 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98adf73d-a280-493a-826b-7f947d36f925-config-data\") pod \"98adf73d-a280-493a-826b-7f947d36f925\" (UID: \"98adf73d-a280-493a-826b-7f947d36f925\") " Dec 09 12:32:31 crc kubenswrapper[4970]: I1209 12:32:31.576010 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmm7w\" (UniqueName: \"kubernetes.io/projected/98adf73d-a280-493a-826b-7f947d36f925-kube-api-access-hmm7w\") pod \"98adf73d-a280-493a-826b-7f947d36f925\" (UID: \"98adf73d-a280-493a-826b-7f947d36f925\") " Dec 09 12:32:31 crc kubenswrapper[4970]: I1209 12:32:31.576203 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98adf73d-a280-493a-826b-7f947d36f925-scripts\") pod \"98adf73d-a280-493a-826b-7f947d36f925\" (UID: \"98adf73d-a280-493a-826b-7f947d36f925\") " Dec 09 12:32:31 crc kubenswrapper[4970]: I1209 12:32:31.576257 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98adf73d-a280-493a-826b-7f947d36f925-combined-ca-bundle\") pod \"98adf73d-a280-493a-826b-7f947d36f925\" (UID: \"98adf73d-a280-493a-826b-7f947d36f925\") " Dec 09 12:32:31 crc kubenswrapper[4970]: I1209 12:32:31.576757 4970 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98adf73d-a280-493a-826b-7f947d36f925-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "98adf73d-a280-493a-826b-7f947d36f925" (UID: "98adf73d-a280-493a-826b-7f947d36f925"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:32:31 crc kubenswrapper[4970]: I1209 12:32:31.576847 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98adf73d-a280-493a-826b-7f947d36f925-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "98adf73d-a280-493a-826b-7f947d36f925" (UID: "98adf73d-a280-493a-826b-7f947d36f925"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:32:31 crc kubenswrapper[4970]: I1209 12:32:31.583432 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98adf73d-a280-493a-826b-7f947d36f925-scripts" (OuterVolumeSpecName: "scripts") pod "98adf73d-a280-493a-826b-7f947d36f925" (UID: "98adf73d-a280-493a-826b-7f947d36f925"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:32:31 crc kubenswrapper[4970]: I1209 12:32:31.627075 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98adf73d-a280-493a-826b-7f947d36f925-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "98adf73d-a280-493a-826b-7f947d36f925" (UID: "98adf73d-a280-493a-826b-7f947d36f925"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:32:31 crc kubenswrapper[4970]: I1209 12:32:31.627701 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98adf73d-a280-493a-826b-7f947d36f925-kube-api-access-hmm7w" (OuterVolumeSpecName: "kube-api-access-hmm7w") pod "98adf73d-a280-493a-826b-7f947d36f925" (UID: "98adf73d-a280-493a-826b-7f947d36f925"). InnerVolumeSpecName "kube-api-access-hmm7w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:32:31 crc kubenswrapper[4970]: I1209 12:32:31.682227 4970 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98adf73d-a280-493a-826b-7f947d36f925-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:31 crc kubenswrapper[4970]: I1209 12:32:31.682279 4970 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/98adf73d-a280-493a-826b-7f947d36f925-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:31 crc kubenswrapper[4970]: I1209 12:32:31.682291 4970 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/98adf73d-a280-493a-826b-7f947d36f925-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:31 crc kubenswrapper[4970]: I1209 12:32:31.682304 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmm7w\" (UniqueName: \"kubernetes.io/projected/98adf73d-a280-493a-826b-7f947d36f925-kube-api-access-hmm7w\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:31 crc kubenswrapper[4970]: I1209 12:32:31.682316 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98adf73d-a280-493a-826b-7f947d36f925-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:31 crc kubenswrapper[4970]: I1209 12:32:31.704778 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98adf73d-a280-493a-826b-7f947d36f925-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "98adf73d-a280-493a-826b-7f947d36f925" (UID: "98adf73d-a280-493a-826b-7f947d36f925"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:32:31 crc kubenswrapper[4970]: I1209 12:32:31.735867 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98adf73d-a280-493a-826b-7f947d36f925-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98adf73d-a280-493a-826b-7f947d36f925" (UID: "98adf73d-a280-493a-826b-7f947d36f925"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:32:31 crc kubenswrapper[4970]: I1209 12:32:31.782957 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98adf73d-a280-493a-826b-7f947d36f925-config-data" (OuterVolumeSpecName: "config-data") pod "98adf73d-a280-493a-826b-7f947d36f925" (UID: "98adf73d-a280-493a-826b-7f947d36f925"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:32:31 crc kubenswrapper[4970]: I1209 12:32:31.785169 4970 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/98adf73d-a280-493a-826b-7f947d36f925-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:31 crc kubenswrapper[4970]: I1209 12:32:31.785200 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98adf73d-a280-493a-826b-7f947d36f925-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:31 crc kubenswrapper[4970]: I1209 12:32:31.785208 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98adf73d-a280-493a-826b-7f947d36f925-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:31 crc kubenswrapper[4970]: I1209 12:32:31.813453 4970 scope.go:117] "RemoveContainer" containerID="7fd6f109e392e855fb1d78959e8fc5739de01f58f1c3cbb8ffead2654c7f16b5" Dec 09 12:32:31 crc kubenswrapper[4970]: E1209 12:32:31.813899 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.034413 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"98adf73d-a280-493a-826b-7f947d36f925","Type":"ContainerDied","Data":"050a52f0fd08722948ee3502b4abb56c31ece4da66d7d2de0cd8dfc341c5017b"} Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.034488 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.034506 4970 scope.go:117] "RemoveContainer" containerID="7773e5ef9c06063c29f5d2c7802f191eca7f6b80b4653350f3fec7712626ca43" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.058893 4970 scope.go:117] "RemoveContainer" containerID="74fab4c48b118e7a5365997a0b4a9d9e9370b11b0e15de63987a2ce9809eea95" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.081286 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.097175 4970 scope.go:117] "RemoveContainer" containerID="612c9f961148a81ef596ae33eb71f76685aff5d10f13d91d972463183fdf9d02" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.142416 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.148743 4970 scope.go:117] "RemoveContainer" containerID="4071e5d5bc01f7d120d8b8ee07a61a987f2ed84624696e9711937084d42111cd" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.157236 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:32:32 crc kubenswrapper[4970]: E1209 12:32:32.157830 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56ac190c-b9e3-4453-9581-58141f4f59cc" containerName="extract-content" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.157854 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="56ac190c-b9e3-4453-9581-58141f4f59cc" containerName="extract-content" Dec 09 12:32:32 crc kubenswrapper[4970]: E1209 12:32:32.157887 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56ac190c-b9e3-4453-9581-58141f4f59cc" containerName="extract-utilities" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.157897 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="56ac190c-b9e3-4453-9581-58141f4f59cc" containerName="extract-utilities" Dec 09 12:32:32 crc kubenswrapper[4970]: E1209 12:32:32.157915 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98adf73d-a280-493a-826b-7f947d36f925" containerName="ceilometer-notification-agent" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.157931 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="98adf73d-a280-493a-826b-7f947d36f925" containerName="ceilometer-notification-agent" Dec 09 12:32:32 crc kubenswrapper[4970]: E1209 12:32:32.157951 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98adf73d-a280-493a-826b-7f947d36f925" containerName="ceilometer-central-agent" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.157960 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="98adf73d-a280-493a-826b-7f947d36f925" containerName="ceilometer-central-agent" Dec 09 12:32:32 crc kubenswrapper[4970]: E1209 12:32:32.157976 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98adf73d-a280-493a-826b-7f947d36f925" containerName="proxy-httpd" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.157984 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="98adf73d-a280-493a-826b-7f947d36f925" containerName="proxy-httpd" Dec 09 12:32:32 crc kubenswrapper[4970]: E1209 12:32:32.158011 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98adf73d-a280-493a-826b-7f947d36f925" containerName="sg-core" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.158019 4970 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="98adf73d-a280-493a-826b-7f947d36f925" containerName="sg-core" Dec 09 12:32:32 crc kubenswrapper[4970]: E1209 12:32:32.158042 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56ac190c-b9e3-4453-9581-58141f4f59cc" containerName="registry-server" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.158049 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="56ac190c-b9e3-4453-9581-58141f4f59cc" containerName="registry-server" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.158326 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="98adf73d-a280-493a-826b-7f947d36f925" containerName="ceilometer-notification-agent" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.158348 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="56ac190c-b9e3-4453-9581-58141f4f59cc" containerName="registry-server" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.158358 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="98adf73d-a280-493a-826b-7f947d36f925" containerName="proxy-httpd" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.158394 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="98adf73d-a280-493a-826b-7f947d36f925" containerName="ceilometer-central-agent" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.158408 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="98adf73d-a280-493a-826b-7f947d36f925" containerName="sg-core" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.162629 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.165416 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.165637 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.167509 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.171801 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.307645 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea52f6b9-599e-4ac5-94c6-79949c705be8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ea52f6b9-599e-4ac5-94c6-79949c705be8\") " pod="openstack/ceilometer-0" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.308000 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea52f6b9-599e-4ac5-94c6-79949c705be8-log-httpd\") pod \"ceilometer-0\" (UID: \"ea52f6b9-599e-4ac5-94c6-79949c705be8\") " pod="openstack/ceilometer-0" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.308097 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea52f6b9-599e-4ac5-94c6-79949c705be8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ea52f6b9-599e-4ac5-94c6-79949c705be8\") " pod="openstack/ceilometer-0" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.308222 4970 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea52f6b9-599e-4ac5-94c6-79949c705be8-scripts\") pod \"ceilometer-0\" (UID: \"ea52f6b9-599e-4ac5-94c6-79949c705be8\") " pod="openstack/ceilometer-0" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.308314 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ea52f6b9-599e-4ac5-94c6-79949c705be8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ea52f6b9-599e-4ac5-94c6-79949c705be8\") " pod="openstack/ceilometer-0" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.308373 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea52f6b9-599e-4ac5-94c6-79949c705be8-config-data\") pod \"ceilometer-0\" (UID: \"ea52f6b9-599e-4ac5-94c6-79949c705be8\") " pod="openstack/ceilometer-0" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.308402 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqqs9\" (UniqueName: \"kubernetes.io/projected/ea52f6b9-599e-4ac5-94c6-79949c705be8-kube-api-access-wqqs9\") pod \"ceilometer-0\" (UID: \"ea52f6b9-599e-4ac5-94c6-79949c705be8\") " pod="openstack/ceilometer-0" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.308448 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea52f6b9-599e-4ac5-94c6-79949c705be8-run-httpd\") pod \"ceilometer-0\" (UID: \"ea52f6b9-599e-4ac5-94c6-79949c705be8\") " pod="openstack/ceilometer-0" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.411095 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea52f6b9-599e-4ac5-94c6-79949c705be8-log-httpd\") pod \"ceilometer-0\" (UID: \"ea52f6b9-599e-4ac5-94c6-79949c705be8\") " pod="openstack/ceilometer-0" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.411161 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea52f6b9-599e-4ac5-94c6-79949c705be8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ea52f6b9-599e-4ac5-94c6-79949c705be8\") " pod="openstack/ceilometer-0" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.411241 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea52f6b9-599e-4ac5-94c6-79949c705be8-scripts\") pod \"ceilometer-0\" (UID: \"ea52f6b9-599e-4ac5-94c6-79949c705be8\") " pod="openstack/ceilometer-0" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.411299 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ea52f6b9-599e-4ac5-94c6-79949c705be8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ea52f6b9-599e-4ac5-94c6-79949c705be8\") " pod="openstack/ceilometer-0" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.411357 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea52f6b9-599e-4ac5-94c6-79949c705be8-config-data\") pod \"ceilometer-0\" (UID: \"ea52f6b9-599e-4ac5-94c6-79949c705be8\") " pod="openstack/ceilometer-0" Dec 09 12:32:32 crc 
kubenswrapper[4970]: I1209 12:32:32.411392 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqqs9\" (UniqueName: \"kubernetes.io/projected/ea52f6b9-599e-4ac5-94c6-79949c705be8-kube-api-access-wqqs9\") pod \"ceilometer-0\" (UID: \"ea52f6b9-599e-4ac5-94c6-79949c705be8\") " pod="openstack/ceilometer-0" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.411639 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea52f6b9-599e-4ac5-94c6-79949c705be8-log-httpd\") pod \"ceilometer-0\" (UID: \"ea52f6b9-599e-4ac5-94c6-79949c705be8\") " pod="openstack/ceilometer-0" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.411445 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea52f6b9-599e-4ac5-94c6-79949c705be8-run-httpd\") pod \"ceilometer-0\" (UID: \"ea52f6b9-599e-4ac5-94c6-79949c705be8\") " pod="openstack/ceilometer-0" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.411864 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ea52f6b9-599e-4ac5-94c6-79949c705be8-run-httpd\") pod \"ceilometer-0\" (UID: \"ea52f6b9-599e-4ac5-94c6-79949c705be8\") " pod="openstack/ceilometer-0" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.411944 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea52f6b9-599e-4ac5-94c6-79949c705be8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ea52f6b9-599e-4ac5-94c6-79949c705be8\") " pod="openstack/ceilometer-0" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.415855 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea52f6b9-599e-4ac5-94c6-79949c705be8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ea52f6b9-599e-4ac5-94c6-79949c705be8\") " pod="openstack/ceilometer-0" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.416402 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ea52f6b9-599e-4ac5-94c6-79949c705be8-scripts\") pod \"ceilometer-0\" (UID: \"ea52f6b9-599e-4ac5-94c6-79949c705be8\") " pod="openstack/ceilometer-0" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.420333 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea52f6b9-599e-4ac5-94c6-79949c705be8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ea52f6b9-599e-4ac5-94c6-79949c705be8\") " pod="openstack/ceilometer-0" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.420957 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ea52f6b9-599e-4ac5-94c6-79949c705be8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ea52f6b9-599e-4ac5-94c6-79949c705be8\") " pod="openstack/ceilometer-0" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.422526 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea52f6b9-599e-4ac5-94c6-79949c705be8-config-data\") pod \"ceilometer-0\" (UID: \"ea52f6b9-599e-4ac5-94c6-79949c705be8\") " pod="openstack/ceilometer-0" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.427768 4970 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqqs9\" (UniqueName: \"kubernetes.io/projected/ea52f6b9-599e-4ac5-94c6-79949c705be8-kube-api-access-wqqs9\") pod \"ceilometer-0\" (UID: \"ea52f6b9-599e-4ac5-94c6-79949c705be8\") " pod="openstack/ceilometer-0" Dec 09 12:32:32 crc kubenswrapper[4970]: I1209 12:32:32.498390 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 12:32:33 crc kubenswrapper[4970]: W1209 12:32:33.078581 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea52f6b9_599e_4ac5_94c6_79949c705be8.slice/crio-f9b35f2d27980d7570d9d5e2f3b55e95870a88dc4d73664f83688f058ff1b965 WatchSource:0}: Error finding container f9b35f2d27980d7570d9d5e2f3b55e95870a88dc4d73664f83688f058ff1b965: Status 404 returned error can't find the container with id f9b35f2d27980d7570d9d5e2f3b55e95870a88dc4d73664f83688f058ff1b965 Dec 09 12:32:33 crc kubenswrapper[4970]: I1209 12:32:33.087171 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 12:32:33 crc kubenswrapper[4970]: E1209 12:32:33.203879 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 12:32:33 crc kubenswrapper[4970]: E1209 12:32:33.203945 4970 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 12:32:33 crc kubenswrapper[4970]: E1209 12:32:33.204095 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n679h5b7h5f8h695h656h57bh5c7h7bh5c5hc4h5hbch654h5h5b7h54dh5fh688h548h57bh6fh54fh5d4hf7h5ffh54ch566h565hdchb4h65dh5fbq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqqs9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ea52f6b9-599e-4ac5-94c6-79949c705be8): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Dec 09 12:32:33 crc kubenswrapper[4970]: I1209 12:32:33.824521 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98adf73d-a280-493a-826b-7f947d36f925" path="/var/lib/kubelet/pods/98adf73d-a280-493a-826b-7f947d36f925/volumes" Dec 09 12:32:34 crc kubenswrapper[4970]: I1209 12:32:34.063797 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ea52f6b9-599e-4ac5-94c6-79949c705be8","Type":"ContainerStarted","Data":"f9b35f2d27980d7570d9d5e2f3b55e95870a88dc4d73664f83688f058ff1b965"} Dec 09 12:32:34 crc kubenswrapper[4970]: I1209 12:32:34.360838 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="7b822b3c-bdfc-4766-b56f-14696c6b34a0" containerName="rabbitmq" containerID="cri-o://5846a44c901d18202a605398d2407329795d948aa8483531310495fea5121a5c" gracePeriod=604795 Dec 09 12:32:35 crc kubenswrapper[4970]: I1209 12:32:35.076588 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ea52f6b9-599e-4ac5-94c6-79949c705be8","Type":"ContainerStarted","Data":"11620a699ef9be9f1b0566e6ace36bbc339561903c3d4e26d52f372b5865b77a"} Dec 09 12:32:35 crc kubenswrapper[4970]: I1209 12:32:35.076924 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ea52f6b9-599e-4ac5-94c6-79949c705be8","Type":"ContainerStarted","Data":"cba404e251313cff0a159e37e151afb4810db8dfc5aff9663f46afa40c2232fc"} Dec 09 12:32:35 crc kubenswrapper[4970]: I1209 12:32:35.162599 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="cd722f79-8e7d-46eb-b8e2-6da28c0dead2" containerName="rabbitmq" containerID="cri-o://4dc4b1b74cd96862d9102c93986cb32a076b44a48446a28e4440647f7c480728" gracePeriod=604796 Dec 09 12:32:36 crc kubenswrapper[4970]: E1209 12:32:36.162173 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:32:37 crc kubenswrapper[4970]: I1209 12:32:37.129226 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ea52f6b9-599e-4ac5-94c6-79949c705be8","Type":"ContainerStarted","Data":"bd68cfbb04d6e4c7bdc40632254af3a9cd2ff76d26e71d97b808ed37742c3188"} Dec 09 12:32:37 crc kubenswrapper[4970]: I1209 12:32:37.129599 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 12:32:37 crc kubenswrapper[4970]: E1209 12:32:37.132561 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:32:38 crc kubenswrapper[4970]: E1209 12:32:38.143311 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:32:39 crc kubenswrapper[4970]: E1209 12:32:39.550405 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56ac190c_b9e3_4453_9581_58141f4f59cc.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56ac190c_b9e3_4453_9581_58141f4f59cc.slice/crio-7988d46ab587f6dc5108584a799e4404bd720618e722f4fdadeadfb949abf864\": RecentStats: unable to find data in memory cache]" Dec 09 12:32:39 crc kubenswrapper[4970]: E1209 12:32:39.937517 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 12:32:39 crc kubenswrapper[4970]: E1209 12:32:39.937585 4970 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 12:32:39 crc kubenswrapper[4970]: E1209 12:32:39.937722 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w5gl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-snjvx_openstack(cda5b8c0-51ad-4a63-bf1d-e3546f098ad3): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 12:32:39 crc kubenswrapper[4970]: E1209 12:32:39.939219 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.108189 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.185322 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.185352 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7b822b3c-bdfc-4766-b56f-14696c6b34a0","Type":"ContainerDied","Data":"5846a44c901d18202a605398d2407329795d948aa8483531310495fea5121a5c"} Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.185391 4970 generic.go:334] "Generic (PLEG): container finished" podID="7b822b3c-bdfc-4766-b56f-14696c6b34a0" containerID="5846a44c901d18202a605398d2407329795d948aa8483531310495fea5121a5c" exitCode=0 Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.185411 4970 scope.go:117] "RemoveContainer" containerID="5846a44c901d18202a605398d2407329795d948aa8483531310495fea5121a5c" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.185431 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7b822b3c-bdfc-4766-b56f-14696c6b34a0","Type":"ContainerDied","Data":"bca94329e2451ccc8eb385167f51163925c3ca393fa1de9d60a5c5099bd3ba7b"} Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.215874 4970 scope.go:117] "RemoveContainer" containerID="e4d6ffc5b2f61330d4c9d4965fd2bb6d05068065683065a0f54d016f0c22adc5" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.245695 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7b822b3c-bdfc-4766-b56f-14696c6b34a0-erlang-cookie-secret\") pod \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.245794 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7b822b3c-bdfc-4766-b56f-14696c6b34a0-rabbitmq-erlang-cookie\") pod \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.245835 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7b822b3c-bdfc-4766-b56f-14696c6b34a0-plugins-conf\") pod \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.245933 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7b822b3c-bdfc-4766-b56f-14696c6b34a0-rabbitmq-tls\") pod \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.246017 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7b822b3c-bdfc-4766-b56f-14696c6b34a0-pod-info\") pod \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.246050 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7b822b3c-bdfc-4766-b56f-14696c6b34a0-server-conf\") pod \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\" (UID: 
\"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.246097 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7b822b3c-bdfc-4766-b56f-14696c6b34a0-rabbitmq-confd\") pod \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.246141 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.246179 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7b822b3c-bdfc-4766-b56f-14696c6b34a0-rabbitmq-plugins\") pod \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.246263 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ltf6\" (UniqueName: \"kubernetes.io/projected/7b822b3c-bdfc-4766-b56f-14696c6b34a0-kube-api-access-5ltf6\") pod \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.246299 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b822b3c-bdfc-4766-b56f-14696c6b34a0-config-data\") pod \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\" (UID: \"7b822b3c-bdfc-4766-b56f-14696c6b34a0\") " Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.248346 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b822b3c-bdfc-4766-b56f-14696c6b34a0-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "7b822b3c-bdfc-4766-b56f-14696c6b34a0" (UID: "7b822b3c-bdfc-4766-b56f-14696c6b34a0"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.250569 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b822b3c-bdfc-4766-b56f-14696c6b34a0-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "7b822b3c-bdfc-4766-b56f-14696c6b34a0" (UID: "7b822b3c-bdfc-4766-b56f-14696c6b34a0"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.251214 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b822b3c-bdfc-4766-b56f-14696c6b34a0-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "7b822b3c-bdfc-4766-b56f-14696c6b34a0" (UID: "7b822b3c-bdfc-4766-b56f-14696c6b34a0"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.252556 4970 scope.go:117] "RemoveContainer" containerID="5846a44c901d18202a605398d2407329795d948aa8483531310495fea5121a5c" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.256610 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/7b822b3c-bdfc-4766-b56f-14696c6b34a0-pod-info" (OuterVolumeSpecName: "pod-info") pod "7b822b3c-bdfc-4766-b56f-14696c6b34a0" (UID: "7b822b3c-bdfc-4766-b56f-14696c6b34a0"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.258596 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b822b3c-bdfc-4766-b56f-14696c6b34a0-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "7b822b3c-bdfc-4766-b56f-14696c6b34a0" (UID: "7b822b3c-bdfc-4766-b56f-14696c6b34a0"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.258644 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b822b3c-bdfc-4766-b56f-14696c6b34a0-kube-api-access-5ltf6" (OuterVolumeSpecName: "kube-api-access-5ltf6") pod "7b822b3c-bdfc-4766-b56f-14696c6b34a0" (UID: "7b822b3c-bdfc-4766-b56f-14696c6b34a0"). InnerVolumeSpecName "kube-api-access-5ltf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:32:41 crc kubenswrapper[4970]: E1209 12:32:41.262553 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5846a44c901d18202a605398d2407329795d948aa8483531310495fea5121a5c\": container with ID starting with 5846a44c901d18202a605398d2407329795d948aa8483531310495fea5121a5c not found: ID does not exist" containerID="5846a44c901d18202a605398d2407329795d948aa8483531310495fea5121a5c" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.262611 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5846a44c901d18202a605398d2407329795d948aa8483531310495fea5121a5c"} err="failed to get container status \"5846a44c901d18202a605398d2407329795d948aa8483531310495fea5121a5c\": rpc error: code = NotFound desc = could not find container \"5846a44c901d18202a605398d2407329795d948aa8483531310495fea5121a5c\": container with ID starting with 5846a44c901d18202a605398d2407329795d948aa8483531310495fea5121a5c not found: ID does not exist" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.262643 4970 scope.go:117] "RemoveContainer" containerID="e4d6ffc5b2f61330d4c9d4965fd2bb6d05068065683065a0f54d016f0c22adc5" Dec 09 12:32:41 crc kubenswrapper[4970]: E1209 12:32:41.263041 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4d6ffc5b2f61330d4c9d4965fd2bb6d05068065683065a0f54d016f0c22adc5\": container with ID starting with e4d6ffc5b2f61330d4c9d4965fd2bb6d05068065683065a0f54d016f0c22adc5 not found: ID does not exist" containerID="e4d6ffc5b2f61330d4c9d4965fd2bb6d05068065683065a0f54d016f0c22adc5" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.263093 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4d6ffc5b2f61330d4c9d4965fd2bb6d05068065683065a0f54d016f0c22adc5"} err="failed to get container status 
\"e4d6ffc5b2f61330d4c9d4965fd2bb6d05068065683065a0f54d016f0c22adc5\": rpc error: code = NotFound desc = could not find container \"e4d6ffc5b2f61330d4c9d4965fd2bb6d05068065683065a0f54d016f0c22adc5\": container with ID starting with e4d6ffc5b2f61330d4c9d4965fd2bb6d05068065683065a0f54d016f0c22adc5 not found: ID does not exist" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.264490 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b822b3c-bdfc-4766-b56f-14696c6b34a0-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "7b822b3c-bdfc-4766-b56f-14696c6b34a0" (UID: "7b822b3c-bdfc-4766-b56f-14696c6b34a0"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.268512 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "7b822b3c-bdfc-4766-b56f-14696c6b34a0" (UID: "7b822b3c-bdfc-4766-b56f-14696c6b34a0"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.297570 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b822b3c-bdfc-4766-b56f-14696c6b34a0-config-data" (OuterVolumeSpecName: "config-data") pod "7b822b3c-bdfc-4766-b56f-14696c6b34a0" (UID: "7b822b3c-bdfc-4766-b56f-14696c6b34a0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.320496 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b822b3c-bdfc-4766-b56f-14696c6b34a0-server-conf" (OuterVolumeSpecName: "server-conf") pod "7b822b3c-bdfc-4766-b56f-14696c6b34a0" (UID: "7b822b3c-bdfc-4766-b56f-14696c6b34a0"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.349045 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ltf6\" (UniqueName: \"kubernetes.io/projected/7b822b3c-bdfc-4766-b56f-14696c6b34a0-kube-api-access-5ltf6\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.349083 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b822b3c-bdfc-4766-b56f-14696c6b34a0-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.349094 4970 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7b822b3c-bdfc-4766-b56f-14696c6b34a0-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.349104 4970 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7b822b3c-bdfc-4766-b56f-14696c6b34a0-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.349115 4970 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7b822b3c-bdfc-4766-b56f-14696c6b34a0-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.349126 4970 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7b822b3c-bdfc-4766-b56f-14696c6b34a0-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.349136 4970 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7b822b3c-bdfc-4766-b56f-14696c6b34a0-pod-info\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.349145 4970 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7b822b3c-bdfc-4766-b56f-14696c6b34a0-server-conf\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.349175 4970 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.349186 4970 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7b822b3c-bdfc-4766-b56f-14696c6b34a0-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.408597 4970 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.452014 4970 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.453400 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b822b3c-bdfc-4766-b56f-14696c6b34a0-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "7b822b3c-bdfc-4766-b56f-14696c6b34a0" (UID: 
"7b822b3c-bdfc-4766-b56f-14696c6b34a0"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.529545 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.553895 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.554096 4970 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7b822b3c-bdfc-4766-b56f-14696c6b34a0-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.593709 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 09 12:32:41 crc kubenswrapper[4970]: E1209 12:32:41.594202 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b822b3c-bdfc-4766-b56f-14696c6b34a0" containerName="rabbitmq" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.594224 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b822b3c-bdfc-4766-b56f-14696c6b34a0" containerName="rabbitmq" Dec 09 12:32:41 crc kubenswrapper[4970]: E1209 12:32:41.594240 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b822b3c-bdfc-4766-b56f-14696c6b34a0" containerName="setup-container" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.594265 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b822b3c-bdfc-4766-b56f-14696c6b34a0" containerName="setup-container" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.594511 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b822b3c-bdfc-4766-b56f-14696c6b34a0" containerName="rabbitmq" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.597497 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.602800 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-9msks" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.603022 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.603133 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.603237 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.603351 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.603987 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.604114 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.638878 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 09 12:32:41 crc kubenswrapper[4970]: E1209 12:32:41.755817 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56ac190c_b9e3_4453_9581_58141f4f59cc.slice/crio-7988d46ab587f6dc5108584a799e4404bd720618e722f4fdadeadfb949abf864\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56ac190c_b9e3_4453_9581_58141f4f59cc.slice\": RecentStats: unable to find data in memory cache]" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.759857 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2547ef6a-6a22-4564-8db0-8c7ed5b166fd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2547ef6a-6a22-4564-8db0-8c7ed5b166fd\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.759934 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2547ef6a-6a22-4564-8db0-8c7ed5b166fd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2547ef6a-6a22-4564-8db0-8c7ed5b166fd\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.759989 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2547ef6a-6a22-4564-8db0-8c7ed5b166fd-config-data\") pod \"rabbitmq-server-0\" (UID: \"2547ef6a-6a22-4564-8db0-8c7ed5b166fd\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.760019 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2547ef6a-6a22-4564-8db0-8c7ed5b166fd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2547ef6a-6a22-4564-8db0-8c7ed5b166fd\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 
12:32:41.760072 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"2547ef6a-6a22-4564-8db0-8c7ed5b166fd\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.760135 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2547ef6a-6a22-4564-8db0-8c7ed5b166fd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2547ef6a-6a22-4564-8db0-8c7ed5b166fd\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.760227 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2547ef6a-6a22-4564-8db0-8c7ed5b166fd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2547ef6a-6a22-4564-8db0-8c7ed5b166fd\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.760772 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2547ef6a-6a22-4564-8db0-8c7ed5b166fd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2547ef6a-6a22-4564-8db0-8c7ed5b166fd\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.760813 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2547ef6a-6a22-4564-8db0-8c7ed5b166fd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2547ef6a-6a22-4564-8db0-8c7ed5b166fd\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.760854 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shqt9\" (UniqueName: \"kubernetes.io/projected/2547ef6a-6a22-4564-8db0-8c7ed5b166fd-kube-api-access-shqt9\") pod \"rabbitmq-server-0\" (UID: \"2547ef6a-6a22-4564-8db0-8c7ed5b166fd\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.760897 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2547ef6a-6a22-4564-8db0-8c7ed5b166fd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2547ef6a-6a22-4564-8db0-8c7ed5b166fd\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.836825 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b822b3c-bdfc-4766-b56f-14696c6b34a0" path="/var/lib/kubelet/pods/7b822b3c-bdfc-4766-b56f-14696c6b34a0/volumes" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.865640 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"2547ef6a-6a22-4564-8db0-8c7ed5b166fd\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.866110 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"2547ef6a-6a22-4564-8db0-8c7ed5b166fd\") device mount 
path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.866361 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2547ef6a-6a22-4564-8db0-8c7ed5b166fd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2547ef6a-6a22-4564-8db0-8c7ed5b166fd\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.866629 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2547ef6a-6a22-4564-8db0-8c7ed5b166fd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2547ef6a-6a22-4564-8db0-8c7ed5b166fd\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.866699 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2547ef6a-6a22-4564-8db0-8c7ed5b166fd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2547ef6a-6a22-4564-8db0-8c7ed5b166fd\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.866744 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2547ef6a-6a22-4564-8db0-8c7ed5b166fd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2547ef6a-6a22-4564-8db0-8c7ed5b166fd\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.866810 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shqt9\" (UniqueName: \"kubernetes.io/projected/2547ef6a-6a22-4564-8db0-8c7ed5b166fd-kube-api-access-shqt9\") pod \"rabbitmq-server-0\" (UID: \"2547ef6a-6a22-4564-8db0-8c7ed5b166fd\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.866873 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2547ef6a-6a22-4564-8db0-8c7ed5b166fd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2547ef6a-6a22-4564-8db0-8c7ed5b166fd\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.866916 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2547ef6a-6a22-4564-8db0-8c7ed5b166fd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2547ef6a-6a22-4564-8db0-8c7ed5b166fd\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.866981 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2547ef6a-6a22-4564-8db0-8c7ed5b166fd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2547ef6a-6a22-4564-8db0-8c7ed5b166fd\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.867065 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2547ef6a-6a22-4564-8db0-8c7ed5b166fd-config-data\") pod \"rabbitmq-server-0\" (UID: \"2547ef6a-6a22-4564-8db0-8c7ed5b166fd\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.867096 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/2547ef6a-6a22-4564-8db0-8c7ed5b166fd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2547ef6a-6a22-4564-8db0-8c7ed5b166fd\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.867338 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2547ef6a-6a22-4564-8db0-8c7ed5b166fd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"2547ef6a-6a22-4564-8db0-8c7ed5b166fd\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.867578 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2547ef6a-6a22-4564-8db0-8c7ed5b166fd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"2547ef6a-6a22-4564-8db0-8c7ed5b166fd\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.868148 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2547ef6a-6a22-4564-8db0-8c7ed5b166fd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"2547ef6a-6a22-4564-8db0-8c7ed5b166fd\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.868535 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2547ef6a-6a22-4564-8db0-8c7ed5b166fd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"2547ef6a-6a22-4564-8db0-8c7ed5b166fd\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.868973 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2547ef6a-6a22-4564-8db0-8c7ed5b166fd-config-data\") pod \"rabbitmq-server-0\" (UID: \"2547ef6a-6a22-4564-8db0-8c7ed5b166fd\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.872059 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2547ef6a-6a22-4564-8db0-8c7ed5b166fd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"2547ef6a-6a22-4564-8db0-8c7ed5b166fd\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.873640 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2547ef6a-6a22-4564-8db0-8c7ed5b166fd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"2547ef6a-6a22-4564-8db0-8c7ed5b166fd\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.874829 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2547ef6a-6a22-4564-8db0-8c7ed5b166fd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"2547ef6a-6a22-4564-8db0-8c7ed5b166fd\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.878824 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2547ef6a-6a22-4564-8db0-8c7ed5b166fd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"2547ef6a-6a22-4564-8db0-8c7ed5b166fd\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.886676 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-shqt9\" (UniqueName: \"kubernetes.io/projected/2547ef6a-6a22-4564-8db0-8c7ed5b166fd-kube-api-access-shqt9\") pod \"rabbitmq-server-0\" (UID: \"2547ef6a-6a22-4564-8db0-8c7ed5b166fd\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.940797 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"2547ef6a-6a22-4564-8db0-8c7ed5b166fd\") " pod="openstack/rabbitmq-server-0" Dec 09 12:32:41 crc kubenswrapper[4970]: I1209 12:32:41.993797 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.017478 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.174354 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-pod-info\") pod \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.174419 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-rabbitmq-tls\") pod \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.174442 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-erlang-cookie-secret\") pod \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.174592 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-rabbitmq-plugins\") pod \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.174622 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg8fk\" (UniqueName: \"kubernetes.io/projected/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-kube-api-access-gg8fk\") pod \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.175741 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "cd722f79-8e7d-46eb-b8e2-6da28c0dead2" (UID: "cd722f79-8e7d-46eb-b8e2-6da28c0dead2"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.184396 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-plugins-conf\") pod \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.184727 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-rabbitmq-erlang-cookie\") pod \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.184817 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-rabbitmq-confd\") pod \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.184873 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.184921 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-config-data\") pod \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.184958 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-server-conf\") pod \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\" (UID: \"cd722f79-8e7d-46eb-b8e2-6da28c0dead2\") " Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.185025 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "cd722f79-8e7d-46eb-b8e2-6da28c0dead2" (UID: "cd722f79-8e7d-46eb-b8e2-6da28c0dead2"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.185663 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "cd722f79-8e7d-46eb-b8e2-6da28c0dead2" (UID: "cd722f79-8e7d-46eb-b8e2-6da28c0dead2"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.186073 4970 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.186098 4970 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.186110 4970 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.188489 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-kube-api-access-gg8fk" (OuterVolumeSpecName: "kube-api-access-gg8fk") pod "cd722f79-8e7d-46eb-b8e2-6da28c0dead2" (UID: "cd722f79-8e7d-46eb-b8e2-6da28c0dead2"). InnerVolumeSpecName "kube-api-access-gg8fk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.192623 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "cd722f79-8e7d-46eb-b8e2-6da28c0dead2" (UID: "cd722f79-8e7d-46eb-b8e2-6da28c0dead2"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.197628 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "cd722f79-8e7d-46eb-b8e2-6da28c0dead2" (UID: "cd722f79-8e7d-46eb-b8e2-6da28c0dead2"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.200106 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-pod-info" (OuterVolumeSpecName: "pod-info") pod "cd722f79-8e7d-46eb-b8e2-6da28c0dead2" (UID: "cd722f79-8e7d-46eb-b8e2-6da28c0dead2"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.200434 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "cd722f79-8e7d-46eb-b8e2-6da28c0dead2" (UID: "cd722f79-8e7d-46eb-b8e2-6da28c0dead2"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.215714 4970 generic.go:334] "Generic (PLEG): container finished" podID="cd722f79-8e7d-46eb-b8e2-6da28c0dead2" containerID="4dc4b1b74cd96862d9102c93986cb32a076b44a48446a28e4440647f7c480728" exitCode=0 Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.215749 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cd722f79-8e7d-46eb-b8e2-6da28c0dead2","Type":"ContainerDied","Data":"4dc4b1b74cd96862d9102c93986cb32a076b44a48446a28e4440647f7c480728"} Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.215765 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.215806 4970 scope.go:117] "RemoveContainer" containerID="4dc4b1b74cd96862d9102c93986cb32a076b44a48446a28e4440647f7c480728" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.215796 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cd722f79-8e7d-46eb-b8e2-6da28c0dead2","Type":"ContainerDied","Data":"7020f076109b3e8b3e6525521f3d06d751f93db47ac32497216670f665fce689"} Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.233185 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-config-data" (OuterVolumeSpecName: "config-data") pod "cd722f79-8e7d-46eb-b8e2-6da28c0dead2" (UID: "cd722f79-8e7d-46eb-b8e2-6da28c0dead2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.261439 4970 scope.go:117] "RemoveContainer" containerID="eea1871b9671e0c01c99cc6b47415fd1fda6332a9d71396fb468911958eeef77" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.291041 4970 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-pod-info\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.291068 4970 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.291077 4970 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.291107 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gg8fk\" (UniqueName: \"kubernetes.io/projected/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-kube-api-access-gg8fk\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.291139 4970 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.291149 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 
12:32:42.298815 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-server-conf" (OuterVolumeSpecName: "server-conf") pod "cd722f79-8e7d-46eb-b8e2-6da28c0dead2" (UID: "cd722f79-8e7d-46eb-b8e2-6da28c0dead2"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.310559 4970 scope.go:117] "RemoveContainer" containerID="4dc4b1b74cd96862d9102c93986cb32a076b44a48446a28e4440647f7c480728" Dec 09 12:32:42 crc kubenswrapper[4970]: E1209 12:32:42.311220 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dc4b1b74cd96862d9102c93986cb32a076b44a48446a28e4440647f7c480728\": container with ID starting with 4dc4b1b74cd96862d9102c93986cb32a076b44a48446a28e4440647f7c480728 not found: ID does not exist" containerID="4dc4b1b74cd96862d9102c93986cb32a076b44a48446a28e4440647f7c480728" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.311259 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dc4b1b74cd96862d9102c93986cb32a076b44a48446a28e4440647f7c480728"} err="failed to get container status \"4dc4b1b74cd96862d9102c93986cb32a076b44a48446a28e4440647f7c480728\": rpc error: code = NotFound desc = could not find container \"4dc4b1b74cd96862d9102c93986cb32a076b44a48446a28e4440647f7c480728\": container with ID starting with 4dc4b1b74cd96862d9102c93986cb32a076b44a48446a28e4440647f7c480728 not found: ID does not exist" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.311283 4970 scope.go:117] "RemoveContainer" containerID="eea1871b9671e0c01c99cc6b47415fd1fda6332a9d71396fb468911958eeef77" Dec 09 12:32:42 crc kubenswrapper[4970]: E1209 12:32:42.311608 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eea1871b9671e0c01c99cc6b47415fd1fda6332a9d71396fb468911958eeef77\": container with ID starting with eea1871b9671e0c01c99cc6b47415fd1fda6332a9d71396fb468911958eeef77 not found: ID does not exist" containerID="eea1871b9671e0c01c99cc6b47415fd1fda6332a9d71396fb468911958eeef77" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.311622 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eea1871b9671e0c01c99cc6b47415fd1fda6332a9d71396fb468911958eeef77"} err="failed to get container status \"eea1871b9671e0c01c99cc6b47415fd1fda6332a9d71396fb468911958eeef77\": rpc error: code = NotFound desc = could not find container \"eea1871b9671e0c01c99cc6b47415fd1fda6332a9d71396fb468911958eeef77\": container with ID starting with eea1871b9671e0c01c99cc6b47415fd1fda6332a9d71396fb468911958eeef77 not found: ID does not exist" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.321610 4970 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.335197 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "cd722f79-8e7d-46eb-b8e2-6da28c0dead2" (UID: "cd722f79-8e7d-46eb-b8e2-6da28c0dead2"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.393692 4970 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.393737 4970 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-server-conf\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.393748 4970 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cd722f79-8e7d-46eb-b8e2-6da28c0dead2-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.436983 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rrprb"] Dec 09 12:32:42 crc kubenswrapper[4970]: E1209 12:32:42.437558 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd722f79-8e7d-46eb-b8e2-6da28c0dead2" containerName="rabbitmq" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.437579 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd722f79-8e7d-46eb-b8e2-6da28c0dead2" containerName="rabbitmq" Dec 09 12:32:42 crc kubenswrapper[4970]: E1209 12:32:42.437603 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd722f79-8e7d-46eb-b8e2-6da28c0dead2" containerName="setup-container" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.437612 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd722f79-8e7d-46eb-b8e2-6da28c0dead2" containerName="setup-container" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.437893 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd722f79-8e7d-46eb-b8e2-6da28c0dead2" containerName="rabbitmq" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.439951 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rrprb" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.487023 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rrprb"] Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.596460 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.605161 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95d83ff6-47a4-45c5-a5a0-49c17cd7f309-utilities\") pod \"community-operators-rrprb\" (UID: \"95d83ff6-47a4-45c5-a5a0-49c17cd7f309\") " pod="openshift-marketplace/community-operators-rrprb" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.605240 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5pv6\" (UniqueName: \"kubernetes.io/projected/95d83ff6-47a4-45c5-a5a0-49c17cd7f309-kube-api-access-t5pv6\") pod \"community-operators-rrprb\" (UID: \"95d83ff6-47a4-45c5-a5a0-49c17cd7f309\") " pod="openshift-marketplace/community-operators-rrprb" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.605351 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95d83ff6-47a4-45c5-a5a0-49c17cd7f309-catalog-content\") pod \"community-operators-rrprb\" (UID: \"95d83ff6-47a4-45c5-a5a0-49c17cd7f309\") " pod="openshift-marketplace/community-operators-rrprb" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.619001 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.651449 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.653968 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.657708 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.658231 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-74gz6" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.659163 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.659365 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.659568 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.659711 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.659895 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.675366 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.707546 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95d83ff6-47a4-45c5-a5a0-49c17cd7f309-utilities\") pod \"community-operators-rrprb\" (UID: \"95d83ff6-47a4-45c5-a5a0-49c17cd7f309\") " pod="openshift-marketplace/community-operators-rrprb" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.707609 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5pv6\" (UniqueName: \"kubernetes.io/projected/95d83ff6-47a4-45c5-a5a0-49c17cd7f309-kube-api-access-t5pv6\") pod \"community-operators-rrprb\" (UID: \"95d83ff6-47a4-45c5-a5a0-49c17cd7f309\") " pod="openshift-marketplace/community-operators-rrprb" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.707649 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95d83ff6-47a4-45c5-a5a0-49c17cd7f309-catalog-content\") pod \"community-operators-rrprb\" (UID: \"95d83ff6-47a4-45c5-a5a0-49c17cd7f309\") " pod="openshift-marketplace/community-operators-rrprb" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.708105 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95d83ff6-47a4-45c5-a5a0-49c17cd7f309-catalog-content\") pod \"community-operators-rrprb\" (UID: \"95d83ff6-47a4-45c5-a5a0-49c17cd7f309\") " pod="openshift-marketplace/community-operators-rrprb" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.708350 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95d83ff6-47a4-45c5-a5a0-49c17cd7f309-utilities\") pod \"community-operators-rrprb\" (UID: \"95d83ff6-47a4-45c5-a5a0-49c17cd7f309\") " pod="openshift-marketplace/community-operators-rrprb" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.715909 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] 
Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.728488 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5pv6\" (UniqueName: \"kubernetes.io/projected/95d83ff6-47a4-45c5-a5a0-49c17cd7f309-kube-api-access-t5pv6\") pod \"community-operators-rrprb\" (UID: \"95d83ff6-47a4-45c5-a5a0-49c17cd7f309\") " pod="openshift-marketplace/community-operators-rrprb" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.767892 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rrprb" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.809203 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"026b54d0-03a5-4346-b137-1d297204d22b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.809267 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/026b54d0-03a5-4346-b137-1d297204d22b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"026b54d0-03a5-4346-b137-1d297204d22b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.809311 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/026b54d0-03a5-4346-b137-1d297204d22b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"026b54d0-03a5-4346-b137-1d297204d22b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.809364 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/026b54d0-03a5-4346-b137-1d297204d22b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"026b54d0-03a5-4346-b137-1d297204d22b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.809408 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/026b54d0-03a5-4346-b137-1d297204d22b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"026b54d0-03a5-4346-b137-1d297204d22b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.809430 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx9lj\" (UniqueName: \"kubernetes.io/projected/026b54d0-03a5-4346-b137-1d297204d22b-kube-api-access-hx9lj\") pod \"rabbitmq-cell1-server-0\" (UID: \"026b54d0-03a5-4346-b137-1d297204d22b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.809502 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/026b54d0-03a5-4346-b137-1d297204d22b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"026b54d0-03a5-4346-b137-1d297204d22b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.809538 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" 
(UniqueName: \"kubernetes.io/empty-dir/026b54d0-03a5-4346-b137-1d297204d22b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"026b54d0-03a5-4346-b137-1d297204d22b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.809575 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/026b54d0-03a5-4346-b137-1d297204d22b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"026b54d0-03a5-4346-b137-1d297204d22b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.809699 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/026b54d0-03a5-4346-b137-1d297204d22b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"026b54d0-03a5-4346-b137-1d297204d22b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.809920 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/026b54d0-03a5-4346-b137-1d297204d22b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"026b54d0-03a5-4346-b137-1d297204d22b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.914020 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"026b54d0-03a5-4346-b137-1d297204d22b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.914090 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/026b54d0-03a5-4346-b137-1d297204d22b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"026b54d0-03a5-4346-b137-1d297204d22b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.914188 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/026b54d0-03a5-4346-b137-1d297204d22b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"026b54d0-03a5-4346-b137-1d297204d22b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.914335 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/026b54d0-03a5-4346-b137-1d297204d22b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"026b54d0-03a5-4346-b137-1d297204d22b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.914415 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/026b54d0-03a5-4346-b137-1d297204d22b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"026b54d0-03a5-4346-b137-1d297204d22b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.914436 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx9lj\" (UniqueName: 
\"kubernetes.io/projected/026b54d0-03a5-4346-b137-1d297204d22b-kube-api-access-hx9lj\") pod \"rabbitmq-cell1-server-0\" (UID: \"026b54d0-03a5-4346-b137-1d297204d22b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.914500 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/026b54d0-03a5-4346-b137-1d297204d22b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"026b54d0-03a5-4346-b137-1d297204d22b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.914536 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/026b54d0-03a5-4346-b137-1d297204d22b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"026b54d0-03a5-4346-b137-1d297204d22b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.914577 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/026b54d0-03a5-4346-b137-1d297204d22b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"026b54d0-03a5-4346-b137-1d297204d22b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.914636 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/026b54d0-03a5-4346-b137-1d297204d22b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"026b54d0-03a5-4346-b137-1d297204d22b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.914791 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/026b54d0-03a5-4346-b137-1d297204d22b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"026b54d0-03a5-4346-b137-1d297204d22b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.915176 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"026b54d0-03a5-4346-b137-1d297204d22b\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.916289 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/026b54d0-03a5-4346-b137-1d297204d22b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"026b54d0-03a5-4346-b137-1d297204d22b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.916520 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/026b54d0-03a5-4346-b137-1d297204d22b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"026b54d0-03a5-4346-b137-1d297204d22b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.916591 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/026b54d0-03a5-4346-b137-1d297204d22b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"026b54d0-03a5-4346-b137-1d297204d22b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.917647 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/026b54d0-03a5-4346-b137-1d297204d22b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"026b54d0-03a5-4346-b137-1d297204d22b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.917969 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/026b54d0-03a5-4346-b137-1d297204d22b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"026b54d0-03a5-4346-b137-1d297204d22b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.919287 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/026b54d0-03a5-4346-b137-1d297204d22b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"026b54d0-03a5-4346-b137-1d297204d22b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.919495 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/026b54d0-03a5-4346-b137-1d297204d22b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"026b54d0-03a5-4346-b137-1d297204d22b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.921995 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/026b54d0-03a5-4346-b137-1d297204d22b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"026b54d0-03a5-4346-b137-1d297204d22b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.923941 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/026b54d0-03a5-4346-b137-1d297204d22b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"026b54d0-03a5-4346-b137-1d297204d22b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.943332 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx9lj\" (UniqueName: \"kubernetes.io/projected/026b54d0-03a5-4346-b137-1d297204d22b-kube-api-access-hx9lj\") pod \"rabbitmq-cell1-server-0\" (UID: \"026b54d0-03a5-4346-b137-1d297204d22b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:42 crc kubenswrapper[4970]: I1209 12:32:42.963411 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"026b54d0-03a5-4346-b137-1d297204d22b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:43 crc kubenswrapper[4970]: I1209 12:32:43.020400 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:32:43 crc kubenswrapper[4970]: I1209 12:32:43.235934 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2547ef6a-6a22-4564-8db0-8c7ed5b166fd","Type":"ContainerStarted","Data":"0807dc02b7a09f65170b7ba44115673ab63871a4bf230cf7287296a6c8d7783f"} Dec 09 12:32:43 crc kubenswrapper[4970]: I1209 12:32:43.322604 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rrprb"] Dec 09 12:32:43 crc kubenswrapper[4970]: I1209 12:32:43.545445 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 12:32:43 crc kubenswrapper[4970]: I1209 12:32:43.825436 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd722f79-8e7d-46eb-b8e2-6da28c0dead2" path="/var/lib/kubelet/pods/cd722f79-8e7d-46eb-b8e2-6da28c0dead2/volumes" Dec 09 12:32:44 crc kubenswrapper[4970]: I1209 12:32:44.024459 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-4pmzn"] Dec 09 12:32:44 crc kubenswrapper[4970]: I1209 12:32:44.026927 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-4pmzn" Dec 09 12:32:44 crc kubenswrapper[4970]: I1209 12:32:44.028908 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Dec 09 12:32:44 crc kubenswrapper[4970]: I1209 12:32:44.036187 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-4pmzn"] Dec 09 12:32:44 crc kubenswrapper[4970]: I1209 12:32:44.151759 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d48th\" (UniqueName: \"kubernetes.io/projected/ae772c09-ed58-455f-90d5-9f16e4f5aefb-kube-api-access-d48th\") pod \"dnsmasq-dns-7d84b4d45c-4pmzn\" (UID: \"ae772c09-ed58-455f-90d5-9f16e4f5aefb\") " pod="openstack/dnsmasq-dns-7d84b4d45c-4pmzn" Dec 09 12:32:44 crc kubenswrapper[4970]: I1209 12:32:44.151879 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae772c09-ed58-455f-90d5-9f16e4f5aefb-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-4pmzn\" (UID: \"ae772c09-ed58-455f-90d5-9f16e4f5aefb\") " pod="openstack/dnsmasq-dns-7d84b4d45c-4pmzn" Dec 09 12:32:44 crc kubenswrapper[4970]: I1209 12:32:44.151908 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ae772c09-ed58-455f-90d5-9f16e4f5aefb-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-4pmzn\" (UID: \"ae772c09-ed58-455f-90d5-9f16e4f5aefb\") " pod="openstack/dnsmasq-dns-7d84b4d45c-4pmzn" Dec 09 12:32:44 crc kubenswrapper[4970]: I1209 12:32:44.151960 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae772c09-ed58-455f-90d5-9f16e4f5aefb-config\") pod \"dnsmasq-dns-7d84b4d45c-4pmzn\" (UID: \"ae772c09-ed58-455f-90d5-9f16e4f5aefb\") " pod="openstack/dnsmasq-dns-7d84b4d45c-4pmzn" Dec 09 12:32:44 crc kubenswrapper[4970]: I1209 12:32:44.152004 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ae772c09-ed58-455f-90d5-9f16e4f5aefb-dns-swift-storage-0\") 
pod \"dnsmasq-dns-7d84b4d45c-4pmzn\" (UID: \"ae772c09-ed58-455f-90d5-9f16e4f5aefb\") " pod="openstack/dnsmasq-dns-7d84b4d45c-4pmzn" Dec 09 12:32:44 crc kubenswrapper[4970]: I1209 12:32:44.152112 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae772c09-ed58-455f-90d5-9f16e4f5aefb-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-4pmzn\" (UID: \"ae772c09-ed58-455f-90d5-9f16e4f5aefb\") " pod="openstack/dnsmasq-dns-7d84b4d45c-4pmzn" Dec 09 12:32:44 crc kubenswrapper[4970]: I1209 12:32:44.152341 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae772c09-ed58-455f-90d5-9f16e4f5aefb-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-4pmzn\" (UID: \"ae772c09-ed58-455f-90d5-9f16e4f5aefb\") " pod="openstack/dnsmasq-dns-7d84b4d45c-4pmzn" Dec 09 12:32:44 crc kubenswrapper[4970]: I1209 12:32:44.256190 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae772c09-ed58-455f-90d5-9f16e4f5aefb-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-4pmzn\" (UID: \"ae772c09-ed58-455f-90d5-9f16e4f5aefb\") " pod="openstack/dnsmasq-dns-7d84b4d45c-4pmzn" Dec 09 12:32:44 crc kubenswrapper[4970]: I1209 12:32:44.256454 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae772c09-ed58-455f-90d5-9f16e4f5aefb-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-4pmzn\" (UID: \"ae772c09-ed58-455f-90d5-9f16e4f5aefb\") " pod="openstack/dnsmasq-dns-7d84b4d45c-4pmzn" Dec 09 12:32:44 crc kubenswrapper[4970]: I1209 12:32:44.256517 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d48th\" (UniqueName: \"kubernetes.io/projected/ae772c09-ed58-455f-90d5-9f16e4f5aefb-kube-api-access-d48th\") pod \"dnsmasq-dns-7d84b4d45c-4pmzn\" (UID: \"ae772c09-ed58-455f-90d5-9f16e4f5aefb\") " pod="openstack/dnsmasq-dns-7d84b4d45c-4pmzn" Dec 09 12:32:44 crc kubenswrapper[4970]: I1209 12:32:44.256564 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae772c09-ed58-455f-90d5-9f16e4f5aefb-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-4pmzn\" (UID: \"ae772c09-ed58-455f-90d5-9f16e4f5aefb\") " pod="openstack/dnsmasq-dns-7d84b4d45c-4pmzn" Dec 09 12:32:44 crc kubenswrapper[4970]: I1209 12:32:44.256593 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ae772c09-ed58-455f-90d5-9f16e4f5aefb-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-4pmzn\" (UID: \"ae772c09-ed58-455f-90d5-9f16e4f5aefb\") " pod="openstack/dnsmasq-dns-7d84b4d45c-4pmzn" Dec 09 12:32:44 crc kubenswrapper[4970]: I1209 12:32:44.256646 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae772c09-ed58-455f-90d5-9f16e4f5aefb-config\") pod \"dnsmasq-dns-7d84b4d45c-4pmzn\" (UID: \"ae772c09-ed58-455f-90d5-9f16e4f5aefb\") " pod="openstack/dnsmasq-dns-7d84b4d45c-4pmzn" Dec 09 12:32:44 crc kubenswrapper[4970]: I1209 12:32:44.256696 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ae772c09-ed58-455f-90d5-9f16e4f5aefb-dns-swift-storage-0\") pod 
\"dnsmasq-dns-7d84b4d45c-4pmzn\" (UID: \"ae772c09-ed58-455f-90d5-9f16e4f5aefb\") " pod="openstack/dnsmasq-dns-7d84b4d45c-4pmzn" Dec 09 12:32:44 crc kubenswrapper[4970]: I1209 12:32:44.257876 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ae772c09-ed58-455f-90d5-9f16e4f5aefb-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-4pmzn\" (UID: \"ae772c09-ed58-455f-90d5-9f16e4f5aefb\") " pod="openstack/dnsmasq-dns-7d84b4d45c-4pmzn" Dec 09 12:32:44 crc kubenswrapper[4970]: I1209 12:32:44.258175 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae772c09-ed58-455f-90d5-9f16e4f5aefb-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-4pmzn\" (UID: \"ae772c09-ed58-455f-90d5-9f16e4f5aefb\") " pod="openstack/dnsmasq-dns-7d84b4d45c-4pmzn" Dec 09 12:32:44 crc kubenswrapper[4970]: I1209 12:32:44.259852 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ae772c09-ed58-455f-90d5-9f16e4f5aefb-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-4pmzn\" (UID: \"ae772c09-ed58-455f-90d5-9f16e4f5aefb\") " pod="openstack/dnsmasq-dns-7d84b4d45c-4pmzn" Dec 09 12:32:44 crc kubenswrapper[4970]: I1209 12:32:44.259962 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae772c09-ed58-455f-90d5-9f16e4f5aefb-config\") pod \"dnsmasq-dns-7d84b4d45c-4pmzn\" (UID: \"ae772c09-ed58-455f-90d5-9f16e4f5aefb\") " pod="openstack/dnsmasq-dns-7d84b4d45c-4pmzn" Dec 09 12:32:44 crc kubenswrapper[4970]: I1209 12:32:44.260466 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae772c09-ed58-455f-90d5-9f16e4f5aefb-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-4pmzn\" (UID: \"ae772c09-ed58-455f-90d5-9f16e4f5aefb\") " pod="openstack/dnsmasq-dns-7d84b4d45c-4pmzn" Dec 09 12:32:44 crc kubenswrapper[4970]: I1209 12:32:44.260550 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae772c09-ed58-455f-90d5-9f16e4f5aefb-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-4pmzn\" (UID: \"ae772c09-ed58-455f-90d5-9f16e4f5aefb\") " pod="openstack/dnsmasq-dns-7d84b4d45c-4pmzn" Dec 09 12:32:44 crc kubenswrapper[4970]: I1209 12:32:44.264609 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"026b54d0-03a5-4346-b137-1d297204d22b","Type":"ContainerStarted","Data":"2a7c067f1e5518699c248e4bf5b523b76dceb163deda2c1befcb4b340cde91ee"} Dec 09 12:32:44 crc kubenswrapper[4970]: I1209 12:32:44.281437 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rrprb" event={"ID":"95d83ff6-47a4-45c5-a5a0-49c17cd7f309","Type":"ContainerStarted","Data":"ff835239d9d416541669f0ce034ed52590ae16641971de055980c8ab2e0261b5"} Dec 09 12:32:44 crc kubenswrapper[4970]: I1209 12:32:44.363205 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d48th\" (UniqueName: \"kubernetes.io/projected/ae772c09-ed58-455f-90d5-9f16e4f5aefb-kube-api-access-d48th\") pod \"dnsmasq-dns-7d84b4d45c-4pmzn\" (UID: \"ae772c09-ed58-455f-90d5-9f16e4f5aefb\") " pod="openstack/dnsmasq-dns-7d84b4d45c-4pmzn" Dec 09 12:32:44 crc kubenswrapper[4970]: I1209 12:32:44.391364 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-4pmzn" Dec 09 12:32:44 crc kubenswrapper[4970]: I1209 12:32:44.814137 4970 scope.go:117] "RemoveContainer" containerID="7fd6f109e392e855fb1d78959e8fc5739de01f58f1c3cbb8ffead2654c7f16b5" Dec 09 12:32:44 crc kubenswrapper[4970]: E1209 12:32:44.814701 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:32:45 crc kubenswrapper[4970]: I1209 12:32:45.083064 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-4pmzn"] Dec 09 12:32:45 crc kubenswrapper[4970]: W1209 12:32:45.141721 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae772c09_ed58_455f_90d5_9f16e4f5aefb.slice/crio-4a615d2ddff28b26b3735a3d39280fe9bc5d150e626d15a927e0eb328820623b WatchSource:0}: Error finding container 4a615d2ddff28b26b3735a3d39280fe9bc5d150e626d15a927e0eb328820623b: Status 404 returned error can't find the container with id 4a615d2ddff28b26b3735a3d39280fe9bc5d150e626d15a927e0eb328820623b Dec 09 12:32:45 crc kubenswrapper[4970]: I1209 12:32:45.302101 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-4pmzn" event={"ID":"ae772c09-ed58-455f-90d5-9f16e4f5aefb","Type":"ContainerStarted","Data":"4a615d2ddff28b26b3735a3d39280fe9bc5d150e626d15a927e0eb328820623b"} Dec 09 12:32:45 crc kubenswrapper[4970]: I1209 12:32:45.306298 4970 generic.go:334] "Generic (PLEG): container finished" podID="95d83ff6-47a4-45c5-a5a0-49c17cd7f309" containerID="fb776ca29a540b400c7554d5aaab109ac9954e34d61250207a6f129e2f826256" exitCode=0 Dec 09 12:32:45 crc kubenswrapper[4970]: I1209 12:32:45.306496 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rrprb" event={"ID":"95d83ff6-47a4-45c5-a5a0-49c17cd7f309","Type":"ContainerDied","Data":"fb776ca29a540b400c7554d5aaab109ac9954e34d61250207a6f129e2f826256"} Dec 09 12:32:45 crc kubenswrapper[4970]: I1209 12:32:45.308535 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2547ef6a-6a22-4564-8db0-8c7ed5b166fd","Type":"ContainerStarted","Data":"ca027f935fd425843bfe5e0c03705f1e08ef0fff341b1c9c218d4d51d3409cba"} Dec 09 12:32:46 crc kubenswrapper[4970]: I1209 12:32:46.320428 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rrprb" event={"ID":"95d83ff6-47a4-45c5-a5a0-49c17cd7f309","Type":"ContainerStarted","Data":"6dadccf502f88fc948fff0d31dab8aac848d61de1f4e8657901a900515560a39"} Dec 09 12:32:46 crc kubenswrapper[4970]: I1209 12:32:46.323462 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"026b54d0-03a5-4346-b137-1d297204d22b","Type":"ContainerStarted","Data":"4c8987235a4e2de3d539819ef3e5d6e07ed661df726c15ce8742cd6a2ec4907d"} Dec 09 12:32:46 crc kubenswrapper[4970]: I1209 12:32:46.325294 4970 generic.go:334] "Generic (PLEG): container finished" podID="ae772c09-ed58-455f-90d5-9f16e4f5aefb" containerID="9b1fbde0d51b59018f433c7d4c1f3bc8089323c52dcb428fb5cc223de8f43107" exitCode=0 Dec 09 12:32:46 crc 
Dec 09 12:32:46 crc kubenswrapper[4970]: I1209 12:32:46.325455 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-4pmzn" event={"ID":"ae772c09-ed58-455f-90d5-9f16e4f5aefb","Type":"ContainerDied","Data":"9b1fbde0d51b59018f433c7d4c1f3bc8089323c52dcb428fb5cc223de8f43107"}
Dec 09 12:32:47 crc kubenswrapper[4970]: I1209 12:32:47.344108 4970 generic.go:334] "Generic (PLEG): container finished" podID="95d83ff6-47a4-45c5-a5a0-49c17cd7f309" containerID="6dadccf502f88fc948fff0d31dab8aac848d61de1f4e8657901a900515560a39" exitCode=0
Dec 09 12:32:47 crc kubenswrapper[4970]: I1209 12:32:47.344398 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rrprb" event={"ID":"95d83ff6-47a4-45c5-a5a0-49c17cd7f309","Type":"ContainerDied","Data":"6dadccf502f88fc948fff0d31dab8aac848d61de1f4e8657901a900515560a39"}
Dec 09 12:32:47 crc kubenswrapper[4970]: I1209 12:32:47.350491 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-4pmzn" event={"ID":"ae772c09-ed58-455f-90d5-9f16e4f5aefb","Type":"ContainerStarted","Data":"95fe8f1182a7d436082fc9b369ff41ea9510031781f8713069122c5a6b30371a"}
Dec 09 12:32:47 crc kubenswrapper[4970]: I1209 12:32:47.396933 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d84b4d45c-4pmzn" podStartSLOduration=4.396912812 podStartE2EDuration="4.396912812s" podCreationTimestamp="2025-12-09 12:32:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:32:47.392155966 +0000 UTC m=+1579.952637017" watchObservedRunningTime="2025-12-09 12:32:47.396912812 +0000 UTC m=+1579.957393863"
Dec 09 12:32:48 crc kubenswrapper[4970]: E1209 12:32:48.261691 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56ac190c_b9e3_4453_9581_58141f4f59cc.slice/crio-7988d46ab587f6dc5108584a799e4404bd720618e722f4fdadeadfb949abf864\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56ac190c_b9e3_4453_9581_58141f4f59cc.slice\": RecentStats: unable to find data in memory cache]"
Dec 09 12:32:48 crc kubenswrapper[4970]: E1209 12:32:48.261756 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56ac190c_b9e3_4453_9581_58141f4f59cc.slice/crio-7988d46ab587f6dc5108584a799e4404bd720618e722f4fdadeadfb949abf864\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56ac190c_b9e3_4453_9581_58141f4f59cc.slice\": RecentStats: unable to find data in memory cache]"
Dec 09 12:32:48 crc kubenswrapper[4970]: I1209 12:32:48.361394 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d84b4d45c-4pmzn"
Dec 09 12:32:49 crc kubenswrapper[4970]: I1209 12:32:49.377206 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rrprb" event={"ID":"95d83ff6-47a4-45c5-a5a0-49c17cd7f309","Type":"ContainerStarted","Data":"4e31b27d7733a0a6ddfdce658a4159f8400403fdfda4279ab45ac1c286663a27"}
Dec 09 12:32:49 crc kubenswrapper[4970]: I1209 12:32:49.399189 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rrprb" podStartSLOduration=4.678160466 podStartE2EDuration="7.399166297s" podCreationTimestamp="2025-12-09 12:32:42 +0000 UTC" firstStartedPulling="2025-12-09 12:32:45.310787853 +0000 UTC m=+1577.871268904" lastFinishedPulling="2025-12-09 12:32:48.031793674 +0000 UTC m=+1580.592274735" observedRunningTime="2025-12-09 12:32:49.395361646 +0000 UTC m=+1581.955842707" watchObservedRunningTime="2025-12-09 12:32:49.399166297 +0000 UTC m=+1581.959647368"
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 12:32:51 crc kubenswrapper[4970]: E1209 12:32:51.949235 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n679h5b7h5f8h695h656h57bh5c7h7bh5c5hc4h5hbch654h5h5b7h54dh5fh688h548h57bh6fh54fh5d4hf7h5ffh54ch566h565hdchb4h65dh5fbq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqqs9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ea52f6b9-599e-4ac5-94c6-79949c705be8): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 12:32:51 crc kubenswrapper[4970]: E1209 12:32:51.950471 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:32:52 crc kubenswrapper[4970]: E1209 12:32:52.414980 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:32:52 crc kubenswrapper[4970]: I1209 12:32:52.768807 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rrprb" Dec 09 12:32:52 crc kubenswrapper[4970]: I1209 12:32:52.768852 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rrprb" Dec 09 12:32:52 crc kubenswrapper[4970]: I1209 12:32:52.821518 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rrprb" Dec 09 12:32:53 crc kubenswrapper[4970]: I1209 12:32:53.487847 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rrprb" Dec 09 12:32:53 crc kubenswrapper[4970]: I1209 12:32:53.545298 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rrprb"] Dec 09 12:32:54 crc kubenswrapper[4970]: I1209 12:32:54.393381 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d84b4d45c-4pmzn" Dec 09 12:32:54 crc kubenswrapper[4970]: I1209 12:32:54.456621 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-z6lgq"] Dec 09 12:32:54 crc kubenswrapper[4970]: I1209 12:32:54.456873 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7bbf7cf9-z6lgq" podUID="b9b69347-ce23-455a-9ede-a31b66193240" containerName="dnsmasq-dns" containerID="cri-o://fb1fc12b58e30a8931a806a91b12b8aa8d1abd90b89a03b9c60837767e4249a1" gracePeriod=10 Dec 09 12:32:54 crc kubenswrapper[4970]: I1209 12:32:54.642665 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-jwwtx"] Dec 09 12:32:54 crc kubenswrapper[4970]: I1209 12:32:54.644904 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-jwwtx" Dec 09 12:32:54 crc kubenswrapper[4970]: I1209 12:32:54.690032 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-jwwtx"] Dec 09 12:32:54 crc kubenswrapper[4970]: I1209 12:32:54.711046 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bac29da9-2775-4d04-8f3a-2f65bc11940e-config\") pod \"dnsmasq-dns-6f6df4f56c-jwwtx\" (UID: \"bac29da9-2775-4d04-8f3a-2f65bc11940e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jwwtx" Dec 09 12:32:54 crc kubenswrapper[4970]: I1209 12:32:54.711119 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg74w\" (UniqueName: \"kubernetes.io/projected/bac29da9-2775-4d04-8f3a-2f65bc11940e-kube-api-access-fg74w\") pod \"dnsmasq-dns-6f6df4f56c-jwwtx\" (UID: \"bac29da9-2775-4d04-8f3a-2f65bc11940e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jwwtx" Dec 09 12:32:54 crc kubenswrapper[4970]: I1209 12:32:54.711177 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bac29da9-2775-4d04-8f3a-2f65bc11940e-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-jwwtx\" (UID: \"bac29da9-2775-4d04-8f3a-2f65bc11940e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jwwtx" Dec 09 12:32:54 crc kubenswrapper[4970]: I1209 12:32:54.711236 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/bac29da9-2775-4d04-8f3a-2f65bc11940e-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-jwwtx\" (UID: \"bac29da9-2775-4d04-8f3a-2f65bc11940e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jwwtx" Dec 09 12:32:54 crc kubenswrapper[4970]: I1209 12:32:54.711279 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bac29da9-2775-4d04-8f3a-2f65bc11940e-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-jwwtx\" (UID: \"bac29da9-2775-4d04-8f3a-2f65bc11940e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jwwtx" Dec 09 12:32:54 crc kubenswrapper[4970]: I1209 12:32:54.711304 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bac29da9-2775-4d04-8f3a-2f65bc11940e-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-jwwtx\" (UID: \"bac29da9-2775-4d04-8f3a-2f65bc11940e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jwwtx" Dec 09 12:32:54 crc kubenswrapper[4970]: I1209 12:32:54.711394 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bac29da9-2775-4d04-8f3a-2f65bc11940e-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-jwwtx\" (UID: \"bac29da9-2775-4d04-8f3a-2f65bc11940e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jwwtx" Dec 09 12:32:54 crc kubenswrapper[4970]: I1209 12:32:54.816077 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bac29da9-2775-4d04-8f3a-2f65bc11940e-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-jwwtx\" (UID: \"bac29da9-2775-4d04-8f3a-2f65bc11940e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jwwtx" Dec 09 12:32:54 crc kubenswrapper[4970]: I1209 12:32:54.816714 4970 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bac29da9-2775-4d04-8f3a-2f65bc11940e-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-jwwtx\" (UID: \"bac29da9-2775-4d04-8f3a-2f65bc11940e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jwwtx" Dec 09 12:32:54 crc kubenswrapper[4970]: I1209 12:32:54.817431 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bac29da9-2775-4d04-8f3a-2f65bc11940e-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-jwwtx\" (UID: \"bac29da9-2775-4d04-8f3a-2f65bc11940e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jwwtx" Dec 09 12:32:54 crc kubenswrapper[4970]: I1209 12:32:54.817505 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bac29da9-2775-4d04-8f3a-2f65bc11940e-config\") pod \"dnsmasq-dns-6f6df4f56c-jwwtx\" (UID: \"bac29da9-2775-4d04-8f3a-2f65bc11940e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jwwtx" Dec 09 12:32:54 crc kubenswrapper[4970]: I1209 12:32:54.819736 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg74w\" (UniqueName: \"kubernetes.io/projected/bac29da9-2775-4d04-8f3a-2f65bc11940e-kube-api-access-fg74w\") pod \"dnsmasq-dns-6f6df4f56c-jwwtx\" (UID: \"bac29da9-2775-4d04-8f3a-2f65bc11940e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jwwtx" Dec 09 12:32:54 crc kubenswrapper[4970]: I1209 12:32:54.819863 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bac29da9-2775-4d04-8f3a-2f65bc11940e-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-jwwtx\" (UID: \"bac29da9-2775-4d04-8f3a-2f65bc11940e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jwwtx" Dec 09 12:32:54 crc kubenswrapper[4970]: I1209 12:32:54.820008 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/bac29da9-2775-4d04-8f3a-2f65bc11940e-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-jwwtx\" (UID: \"bac29da9-2775-4d04-8f3a-2f65bc11940e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jwwtx" Dec 09 12:32:54 crc kubenswrapper[4970]: I1209 12:32:54.820372 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bac29da9-2775-4d04-8f3a-2f65bc11940e-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6df4f56c-jwwtx\" (UID: \"bac29da9-2775-4d04-8f3a-2f65bc11940e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jwwtx" Dec 09 12:32:54 crc kubenswrapper[4970]: I1209 12:32:54.820660 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/bac29da9-2775-4d04-8f3a-2f65bc11940e-openstack-edpm-ipam\") pod \"dnsmasq-dns-6f6df4f56c-jwwtx\" (UID: \"bac29da9-2775-4d04-8f3a-2f65bc11940e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jwwtx" Dec 09 12:32:54 crc kubenswrapper[4970]: I1209 12:32:54.821176 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bac29da9-2775-4d04-8f3a-2f65bc11940e-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6df4f56c-jwwtx\" (UID: \"bac29da9-2775-4d04-8f3a-2f65bc11940e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jwwtx" Dec 09 12:32:54 crc kubenswrapper[4970]: I1209 12:32:54.821694 4970 operation_generator.go:637] "MountVolume.SetUp 
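Each of the jwwtx volumes above walks the same three reconciler phases, in order: operationExecutor.VerifyControllerAttachedVolume started, then operationExecutor.MountVolume started, then MountVolume.SetUp succeeded. A sketch that reports how far each (pod, volume) pair progressed, useful for spotting a volume stuck before SetUp; it assumes the one-entry-per-line layout used here, and the helper name is hypothetical:

```python
import re
from collections import defaultdict

PHASES = (
    "operationExecutor.VerifyControllerAttachedVolume started",
    "operationExecutor.MountVolume started",
    "MountVolume.SetUp succeeded",
)
# Volume name appears as \"name\"; the pod as a trailing pod="ns/name" field.
VOL = re.compile(r'for volume \\"([^\\"]+)\\".*pod="([^"]+)"')

def mount_progress(lines):
    """Map (pod, volume) -> highest phase index reached (1..3)."""
    progress = defaultdict(int)
    for line in lines:
        for i, phase in enumerate(PHASES, start=1):
            if phase in line and (m := VOL.search(line)):
                volume, pod = m.groups()
                progress[(pod, volume)] = max(progress[(pod, volume)], i)
    return progress
```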
succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bac29da9-2775-4d04-8f3a-2f65bc11940e-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6df4f56c-jwwtx\" (UID: \"bac29da9-2775-4d04-8f3a-2f65bc11940e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jwwtx" Dec 09 12:32:54 crc kubenswrapper[4970]: I1209 12:32:54.821738 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bac29da9-2775-4d04-8f3a-2f65bc11940e-config\") pod \"dnsmasq-dns-6f6df4f56c-jwwtx\" (UID: \"bac29da9-2775-4d04-8f3a-2f65bc11940e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jwwtx" Dec 09 12:32:54 crc kubenswrapper[4970]: I1209 12:32:54.837018 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bac29da9-2775-4d04-8f3a-2f65bc11940e-dns-svc\") pod \"dnsmasq-dns-6f6df4f56c-jwwtx\" (UID: \"bac29da9-2775-4d04-8f3a-2f65bc11940e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jwwtx" Dec 09 12:32:54 crc kubenswrapper[4970]: I1209 12:32:54.850218 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg74w\" (UniqueName: \"kubernetes.io/projected/bac29da9-2775-4d04-8f3a-2f65bc11940e-kube-api-access-fg74w\") pod \"dnsmasq-dns-6f6df4f56c-jwwtx\" (UID: \"bac29da9-2775-4d04-8f3a-2f65bc11940e\") " pod="openstack/dnsmasq-dns-6f6df4f56c-jwwtx" Dec 09 12:32:54 crc kubenswrapper[4970]: I1209 12:32:54.974899 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6df4f56c-jwwtx" Dec 09 12:32:55 crc kubenswrapper[4970]: I1209 12:32:55.193396 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-z6lgq" Dec 09 12:32:55 crc kubenswrapper[4970]: I1209 12:32:55.233658 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b9b69347-ce23-455a-9ede-a31b66193240-dns-swift-storage-0\") pod \"b9b69347-ce23-455a-9ede-a31b66193240\" (UID: \"b9b69347-ce23-455a-9ede-a31b66193240\") " Dec 09 12:32:55 crc kubenswrapper[4970]: I1209 12:32:55.233743 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9b69347-ce23-455a-9ede-a31b66193240-dns-svc\") pod \"b9b69347-ce23-455a-9ede-a31b66193240\" (UID: \"b9b69347-ce23-455a-9ede-a31b66193240\") " Dec 09 12:32:55 crc kubenswrapper[4970]: I1209 12:32:55.233797 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9b69347-ce23-455a-9ede-a31b66193240-config\") pod \"b9b69347-ce23-455a-9ede-a31b66193240\" (UID: \"b9b69347-ce23-455a-9ede-a31b66193240\") " Dec 09 12:32:55 crc kubenswrapper[4970]: I1209 12:32:55.233827 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b9b69347-ce23-455a-9ede-a31b66193240-ovsdbserver-sb\") pod \"b9b69347-ce23-455a-9ede-a31b66193240\" (UID: \"b9b69347-ce23-455a-9ede-a31b66193240\") " Dec 09 12:32:55 crc kubenswrapper[4970]: I1209 12:32:55.233899 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cc46\" (UniqueName: \"kubernetes.io/projected/b9b69347-ce23-455a-9ede-a31b66193240-kube-api-access-5cc46\") pod \"b9b69347-ce23-455a-9ede-a31b66193240\" (UID: \"b9b69347-ce23-455a-9ede-a31b66193240\") " Dec 09 
12:32:55 crc kubenswrapper[4970]: I1209 12:32:55.233972 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b9b69347-ce23-455a-9ede-a31b66193240-ovsdbserver-nb\") pod \"b9b69347-ce23-455a-9ede-a31b66193240\" (UID: \"b9b69347-ce23-455a-9ede-a31b66193240\") " Dec 09 12:32:55 crc kubenswrapper[4970]: I1209 12:32:55.256916 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9b69347-ce23-455a-9ede-a31b66193240-kube-api-access-5cc46" (OuterVolumeSpecName: "kube-api-access-5cc46") pod "b9b69347-ce23-455a-9ede-a31b66193240" (UID: "b9b69347-ce23-455a-9ede-a31b66193240"). InnerVolumeSpecName "kube-api-access-5cc46". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:32:55 crc kubenswrapper[4970]: I1209 12:32:55.320735 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9b69347-ce23-455a-9ede-a31b66193240-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b9b69347-ce23-455a-9ede-a31b66193240" (UID: "b9b69347-ce23-455a-9ede-a31b66193240"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:32:55 crc kubenswrapper[4970]: I1209 12:32:55.323833 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9b69347-ce23-455a-9ede-a31b66193240-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b9b69347-ce23-455a-9ede-a31b66193240" (UID: "b9b69347-ce23-455a-9ede-a31b66193240"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:32:55 crc kubenswrapper[4970]: I1209 12:32:55.326842 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9b69347-ce23-455a-9ede-a31b66193240-config" (OuterVolumeSpecName: "config") pod "b9b69347-ce23-455a-9ede-a31b66193240" (UID: "b9b69347-ce23-455a-9ede-a31b66193240"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:32:55 crc kubenswrapper[4970]: I1209 12:32:55.332739 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9b69347-ce23-455a-9ede-a31b66193240-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b9b69347-ce23-455a-9ede-a31b66193240" (UID: "b9b69347-ce23-455a-9ede-a31b66193240"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:32:55 crc kubenswrapper[4970]: I1209 12:32:55.349387 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9b69347-ce23-455a-9ede-a31b66193240-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b9b69347-ce23-455a-9ede-a31b66193240" (UID: "b9b69347-ce23-455a-9ede-a31b66193240"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:32:55 crc kubenswrapper[4970]: I1209 12:32:55.352165 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b9b69347-ce23-455a-9ede-a31b66193240-dns-swift-storage-0\") pod \"b9b69347-ce23-455a-9ede-a31b66193240\" (UID: \"b9b69347-ce23-455a-9ede-a31b66193240\") " Dec 09 12:32:55 crc kubenswrapper[4970]: W1209 12:32:55.352361 4970 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/b9b69347-ce23-455a-9ede-a31b66193240/volumes/kubernetes.io~configmap/dns-swift-storage-0 Dec 09 12:32:55 crc kubenswrapper[4970]: I1209 12:32:55.352400 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9b69347-ce23-455a-9ede-a31b66193240-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b9b69347-ce23-455a-9ede-a31b66193240" (UID: "b9b69347-ce23-455a-9ede-a31b66193240"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:32:55 crc kubenswrapper[4970]: I1209 12:32:55.352885 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b9b69347-ce23-455a-9ede-a31b66193240-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:55 crc kubenswrapper[4970]: I1209 12:32:55.352907 4970 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b9b69347-ce23-455a-9ede-a31b66193240-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:55 crc kubenswrapper[4970]: I1209 12:32:55.352917 4970 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b9b69347-ce23-455a-9ede-a31b66193240-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:55 crc kubenswrapper[4970]: I1209 12:32:55.352928 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9b69347-ce23-455a-9ede-a31b66193240-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:55 crc kubenswrapper[4970]: I1209 12:32:55.352936 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b9b69347-ce23-455a-9ede-a31b66193240-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:55 crc kubenswrapper[4970]: I1209 12:32:55.352945 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cc46\" (UniqueName: \"kubernetes.io/projected/b9b69347-ce23-455a-9ede-a31b66193240-kube-api-access-5cc46\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:55 crc kubenswrapper[4970]: I1209 12:32:55.448568 4970 generic.go:334] "Generic (PLEG): container finished" podID="b9b69347-ce23-455a-9ede-a31b66193240" containerID="fb1fc12b58e30a8931a806a91b12b8aa8d1abd90b89a03b9c60837767e4249a1" exitCode=0 Dec 09 12:32:55 crc kubenswrapper[4970]: I1209 12:32:55.448626 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-z6lgq" Dec 09 12:32:55 crc kubenswrapper[4970]: I1209 12:32:55.448659 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-z6lgq" event={"ID":"b9b69347-ce23-455a-9ede-a31b66193240","Type":"ContainerDied","Data":"fb1fc12b58e30a8931a806a91b12b8aa8d1abd90b89a03b9c60837767e4249a1"} Dec 09 12:32:55 crc kubenswrapper[4970]: I1209 12:32:55.448703 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-z6lgq" event={"ID":"b9b69347-ce23-455a-9ede-a31b66193240","Type":"ContainerDied","Data":"18107f7693d9da8ea51fbf90f26f1085eeb741d63b1376c9df52549d0ca34eff"} Dec 09 12:32:55 crc kubenswrapper[4970]: I1209 12:32:55.448722 4970 scope.go:117] "RemoveContainer" containerID="fb1fc12b58e30a8931a806a91b12b8aa8d1abd90b89a03b9c60837767e4249a1" Dec 09 12:32:55 crc kubenswrapper[4970]: I1209 12:32:55.448955 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rrprb" podUID="95d83ff6-47a4-45c5-a5a0-49c17cd7f309" containerName="registry-server" containerID="cri-o://4e31b27d7733a0a6ddfdce658a4159f8400403fdfda4279ab45ac1c286663a27" gracePeriod=2 Dec 09 12:32:55 crc kubenswrapper[4970]: I1209 12:32:55.484286 4970 scope.go:117] "RemoveContainer" containerID="709ee67de1c4fb22f7b272aa6b456b056599dd19086174271e9b5b7ace49607c" Dec 09 12:32:55 crc kubenswrapper[4970]: I1209 12:32:55.490696 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-z6lgq"] Dec 09 12:32:55 crc kubenswrapper[4970]: I1209 12:32:55.501580 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-z6lgq"] Dec 09 12:32:55 crc kubenswrapper[4970]: I1209 12:32:55.518139 4970 scope.go:117] "RemoveContainer" containerID="fb1fc12b58e30a8931a806a91b12b8aa8d1abd90b89a03b9c60837767e4249a1" Dec 09 12:32:55 crc kubenswrapper[4970]: E1209 12:32:55.518690 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb1fc12b58e30a8931a806a91b12b8aa8d1abd90b89a03b9c60837767e4249a1\": container with ID starting with fb1fc12b58e30a8931a806a91b12b8aa8d1abd90b89a03b9c60837767e4249a1 not found: ID does not exist" containerID="fb1fc12b58e30a8931a806a91b12b8aa8d1abd90b89a03b9c60837767e4249a1" Dec 09 12:32:55 crc kubenswrapper[4970]: I1209 12:32:55.518737 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb1fc12b58e30a8931a806a91b12b8aa8d1abd90b89a03b9c60837767e4249a1"} err="failed to get container status \"fb1fc12b58e30a8931a806a91b12b8aa8d1abd90b89a03b9c60837767e4249a1\": rpc error: code = NotFound desc = could not find container \"fb1fc12b58e30a8931a806a91b12b8aa8d1abd90b89a03b9c60837767e4249a1\": container with ID starting with fb1fc12b58e30a8931a806a91b12b8aa8d1abd90b89a03b9c60837767e4249a1 not found: ID does not exist" Dec 09 12:32:55 crc kubenswrapper[4970]: I1209 12:32:55.518769 4970 scope.go:117] "RemoveContainer" containerID="709ee67de1c4fb22f7b272aa6b456b056599dd19086174271e9b5b7ace49607c" Dec 09 12:32:55 crc kubenswrapper[4970]: E1209 12:32:55.519112 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"709ee67de1c4fb22f7b272aa6b456b056599dd19086174271e9b5b7ace49607c\": container with ID starting with 709ee67de1c4fb22f7b272aa6b456b056599dd19086174271e9b5b7ace49607c not found: ID does not exist" 
containerID="709ee67de1c4fb22f7b272aa6b456b056599dd19086174271e9b5b7ace49607c" Dec 09 12:32:55 crc kubenswrapper[4970]: I1209 12:32:55.519158 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"709ee67de1c4fb22f7b272aa6b456b056599dd19086174271e9b5b7ace49607c"} err="failed to get container status \"709ee67de1c4fb22f7b272aa6b456b056599dd19086174271e9b5b7ace49607c\": rpc error: code = NotFound desc = could not find container \"709ee67de1c4fb22f7b272aa6b456b056599dd19086174271e9b5b7ace49607c\": container with ID starting with 709ee67de1c4fb22f7b272aa6b456b056599dd19086174271e9b5b7ace49607c not found: ID does not exist" Dec 09 12:32:55 crc kubenswrapper[4970]: I1209 12:32:55.564220 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6df4f56c-jwwtx"] Dec 09 12:32:55 crc kubenswrapper[4970]: W1209 12:32:55.584364 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbac29da9_2775_4d04_8f3a_2f65bc11940e.slice/crio-8db339e3dce063c02cd0fee980db2f6d516f835f24d86f9697d0374090ed607f WatchSource:0}: Error finding container 8db339e3dce063c02cd0fee980db2f6d516f835f24d86f9697d0374090ed607f: Status 404 returned error can't find the container with id 8db339e3dce063c02cd0fee980db2f6d516f835f24d86f9697d0374090ed607f Dec 09 12:32:55 crc kubenswrapper[4970]: I1209 12:32:55.834033 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9b69347-ce23-455a-9ede-a31b66193240" path="/var/lib/kubelet/pods/b9b69347-ce23-455a-9ede-a31b66193240/volumes" Dec 09 12:32:56 crc kubenswrapper[4970]: I1209 12:32:56.060621 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rrprb" Dec 09 12:32:56 crc kubenswrapper[4970]: I1209 12:32:56.201514 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95d83ff6-47a4-45c5-a5a0-49c17cd7f309-catalog-content\") pod \"95d83ff6-47a4-45c5-a5a0-49c17cd7f309\" (UID: \"95d83ff6-47a4-45c5-a5a0-49c17cd7f309\") " Dec 09 12:32:56 crc kubenswrapper[4970]: I1209 12:32:56.201645 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5pv6\" (UniqueName: \"kubernetes.io/projected/95d83ff6-47a4-45c5-a5a0-49c17cd7f309-kube-api-access-t5pv6\") pod \"95d83ff6-47a4-45c5-a5a0-49c17cd7f309\" (UID: \"95d83ff6-47a4-45c5-a5a0-49c17cd7f309\") " Dec 09 12:32:56 crc kubenswrapper[4970]: I1209 12:32:56.201947 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95d83ff6-47a4-45c5-a5a0-49c17cd7f309-utilities\") pod \"95d83ff6-47a4-45c5-a5a0-49c17cd7f309\" (UID: \"95d83ff6-47a4-45c5-a5a0-49c17cd7f309\") " Dec 09 12:32:56 crc kubenswrapper[4970]: I1209 12:32:56.203229 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95d83ff6-47a4-45c5-a5a0-49c17cd7f309-utilities" (OuterVolumeSpecName: "utilities") pod "95d83ff6-47a4-45c5-a5a0-49c17cd7f309" (UID: "95d83ff6-47a4-45c5-a5a0-49c17cd7f309"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:32:56 crc kubenswrapper[4970]: I1209 12:32:56.209498 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95d83ff6-47a4-45c5-a5a0-49c17cd7f309-kube-api-access-t5pv6" (OuterVolumeSpecName: "kube-api-access-t5pv6") pod "95d83ff6-47a4-45c5-a5a0-49c17cd7f309" (UID: "95d83ff6-47a4-45c5-a5a0-49c17cd7f309"). InnerVolumeSpecName "kube-api-access-t5pv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:32:56 crc kubenswrapper[4970]: I1209 12:32:56.266895 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95d83ff6-47a4-45c5-a5a0-49c17cd7f309-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "95d83ff6-47a4-45c5-a5a0-49c17cd7f309" (UID: "95d83ff6-47a4-45c5-a5a0-49c17cd7f309"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:32:56 crc kubenswrapper[4970]: I1209 12:32:56.304633 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95d83ff6-47a4-45c5-a5a0-49c17cd7f309-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:56 crc kubenswrapper[4970]: I1209 12:32:56.304664 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95d83ff6-47a4-45c5-a5a0-49c17cd7f309-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:56 crc kubenswrapper[4970]: I1209 12:32:56.304675 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5pv6\" (UniqueName: \"kubernetes.io/projected/95d83ff6-47a4-45c5-a5a0-49c17cd7f309-kube-api-access-t5pv6\") on node \"crc\" DevicePath \"\"" Dec 09 12:32:56 crc kubenswrapper[4970]: I1209 12:32:56.460865 4970 generic.go:334] "Generic (PLEG): container finished" podID="bac29da9-2775-4d04-8f3a-2f65bc11940e" containerID="58d8ee7fc74e70ff908e027357024fa938ee193edd956e00b71ec4f12126d337" exitCode=0 Dec 09 12:32:56 crc kubenswrapper[4970]: I1209 12:32:56.460959 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-jwwtx" event={"ID":"bac29da9-2775-4d04-8f3a-2f65bc11940e","Type":"ContainerDied","Data":"58d8ee7fc74e70ff908e027357024fa938ee193edd956e00b71ec4f12126d337"} Dec 09 12:32:56 crc kubenswrapper[4970]: I1209 12:32:56.461216 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-jwwtx" event={"ID":"bac29da9-2775-4d04-8f3a-2f65bc11940e","Type":"ContainerStarted","Data":"8db339e3dce063c02cd0fee980db2f6d516f835f24d86f9697d0374090ed607f"} Dec 09 12:32:56 crc kubenswrapper[4970]: I1209 12:32:56.466422 4970 generic.go:334] "Generic (PLEG): container finished" podID="95d83ff6-47a4-45c5-a5a0-49c17cd7f309" containerID="4e31b27d7733a0a6ddfdce658a4159f8400403fdfda4279ab45ac1c286663a27" exitCode=0 Dec 09 12:32:56 crc kubenswrapper[4970]: I1209 12:32:56.466468 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rrprb" Dec 09 12:32:56 crc kubenswrapper[4970]: I1209 12:32:56.466497 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rrprb" event={"ID":"95d83ff6-47a4-45c5-a5a0-49c17cd7f309","Type":"ContainerDied","Data":"4e31b27d7733a0a6ddfdce658a4159f8400403fdfda4279ab45ac1c286663a27"} Dec 09 12:32:56 crc kubenswrapper[4970]: I1209 12:32:56.466526 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rrprb" event={"ID":"95d83ff6-47a4-45c5-a5a0-49c17cd7f309","Type":"ContainerDied","Data":"ff835239d9d416541669f0ce034ed52590ae16641971de055980c8ab2e0261b5"} Dec 09 12:32:56 crc kubenswrapper[4970]: I1209 12:32:56.466546 4970 scope.go:117] "RemoveContainer" containerID="4e31b27d7733a0a6ddfdce658a4159f8400403fdfda4279ab45ac1c286663a27" Dec 09 12:32:56 crc kubenswrapper[4970]: I1209 12:32:56.513493 4970 scope.go:117] "RemoveContainer" containerID="6dadccf502f88fc948fff0d31dab8aac848d61de1f4e8657901a900515560a39" Dec 09 12:32:56 crc kubenswrapper[4970]: I1209 12:32:56.523978 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rrprb"] Dec 09 12:32:56 crc kubenswrapper[4970]: I1209 12:32:56.534507 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rrprb"] Dec 09 12:32:56 crc kubenswrapper[4970]: I1209 12:32:56.560102 4970 scope.go:117] "RemoveContainer" containerID="fb776ca29a540b400c7554d5aaab109ac9954e34d61250207a6f129e2f826256" Dec 09 12:32:56 crc kubenswrapper[4970]: I1209 12:32:56.590050 4970 scope.go:117] "RemoveContainer" containerID="4e31b27d7733a0a6ddfdce658a4159f8400403fdfda4279ab45ac1c286663a27" Dec 09 12:32:56 crc kubenswrapper[4970]: E1209 12:32:56.590571 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e31b27d7733a0a6ddfdce658a4159f8400403fdfda4279ab45ac1c286663a27\": container with ID starting with 4e31b27d7733a0a6ddfdce658a4159f8400403fdfda4279ab45ac1c286663a27 not found: ID does not exist" containerID="4e31b27d7733a0a6ddfdce658a4159f8400403fdfda4279ab45ac1c286663a27" Dec 09 12:32:56 crc kubenswrapper[4970]: I1209 12:32:56.590602 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e31b27d7733a0a6ddfdce658a4159f8400403fdfda4279ab45ac1c286663a27"} err="failed to get container status \"4e31b27d7733a0a6ddfdce658a4159f8400403fdfda4279ab45ac1c286663a27\": rpc error: code = NotFound desc = could not find container \"4e31b27d7733a0a6ddfdce658a4159f8400403fdfda4279ab45ac1c286663a27\": container with ID starting with 4e31b27d7733a0a6ddfdce658a4159f8400403fdfda4279ab45ac1c286663a27 not found: ID does not exist" Dec 09 12:32:56 crc kubenswrapper[4970]: I1209 12:32:56.590630 4970 scope.go:117] "RemoveContainer" containerID="6dadccf502f88fc948fff0d31dab8aac848d61de1f4e8657901a900515560a39" Dec 09 12:32:56 crc kubenswrapper[4970]: E1209 12:32:56.591058 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6dadccf502f88fc948fff0d31dab8aac848d61de1f4e8657901a900515560a39\": container with ID starting with 6dadccf502f88fc948fff0d31dab8aac848d61de1f4e8657901a900515560a39 not found: ID does not exist" containerID="6dadccf502f88fc948fff0d31dab8aac848d61de1f4e8657901a900515560a39" Dec 09 12:32:56 crc kubenswrapper[4970]: I1209 12:32:56.591083 4970 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dadccf502f88fc948fff0d31dab8aac848d61de1f4e8657901a900515560a39"} err="failed to get container status \"6dadccf502f88fc948fff0d31dab8aac848d61de1f4e8657901a900515560a39\": rpc error: code = NotFound desc = could not find container \"6dadccf502f88fc948fff0d31dab8aac848d61de1f4e8657901a900515560a39\": container with ID starting with 6dadccf502f88fc948fff0d31dab8aac848d61de1f4e8657901a900515560a39 not found: ID does not exist" Dec 09 12:32:56 crc kubenswrapper[4970]: I1209 12:32:56.591102 4970 scope.go:117] "RemoveContainer" containerID="fb776ca29a540b400c7554d5aaab109ac9954e34d61250207a6f129e2f826256" Dec 09 12:32:56 crc kubenswrapper[4970]: E1209 12:32:56.591591 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb776ca29a540b400c7554d5aaab109ac9954e34d61250207a6f129e2f826256\": container with ID starting with fb776ca29a540b400c7554d5aaab109ac9954e34d61250207a6f129e2f826256 not found: ID does not exist" containerID="fb776ca29a540b400c7554d5aaab109ac9954e34d61250207a6f129e2f826256" Dec 09 12:32:56 crc kubenswrapper[4970]: I1209 12:32:56.591613 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb776ca29a540b400c7554d5aaab109ac9954e34d61250207a6f129e2f826256"} err="failed to get container status \"fb776ca29a540b400c7554d5aaab109ac9954e34d61250207a6f129e2f826256\": rpc error: code = NotFound desc = could not find container \"fb776ca29a540b400c7554d5aaab109ac9954e34d61250207a6f129e2f826256\": container with ID starting with fb776ca29a540b400c7554d5aaab109ac9954e34d61250207a6f129e2f826256 not found: ID does not exist" Dec 09 12:32:57 crc kubenswrapper[4970]: E1209 12:32:57.027814 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56ac190c_b9e3_4453_9581_58141f4f59cc.slice/crio-7988d46ab587f6dc5108584a799e4404bd720618e722f4fdadeadfb949abf864\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56ac190c_b9e3_4453_9581_58141f4f59cc.slice\": RecentStats: unable to find data in memory cache]" Dec 09 12:32:57 crc kubenswrapper[4970]: I1209 12:32:57.494645 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6df4f56c-jwwtx" event={"ID":"bac29da9-2775-4d04-8f3a-2f65bc11940e","Type":"ContainerStarted","Data":"c92906e38e17c94bb24a9535f8804f72061fe81202f35ad9d05ed35252047d31"} Dec 09 12:32:57 crc kubenswrapper[4970]: I1209 12:32:57.494789 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6df4f56c-jwwtx" Dec 09 12:32:57 crc kubenswrapper[4970]: I1209 12:32:57.523744 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6df4f56c-jwwtx" podStartSLOduration=3.523718209 podStartE2EDuration="3.523718209s" podCreationTimestamp="2025-12-09 12:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:32:57.516078066 +0000 UTC m=+1590.076559157" watchObservedRunningTime="2025-12-09 12:32:57.523718209 +0000 UTC m=+1590.084199300" Dec 09 12:32:57 crc kubenswrapper[4970]: I1209 12:32:57.830800 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="95d83ff6-47a4-45c5-a5a0-49c17cd7f309" path="/var/lib/kubelet/pods/95d83ff6-47a4-45c5-a5a0-49c17cd7f309/volumes" Dec 09 12:32:58 crc kubenswrapper[4970]: I1209 12:32:58.812850 4970 scope.go:117] "RemoveContainer" containerID="7fd6f109e392e855fb1d78959e8fc5739de01f58f1c3cbb8ffead2654c7f16b5" Dec 09 12:32:58 crc kubenswrapper[4970]: E1209 12:32:58.813491 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:32:59 crc kubenswrapper[4970]: E1209 12:32:59.645521 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56ac190c_b9e3_4453_9581_58141f4f59cc.slice/crio-7988d46ab587f6dc5108584a799e4404bd720618e722f4fdadeadfb949abf864\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56ac190c_b9e3_4453_9581_58141f4f59cc.slice\": RecentStats: unable to find data in memory cache]" Dec 09 12:33:01 crc kubenswrapper[4970]: E1209 12:33:01.953519 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 12:33:01 crc kubenswrapper[4970]: E1209 12:33:01.953861 4970 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 12:33:01 crc kubenswrapper[4970]: E1209 12:33:01.954208 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w5gl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-snjvx_openstack(cda5b8c0-51ad-4a63-bf1d-e3546f098ad3): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 12:33:01 crc kubenswrapper[4970]: E1209 12:33:01.955454 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:33:03 crc kubenswrapper[4970]: E1209 12:33:03.816529 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:33:04 crc kubenswrapper[4970]: I1209 12:33:04.977568 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6df4f56c-jwwtx" Dec 09 12:33:05 crc kubenswrapper[4970]: I1209 12:33:05.045698 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-4pmzn"] Dec 09 12:33:05 crc kubenswrapper[4970]: I1209 12:33:05.046023 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d84b4d45c-4pmzn" podUID="ae772c09-ed58-455f-90d5-9f16e4f5aefb" containerName="dnsmasq-dns" containerID="cri-o://95fe8f1182a7d436082fc9b369ff41ea9510031781f8713069122c5a6b30371a" gracePeriod=10 Dec 09 12:33:05 crc kubenswrapper[4970]: I1209 12:33:05.591805 4970 generic.go:334] "Generic (PLEG): container finished" podID="ae772c09-ed58-455f-90d5-9f16e4f5aefb" containerID="95fe8f1182a7d436082fc9b369ff41ea9510031781f8713069122c5a6b30371a" exitCode=0 Dec 09 12:33:05 crc kubenswrapper[4970]: I1209 12:33:05.591924 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-4pmzn" event={"ID":"ae772c09-ed58-455f-90d5-9f16e4f5aefb","Type":"ContainerDied","Data":"95fe8f1182a7d436082fc9b369ff41ea9510031781f8713069122c5a6b30371a"} Dec 09 12:33:05 crc kubenswrapper[4970]: I1209 12:33:05.592082 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-4pmzn" event={"ID":"ae772c09-ed58-455f-90d5-9f16e4f5aefb","Type":"ContainerDied","Data":"4a615d2ddff28b26b3735a3d39280fe9bc5d150e626d15a927e0eb328820623b"} Dec 09 12:33:05 crc kubenswrapper[4970]: I1209 12:33:05.592104 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a615d2ddff28b26b3735a3d39280fe9bc5d150e626d15a927e0eb328820623b" Dec 09 12:33:05 crc kubenswrapper[4970]: I1209 12:33:05.625006 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-4pmzn" Dec 09 12:33:05 crc kubenswrapper[4970]: I1209 12:33:05.764083 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae772c09-ed58-455f-90d5-9f16e4f5aefb-ovsdbserver-nb\") pod \"ae772c09-ed58-455f-90d5-9f16e4f5aefb\" (UID: \"ae772c09-ed58-455f-90d5-9f16e4f5aefb\") " Dec 09 12:33:05 crc kubenswrapper[4970]: I1209 12:33:05.764343 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae772c09-ed58-455f-90d5-9f16e4f5aefb-dns-svc\") pod \"ae772c09-ed58-455f-90d5-9f16e4f5aefb\" (UID: \"ae772c09-ed58-455f-90d5-9f16e4f5aefb\") " Dec 09 12:33:05 crc kubenswrapper[4970]: I1209 12:33:05.764467 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ae772c09-ed58-455f-90d5-9f16e4f5aefb-openstack-edpm-ipam\") pod \"ae772c09-ed58-455f-90d5-9f16e4f5aefb\" (UID: \"ae772c09-ed58-455f-90d5-9f16e4f5aefb\") " Dec 09 12:33:05 crc kubenswrapper[4970]: I1209 12:33:05.764530 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae772c09-ed58-455f-90d5-9f16e4f5aefb-ovsdbserver-sb\") pod \"ae772c09-ed58-455f-90d5-9f16e4f5aefb\" (UID: \"ae772c09-ed58-455f-90d5-9f16e4f5aefb\") " Dec 09 12:33:05 crc kubenswrapper[4970]: I1209 12:33:05.764556 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae772c09-ed58-455f-90d5-9f16e4f5aefb-config\") pod \"ae772c09-ed58-455f-90d5-9f16e4f5aefb\" (UID: \"ae772c09-ed58-455f-90d5-9f16e4f5aefb\") " Dec 09 12:33:05 crc kubenswrapper[4970]: I1209 12:33:05.764621 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d48th\" (UniqueName: \"kubernetes.io/projected/ae772c09-ed58-455f-90d5-9f16e4f5aefb-kube-api-access-d48th\") pod \"ae772c09-ed58-455f-90d5-9f16e4f5aefb\" (UID: \"ae772c09-ed58-455f-90d5-9f16e4f5aefb\") " Dec 09 12:33:05 crc kubenswrapper[4970]: I1209 12:33:05.764647 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ae772c09-ed58-455f-90d5-9f16e4f5aefb-dns-swift-storage-0\") pod \"ae772c09-ed58-455f-90d5-9f16e4f5aefb\" (UID: \"ae772c09-ed58-455f-90d5-9f16e4f5aefb\") " Dec 09 12:33:05 crc kubenswrapper[4970]: I1209 12:33:05.778300 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae772c09-ed58-455f-90d5-9f16e4f5aefb-kube-api-access-d48th" (OuterVolumeSpecName: "kube-api-access-d48th") pod "ae772c09-ed58-455f-90d5-9f16e4f5aefb" (UID: "ae772c09-ed58-455f-90d5-9f16e4f5aefb"). InnerVolumeSpecName "kube-api-access-d48th". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:33:05 crc kubenswrapper[4970]: I1209 12:33:05.837522 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae772c09-ed58-455f-90d5-9f16e4f5aefb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ae772c09-ed58-455f-90d5-9f16e4f5aefb" (UID: "ae772c09-ed58-455f-90d5-9f16e4f5aefb"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:33:05 crc kubenswrapper[4970]: I1209 12:33:05.839735 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae772c09-ed58-455f-90d5-9f16e4f5aefb-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "ae772c09-ed58-455f-90d5-9f16e4f5aefb" (UID: "ae772c09-ed58-455f-90d5-9f16e4f5aefb"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:33:05 crc kubenswrapper[4970]: I1209 12:33:05.840873 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae772c09-ed58-455f-90d5-9f16e4f5aefb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ae772c09-ed58-455f-90d5-9f16e4f5aefb" (UID: "ae772c09-ed58-455f-90d5-9f16e4f5aefb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:33:05 crc kubenswrapper[4970]: I1209 12:33:05.842403 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae772c09-ed58-455f-90d5-9f16e4f5aefb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ae772c09-ed58-455f-90d5-9f16e4f5aefb" (UID: "ae772c09-ed58-455f-90d5-9f16e4f5aefb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:33:05 crc kubenswrapper[4970]: I1209 12:33:05.849747 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae772c09-ed58-455f-90d5-9f16e4f5aefb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ae772c09-ed58-455f-90d5-9f16e4f5aefb" (UID: "ae772c09-ed58-455f-90d5-9f16e4f5aefb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:33:05 crc kubenswrapper[4970]: I1209 12:33:05.867845 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d48th\" (UniqueName: \"kubernetes.io/projected/ae772c09-ed58-455f-90d5-9f16e4f5aefb-kube-api-access-d48th\") on node \"crc\" DevicePath \"\"" Dec 09 12:33:05 crc kubenswrapper[4970]: I1209 12:33:05.867885 4970 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ae772c09-ed58-455f-90d5-9f16e4f5aefb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 12:33:05 crc kubenswrapper[4970]: I1209 12:33:05.867895 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae772c09-ed58-455f-90d5-9f16e4f5aefb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 12:33:05 crc kubenswrapper[4970]: I1209 12:33:05.867906 4970 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae772c09-ed58-455f-90d5-9f16e4f5aefb-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 12:33:05 crc kubenswrapper[4970]: I1209 12:33:05.867934 4970 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ae772c09-ed58-455f-90d5-9f16e4f5aefb-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 09 12:33:05 crc kubenswrapper[4970]: I1209 12:33:05.867956 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae772c09-ed58-455f-90d5-9f16e4f5aefb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 12:33:05 crc kubenswrapper[4970]: I1209 12:33:05.878594 4970 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae772c09-ed58-455f-90d5-9f16e4f5aefb-config" (OuterVolumeSpecName: "config") pod "ae772c09-ed58-455f-90d5-9f16e4f5aefb" (UID: "ae772c09-ed58-455f-90d5-9f16e4f5aefb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:33:05 crc kubenswrapper[4970]: I1209 12:33:05.970296 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae772c09-ed58-455f-90d5-9f16e4f5aefb-config\") on node \"crc\" DevicePath \"\"" Dec 09 12:33:06 crc kubenswrapper[4970]: I1209 12:33:06.606027 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-4pmzn" Dec 09 12:33:06 crc kubenswrapper[4970]: I1209 12:33:06.639423 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-4pmzn"] Dec 09 12:33:06 crc kubenswrapper[4970]: I1209 12:33:06.652132 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-4pmzn"] Dec 09 12:33:07 crc kubenswrapper[4970]: I1209 12:33:07.827067 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae772c09-ed58-455f-90d5-9f16e4f5aefb" path="/var/lib/kubelet/pods/ae772c09-ed58-455f-90d5-9f16e4f5aefb/volumes" Dec 09 12:33:09 crc kubenswrapper[4970]: E1209 12:33:09.983580 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56ac190c_b9e3_4453_9581_58141f4f59cc.slice/crio-7988d46ab587f6dc5108584a799e4404bd720618e722f4fdadeadfb949abf864\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56ac190c_b9e3_4453_9581_58141f4f59cc.slice\": RecentStats: unable to find data in memory cache]" Dec 09 12:33:10 crc kubenswrapper[4970]: I1209 12:33:10.813053 4970 scope.go:117] "RemoveContainer" containerID="7fd6f109e392e855fb1d78959e8fc5739de01f58f1c3cbb8ffead2654c7f16b5" Dec 09 12:33:10 crc kubenswrapper[4970]: E1209 12:33:10.813387 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:33:11 crc kubenswrapper[4970]: E1209 12:33:11.754268 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56ac190c_b9e3_4453_9581_58141f4f59cc.slice/crio-7988d46ab587f6dc5108584a799e4404bd720618e722f4fdadeadfb949abf864\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56ac190c_b9e3_4453_9581_58141f4f59cc.slice\": RecentStats: unable to find data in memory cache]" Dec 09 12:33:14 crc kubenswrapper[4970]: E1209 12:33:14.929899 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in 
quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 12:33:14 crc kubenswrapper[4970]: E1209 12:33:14.930271 4970 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 12:33:14 crc kubenswrapper[4970]: E1209 12:33:14.930446 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n679h5b7h5f8h695h656h57bh5c7h7bh5c5hc4h5hbch654h5h5b7h54dh5fh688h548h57bh6fh54fh5d4hf7h5ffh54ch566h565hdchb4h65dh5fbq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqqs9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ea52f6b9-599e-4ac5-94c6-79949c705be8): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in 
quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 12:33:14 crc kubenswrapper[4970]: E1209 12:33:14.931952 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:33:15 crc kubenswrapper[4970]: I1209 12:33:15.127178 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4p2n7"] Dec 09 12:33:15 crc kubenswrapper[4970]: E1209 12:33:15.127761 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9b69347-ce23-455a-9ede-a31b66193240" containerName="dnsmasq-dns" Dec 09 12:33:15 crc kubenswrapper[4970]: I1209 12:33:15.127783 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9b69347-ce23-455a-9ede-a31b66193240" containerName="dnsmasq-dns" Dec 09 12:33:15 crc kubenswrapper[4970]: E1209 12:33:15.127803 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95d83ff6-47a4-45c5-a5a0-49c17cd7f309" containerName="registry-server" Dec 09 12:33:15 crc kubenswrapper[4970]: I1209 12:33:15.127810 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="95d83ff6-47a4-45c5-a5a0-49c17cd7f309" containerName="registry-server" Dec 09 12:33:15 crc kubenswrapper[4970]: E1209 12:33:15.127823 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae772c09-ed58-455f-90d5-9f16e4f5aefb" containerName="init" Dec 09 12:33:15 crc kubenswrapper[4970]: I1209 12:33:15.127834 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae772c09-ed58-455f-90d5-9f16e4f5aefb" containerName="init" Dec 09 12:33:15 crc kubenswrapper[4970]: E1209 12:33:15.127861 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9b69347-ce23-455a-9ede-a31b66193240" containerName="init" Dec 09 12:33:15 crc kubenswrapper[4970]: I1209 12:33:15.127868 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9b69347-ce23-455a-9ede-a31b66193240" containerName="init" Dec 09 12:33:15 crc kubenswrapper[4970]: E1209 12:33:15.127907 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95d83ff6-47a4-45c5-a5a0-49c17cd7f309" containerName="extract-content" Dec 09 12:33:15 crc kubenswrapper[4970]: I1209 12:33:15.127914 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="95d83ff6-47a4-45c5-a5a0-49c17cd7f309" containerName="extract-content" Dec 09 12:33:15 crc kubenswrapper[4970]: E1209 12:33:15.127936 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95d83ff6-47a4-45c5-a5a0-49c17cd7f309" containerName="extract-utilities" Dec 09 12:33:15 crc kubenswrapper[4970]: I1209 12:33:15.127943 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="95d83ff6-47a4-45c5-a5a0-49c17cd7f309" containerName="extract-utilities" Dec 09 12:33:15 crc kubenswrapper[4970]: E1209 12:33:15.127954 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae772c09-ed58-455f-90d5-9f16e4f5aefb" containerName="dnsmasq-dns" Dec 09 12:33:15 crc kubenswrapper[4970]: 
I1209 12:33:15.127961 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae772c09-ed58-455f-90d5-9f16e4f5aefb" containerName="dnsmasq-dns" Dec 09 12:33:15 crc kubenswrapper[4970]: I1209 12:33:15.128202 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae772c09-ed58-455f-90d5-9f16e4f5aefb" containerName="dnsmasq-dns" Dec 09 12:33:15 crc kubenswrapper[4970]: I1209 12:33:15.128230 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="95d83ff6-47a4-45c5-a5a0-49c17cd7f309" containerName="registry-server" Dec 09 12:33:15 crc kubenswrapper[4970]: I1209 12:33:15.128279 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9b69347-ce23-455a-9ede-a31b66193240" containerName="dnsmasq-dns" Dec 09 12:33:15 crc kubenswrapper[4970]: I1209 12:33:15.129234 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4p2n7" Dec 09 12:33:15 crc kubenswrapper[4970]: I1209 12:33:15.136881 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 12:33:15 crc kubenswrapper[4970]: I1209 12:33:15.137635 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 12:33:15 crc kubenswrapper[4970]: I1209 12:33:15.137789 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2x2z5" Dec 09 12:33:15 crc kubenswrapper[4970]: I1209 12:33:15.137933 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 12:33:15 crc kubenswrapper[4970]: I1209 12:33:15.155539 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4p2n7"] Dec 09 12:33:15 crc kubenswrapper[4970]: I1209 12:33:15.304082 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dbc6cc54-344d-47db-aae7-ce10a0b4ea3a-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4p2n7\" (UID: \"dbc6cc54-344d-47db-aae7-ce10a0b4ea3a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4p2n7" Dec 09 12:33:15 crc kubenswrapper[4970]: I1209 12:33:15.304149 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd8hd\" (UniqueName: \"kubernetes.io/projected/dbc6cc54-344d-47db-aae7-ce10a0b4ea3a-kube-api-access-zd8hd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4p2n7\" (UID: \"dbc6cc54-344d-47db-aae7-ce10a0b4ea3a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4p2n7" Dec 09 12:33:15 crc kubenswrapper[4970]: I1209 12:33:15.304183 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbc6cc54-344d-47db-aae7-ce10a0b4ea3a-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4p2n7\" (UID: \"dbc6cc54-344d-47db-aae7-ce10a0b4ea3a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4p2n7" Dec 09 12:33:15 crc kubenswrapper[4970]: I1209 12:33:15.304237 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbc6cc54-344d-47db-aae7-ce10a0b4ea3a-inventory\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-4p2n7\" (UID: \"dbc6cc54-344d-47db-aae7-ce10a0b4ea3a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4p2n7" Dec 09 12:33:15 crc kubenswrapper[4970]: I1209 12:33:15.406993 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dbc6cc54-344d-47db-aae7-ce10a0b4ea3a-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4p2n7\" (UID: \"dbc6cc54-344d-47db-aae7-ce10a0b4ea3a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4p2n7" Dec 09 12:33:15 crc kubenswrapper[4970]: I1209 12:33:15.407057 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd8hd\" (UniqueName: \"kubernetes.io/projected/dbc6cc54-344d-47db-aae7-ce10a0b4ea3a-kube-api-access-zd8hd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4p2n7\" (UID: \"dbc6cc54-344d-47db-aae7-ce10a0b4ea3a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4p2n7" Dec 09 12:33:15 crc kubenswrapper[4970]: I1209 12:33:15.407091 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbc6cc54-344d-47db-aae7-ce10a0b4ea3a-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4p2n7\" (UID: \"dbc6cc54-344d-47db-aae7-ce10a0b4ea3a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4p2n7" Dec 09 12:33:15 crc kubenswrapper[4970]: I1209 12:33:15.407168 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbc6cc54-344d-47db-aae7-ce10a0b4ea3a-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4p2n7\" (UID: \"dbc6cc54-344d-47db-aae7-ce10a0b4ea3a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4p2n7" Dec 09 12:33:15 crc kubenswrapper[4970]: I1209 12:33:15.415098 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dbc6cc54-344d-47db-aae7-ce10a0b4ea3a-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4p2n7\" (UID: \"dbc6cc54-344d-47db-aae7-ce10a0b4ea3a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4p2n7" Dec 09 12:33:15 crc kubenswrapper[4970]: I1209 12:33:15.416893 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbc6cc54-344d-47db-aae7-ce10a0b4ea3a-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4p2n7\" (UID: \"dbc6cc54-344d-47db-aae7-ce10a0b4ea3a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4p2n7" Dec 09 12:33:15 crc kubenswrapper[4970]: I1209 12:33:15.416903 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbc6cc54-344d-47db-aae7-ce10a0b4ea3a-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4p2n7\" (UID: \"dbc6cc54-344d-47db-aae7-ce10a0b4ea3a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4p2n7" Dec 09 12:33:15 crc kubenswrapper[4970]: I1209 12:33:15.426550 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd8hd\" (UniqueName: \"kubernetes.io/projected/dbc6cc54-344d-47db-aae7-ce10a0b4ea3a-kube-api-access-zd8hd\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-4p2n7\" (UID: 
\"dbc6cc54-344d-47db-aae7-ce10a0b4ea3a\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4p2n7" Dec 09 12:33:15 crc kubenswrapper[4970]: I1209 12:33:15.457995 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4p2n7" Dec 09 12:33:16 crc kubenswrapper[4970]: I1209 12:33:16.156901 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4p2n7"] Dec 09 12:33:16 crc kubenswrapper[4970]: I1209 12:33:16.732627 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4p2n7" event={"ID":"dbc6cc54-344d-47db-aae7-ce10a0b4ea3a","Type":"ContainerStarted","Data":"54653bb74ea93cb4b26568499638013453add968fd0a01d61d922029271f4da6"} Dec 09 12:33:16 crc kubenswrapper[4970]: E1209 12:33:16.816160 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:33:17 crc kubenswrapper[4970]: I1209 12:33:17.749407 4970 generic.go:334] "Generic (PLEG): container finished" podID="026b54d0-03a5-4346-b137-1d297204d22b" containerID="4c8987235a4e2de3d539819ef3e5d6e07ed661df726c15ce8742cd6a2ec4907d" exitCode=0 Dec 09 12:33:17 crc kubenswrapper[4970]: I1209 12:33:17.749508 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"026b54d0-03a5-4346-b137-1d297204d22b","Type":"ContainerDied","Data":"4c8987235a4e2de3d539819ef3e5d6e07ed661df726c15ce8742cd6a2ec4907d"} Dec 09 12:33:17 crc kubenswrapper[4970]: I1209 12:33:17.757238 4970 generic.go:334] "Generic (PLEG): container finished" podID="2547ef6a-6a22-4564-8db0-8c7ed5b166fd" containerID="ca027f935fd425843bfe5e0c03705f1e08ef0fff341b1c9c218d4d51d3409cba" exitCode=0 Dec 09 12:33:17 crc kubenswrapper[4970]: I1209 12:33:17.757303 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2547ef6a-6a22-4564-8db0-8c7ed5b166fd","Type":"ContainerDied","Data":"ca027f935fd425843bfe5e0c03705f1e08ef0fff341b1c9c218d4d51d3409cba"} Dec 09 12:33:18 crc kubenswrapper[4970]: I1209 12:33:18.772880 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"2547ef6a-6a22-4564-8db0-8c7ed5b166fd","Type":"ContainerStarted","Data":"3cd07609427adcc3274d25d890d3f527028c878e2528ee099a54b222a1b47c0b"} Dec 09 12:33:18 crc kubenswrapper[4970]: I1209 12:33:18.773378 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 09 12:33:18 crc kubenswrapper[4970]: I1209 12:33:18.775334 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"026b54d0-03a5-4346-b137-1d297204d22b","Type":"ContainerStarted","Data":"a07b6a8cedb7b37761185ea233e776c98a9c0baebd5fb1b60c0cd069060e80a5"} Dec 09 12:33:18 crc kubenswrapper[4970]: I1209 12:33:18.775925 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:33:18 crc kubenswrapper[4970]: I1209 12:33:18.810359 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.810340131 podStartE2EDuration="37.810340131s" 
podCreationTimestamp="2025-12-09 12:32:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:33:18.802903044 +0000 UTC m=+1611.363384105" watchObservedRunningTime="2025-12-09 12:33:18.810340131 +0000 UTC m=+1611.370821182" Dec 09 12:33:18 crc kubenswrapper[4970]: I1209 12:33:18.837538 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.837518652 podStartE2EDuration="36.837518652s" podCreationTimestamp="2025-12-09 12:32:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 12:33:18.824664441 +0000 UTC m=+1611.385145492" watchObservedRunningTime="2025-12-09 12:33:18.837518652 +0000 UTC m=+1611.397999703" Dec 09 12:33:20 crc kubenswrapper[4970]: E1209 12:33:20.313415 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56ac190c_b9e3_4453_9581_58141f4f59cc.slice/crio-7988d46ab587f6dc5108584a799e4404bd720618e722f4fdadeadfb949abf864\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56ac190c_b9e3_4453_9581_58141f4f59cc.slice\": RecentStats: unable to find data in memory cache]" Dec 09 12:33:25 crc kubenswrapper[4970]: I1209 12:33:25.813568 4970 scope.go:117] "RemoveContainer" containerID="7fd6f109e392e855fb1d78959e8fc5739de01f58f1c3cbb8ffead2654c7f16b5" Dec 09 12:33:25 crc kubenswrapper[4970]: E1209 12:33:25.814279 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:33:25 crc kubenswrapper[4970]: E1209 12:33:25.814880 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:33:27 crc kubenswrapper[4970]: E1209 12:33:27.007282 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56ac190c_b9e3_4453_9581_58141f4f59cc.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56ac190c_b9e3_4453_9581_58141f4f59cc.slice/crio-7988d46ab587f6dc5108584a799e4404bd720618e722f4fdadeadfb949abf864\": RecentStats: unable to find data in memory cache]" Dec 09 12:33:27 crc kubenswrapper[4970]: E1209 12:33:27.868120 4970 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/91bdd06ebca92acbe49ffd1c6f1f38c53c72a27fb060d15f470835e727a5399e/diff" to get inode usage: stat /var/lib/containers/storage/overlay/91bdd06ebca92acbe49ffd1c6f1f38c53c72a27fb060d15f470835e727a5399e/diff: no such file or directory, extraDiskErr: Dec 09 
12:33:28 crc kubenswrapper[4970]: I1209 12:33:28.939144 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4p2n7" event={"ID":"dbc6cc54-344d-47db-aae7-ce10a0b4ea3a","Type":"ContainerStarted","Data":"78783ad29f5c9b34c7ae8891c2ce4d3f2076dd0e3b8b945d693a7f54255be90f"} Dec 09 12:33:28 crc kubenswrapper[4970]: I1209 12:33:28.961499 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4p2n7" podStartSLOduration=2.407189058 podStartE2EDuration="13.961481474s" podCreationTimestamp="2025-12-09 12:33:15 +0000 UTC" firstStartedPulling="2025-12-09 12:33:16.156114101 +0000 UTC m=+1608.716595172" lastFinishedPulling="2025-12-09 12:33:27.710406547 +0000 UTC m=+1620.270887588" observedRunningTime="2025-12-09 12:33:28.953991605 +0000 UTC m=+1621.514472666" watchObservedRunningTime="2025-12-09 12:33:28.961481474 +0000 UTC m=+1621.521962525" Dec 09 12:33:30 crc kubenswrapper[4970]: E1209 12:33:30.815498 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:33:31 crc kubenswrapper[4970]: I1209 12:33:31.996296 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="2547ef6a-6a22-4564-8db0-8c7ed5b166fd" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.8:5671: connect: connection refused" Dec 09 12:33:33 crc kubenswrapper[4970]: I1209 12:33:33.023436 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 09 12:33:37 crc kubenswrapper[4970]: I1209 12:33:37.825033 4970 scope.go:117] "RemoveContainer" containerID="7fd6f109e392e855fb1d78959e8fc5739de01f58f1c3cbb8ffead2654c7f16b5" Dec 09 12:33:37 crc kubenswrapper[4970]: E1209 12:33:37.826148 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:33:37 crc kubenswrapper[4970]: E1209 12:33:37.827140 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:33:41 crc kubenswrapper[4970]: I1209 12:33:41.072201 4970 generic.go:334] "Generic (PLEG): container finished" podID="dbc6cc54-344d-47db-aae7-ce10a0b4ea3a" containerID="78783ad29f5c9b34c7ae8891c2ce4d3f2076dd0e3b8b945d693a7f54255be90f" exitCode=0 Dec 09 12:33:41 crc kubenswrapper[4970]: I1209 12:33:41.072308 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4p2n7" event={"ID":"dbc6cc54-344d-47db-aae7-ce10a0b4ea3a","Type":"ContainerDied","Data":"78783ad29f5c9b34c7ae8891c2ce4d3f2076dd0e3b8b945d693a7f54255be90f"} Dec 09 12:33:42 
crc kubenswrapper[4970]: I1209 12:33:42.043179 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 09 12:33:42 crc kubenswrapper[4970]: I1209 12:33:42.715705 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4p2n7" Dec 09 12:33:42 crc kubenswrapper[4970]: I1209 12:33:42.844323 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dbc6cc54-344d-47db-aae7-ce10a0b4ea3a-ssh-key\") pod \"dbc6cc54-344d-47db-aae7-ce10a0b4ea3a\" (UID: \"dbc6cc54-344d-47db-aae7-ce10a0b4ea3a\") " Dec 09 12:33:42 crc kubenswrapper[4970]: I1209 12:33:42.844391 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbc6cc54-344d-47db-aae7-ce10a0b4ea3a-inventory\") pod \"dbc6cc54-344d-47db-aae7-ce10a0b4ea3a\" (UID: \"dbc6cc54-344d-47db-aae7-ce10a0b4ea3a\") " Dec 09 12:33:42 crc kubenswrapper[4970]: I1209 12:33:42.844432 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd8hd\" (UniqueName: \"kubernetes.io/projected/dbc6cc54-344d-47db-aae7-ce10a0b4ea3a-kube-api-access-zd8hd\") pod \"dbc6cc54-344d-47db-aae7-ce10a0b4ea3a\" (UID: \"dbc6cc54-344d-47db-aae7-ce10a0b4ea3a\") " Dec 09 12:33:42 crc kubenswrapper[4970]: I1209 12:33:42.844507 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbc6cc54-344d-47db-aae7-ce10a0b4ea3a-repo-setup-combined-ca-bundle\") pod \"dbc6cc54-344d-47db-aae7-ce10a0b4ea3a\" (UID: \"dbc6cc54-344d-47db-aae7-ce10a0b4ea3a\") " Dec 09 12:33:42 crc kubenswrapper[4970]: I1209 12:33:42.850060 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbc6cc54-344d-47db-aae7-ce10a0b4ea3a-kube-api-access-zd8hd" (OuterVolumeSpecName: "kube-api-access-zd8hd") pod "dbc6cc54-344d-47db-aae7-ce10a0b4ea3a" (UID: "dbc6cc54-344d-47db-aae7-ce10a0b4ea3a"). InnerVolumeSpecName "kube-api-access-zd8hd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:33:42 crc kubenswrapper[4970]: I1209 12:33:42.851468 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbc6cc54-344d-47db-aae7-ce10a0b4ea3a-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "dbc6cc54-344d-47db-aae7-ce10a0b4ea3a" (UID: "dbc6cc54-344d-47db-aae7-ce10a0b4ea3a"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:33:42 crc kubenswrapper[4970]: I1209 12:33:42.883460 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbc6cc54-344d-47db-aae7-ce10a0b4ea3a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "dbc6cc54-344d-47db-aae7-ce10a0b4ea3a" (UID: "dbc6cc54-344d-47db-aae7-ce10a0b4ea3a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:33:42 crc kubenswrapper[4970]: I1209 12:33:42.884703 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbc6cc54-344d-47db-aae7-ce10a0b4ea3a-inventory" (OuterVolumeSpecName: "inventory") pod "dbc6cc54-344d-47db-aae7-ce10a0b4ea3a" (UID: "dbc6cc54-344d-47db-aae7-ce10a0b4ea3a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:33:42 crc kubenswrapper[4970]: E1209 12:33:42.939242 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 12:33:42 crc kubenswrapper[4970]: E1209 12:33:42.939329 4970 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 12:33:42 crc kubenswrapper[4970]: E1209 12:33:42.939470 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w5gl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-snjvx_openstack(cda5b8c0-51ad-4a63-bf1d-e3546f098ad3): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in 
quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 12:33:42 crc kubenswrapper[4970]: E1209 12:33:42.940730 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:33:42 crc kubenswrapper[4970]: I1209 12:33:42.948390 4970 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dbc6cc54-344d-47db-aae7-ce10a0b4ea3a-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 12:33:42 crc kubenswrapper[4970]: I1209 12:33:42.950143 4970 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dbc6cc54-344d-47db-aae7-ce10a0b4ea3a-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 12:33:42 crc kubenswrapper[4970]: I1209 12:33:42.950158 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd8hd\" (UniqueName: \"kubernetes.io/projected/dbc6cc54-344d-47db-aae7-ce10a0b4ea3a-kube-api-access-zd8hd\") on node \"crc\" DevicePath \"\"" Dec 09 12:33:42 crc kubenswrapper[4970]: I1209 12:33:42.950170 4970 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbc6cc54-344d-47db-aae7-ce10a0b4ea3a-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:33:43 crc kubenswrapper[4970]: I1209 12:33:43.100840 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4p2n7" event={"ID":"dbc6cc54-344d-47db-aae7-ce10a0b4ea3a","Type":"ContainerDied","Data":"54653bb74ea93cb4b26568499638013453add968fd0a01d61d922029271f4da6"} Dec 09 12:33:43 crc kubenswrapper[4970]: I1209 12:33:43.100920 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54653bb74ea93cb4b26568499638013453add968fd0a01d61d922029271f4da6" Dec 09 12:33:43 crc kubenswrapper[4970]: I1209 12:33:43.100985 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-4p2n7" Dec 09 12:33:43 crc kubenswrapper[4970]: I1209 12:33:43.187304 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-9vdrf"] Dec 09 12:33:43 crc kubenswrapper[4970]: E1209 12:33:43.187964 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbc6cc54-344d-47db-aae7-ce10a0b4ea3a" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 09 12:33:43 crc kubenswrapper[4970]: I1209 12:33:43.187988 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbc6cc54-344d-47db-aae7-ce10a0b4ea3a" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 09 12:33:43 crc kubenswrapper[4970]: I1209 12:33:43.188325 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbc6cc54-344d-47db-aae7-ce10a0b4ea3a" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 09 12:33:43 crc kubenswrapper[4970]: I1209 12:33:43.189281 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9vdrf" Dec 09 12:33:43 crc kubenswrapper[4970]: I1209 12:33:43.192346 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2x2z5" Dec 09 12:33:43 crc kubenswrapper[4970]: I1209 12:33:43.192469 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 12:33:43 crc kubenswrapper[4970]: I1209 12:33:43.192756 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 12:33:43 crc kubenswrapper[4970]: I1209 12:33:43.192837 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 12:33:43 crc kubenswrapper[4970]: I1209 12:33:43.199043 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-9vdrf"] Dec 09 12:33:43 crc kubenswrapper[4970]: I1209 12:33:43.256085 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghvnp\" (UniqueName: \"kubernetes.io/projected/bf06b570-9bab-4378-9c0c-64d8faabea85-kube-api-access-ghvnp\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9vdrf\" (UID: \"bf06b570-9bab-4378-9c0c-64d8faabea85\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9vdrf" Dec 09 12:33:43 crc kubenswrapper[4970]: I1209 12:33:43.256130 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bf06b570-9bab-4378-9c0c-64d8faabea85-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9vdrf\" (UID: \"bf06b570-9bab-4378-9c0c-64d8faabea85\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9vdrf" Dec 09 12:33:43 crc kubenswrapper[4970]: I1209 12:33:43.256194 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf06b570-9bab-4378-9c0c-64d8faabea85-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9vdrf\" (UID: \"bf06b570-9bab-4378-9c0c-64d8faabea85\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9vdrf" Dec 09 12:33:43 crc kubenswrapper[4970]: I1209 12:33:43.358833 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-ghvnp\" (UniqueName: \"kubernetes.io/projected/bf06b570-9bab-4378-9c0c-64d8faabea85-kube-api-access-ghvnp\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9vdrf\" (UID: \"bf06b570-9bab-4378-9c0c-64d8faabea85\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9vdrf" Dec 09 12:33:43 crc kubenswrapper[4970]: I1209 12:33:43.358888 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bf06b570-9bab-4378-9c0c-64d8faabea85-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9vdrf\" (UID: \"bf06b570-9bab-4378-9c0c-64d8faabea85\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9vdrf" Dec 09 12:33:43 crc kubenswrapper[4970]: I1209 12:33:43.358967 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf06b570-9bab-4378-9c0c-64d8faabea85-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9vdrf\" (UID: \"bf06b570-9bab-4378-9c0c-64d8faabea85\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9vdrf" Dec 09 12:33:43 crc kubenswrapper[4970]: I1209 12:33:43.364689 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bf06b570-9bab-4378-9c0c-64d8faabea85-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9vdrf\" (UID: \"bf06b570-9bab-4378-9c0c-64d8faabea85\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9vdrf" Dec 09 12:33:43 crc kubenswrapper[4970]: I1209 12:33:43.365058 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf06b570-9bab-4378-9c0c-64d8faabea85-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9vdrf\" (UID: \"bf06b570-9bab-4378-9c0c-64d8faabea85\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9vdrf" Dec 09 12:33:43 crc kubenswrapper[4970]: I1209 12:33:43.377211 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghvnp\" (UniqueName: \"kubernetes.io/projected/bf06b570-9bab-4378-9c0c-64d8faabea85-kube-api-access-ghvnp\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9vdrf\" (UID: \"bf06b570-9bab-4378-9c0c-64d8faabea85\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9vdrf" Dec 09 12:33:43 crc kubenswrapper[4970]: I1209 12:33:43.517749 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9vdrf" Dec 09 12:33:44 crc kubenswrapper[4970]: I1209 12:33:44.228629 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-9vdrf"] Dec 09 12:33:44 crc kubenswrapper[4970]: W1209 12:33:44.239123 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf06b570_9bab_4378_9c0c_64d8faabea85.slice/crio-c65da421fba3f1968565f3d164c45e906bd3cff928eb807ba3a49c721bbed241 WatchSource:0}: Error finding container c65da421fba3f1968565f3d164c45e906bd3cff928eb807ba3a49c721bbed241: Status 404 returned error can't find the container with id c65da421fba3f1968565f3d164c45e906bd3cff928eb807ba3a49c721bbed241 Dec 09 12:33:45 crc kubenswrapper[4970]: I1209 12:33:45.123306 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9vdrf" event={"ID":"bf06b570-9bab-4378-9c0c-64d8faabea85","Type":"ContainerStarted","Data":"ec7731e13e72d89a39205b0976e633f38a86401c3b7587e3eeb039d178848bf0"} Dec 09 12:33:45 crc kubenswrapper[4970]: I1209 12:33:45.123659 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9vdrf" event={"ID":"bf06b570-9bab-4378-9c0c-64d8faabea85","Type":"ContainerStarted","Data":"c65da421fba3f1968565f3d164c45e906bd3cff928eb807ba3a49c721bbed241"} Dec 09 12:33:45 crc kubenswrapper[4970]: I1209 12:33:45.166324 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9vdrf" podStartSLOduration=1.699918602 podStartE2EDuration="2.166298481s" podCreationTimestamp="2025-12-09 12:33:43 +0000 UTC" firstStartedPulling="2025-12-09 12:33:44.240639342 +0000 UTC m=+1636.801120393" lastFinishedPulling="2025-12-09 12:33:44.707019221 +0000 UTC m=+1637.267500272" observedRunningTime="2025-12-09 12:33:45.140591119 +0000 UTC m=+1637.701072170" watchObservedRunningTime="2025-12-09 12:33:45.166298481 +0000 UTC m=+1637.726779552" Dec 09 12:33:48 crc kubenswrapper[4970]: I1209 12:33:48.158392 4970 generic.go:334] "Generic (PLEG): container finished" podID="bf06b570-9bab-4378-9c0c-64d8faabea85" containerID="ec7731e13e72d89a39205b0976e633f38a86401c3b7587e3eeb039d178848bf0" exitCode=0 Dec 09 12:33:48 crc kubenswrapper[4970]: I1209 12:33:48.158471 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9vdrf" event={"ID":"bf06b570-9bab-4378-9c0c-64d8faabea85","Type":"ContainerDied","Data":"ec7731e13e72d89a39205b0976e633f38a86401c3b7587e3eeb039d178848bf0"} Dec 09 12:33:49 crc kubenswrapper[4970]: I1209 12:33:49.773427 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9vdrf" Dec 09 12:33:49 crc kubenswrapper[4970]: I1209 12:33:49.925036 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghvnp\" (UniqueName: \"kubernetes.io/projected/bf06b570-9bab-4378-9c0c-64d8faabea85-kube-api-access-ghvnp\") pod \"bf06b570-9bab-4378-9c0c-64d8faabea85\" (UID: \"bf06b570-9bab-4378-9c0c-64d8faabea85\") " Dec 09 12:33:49 crc kubenswrapper[4970]: I1209 12:33:49.925074 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bf06b570-9bab-4378-9c0c-64d8faabea85-ssh-key\") pod \"bf06b570-9bab-4378-9c0c-64d8faabea85\" (UID: \"bf06b570-9bab-4378-9c0c-64d8faabea85\") " Dec 09 12:33:49 crc kubenswrapper[4970]: I1209 12:33:49.925169 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf06b570-9bab-4378-9c0c-64d8faabea85-inventory\") pod \"bf06b570-9bab-4378-9c0c-64d8faabea85\" (UID: \"bf06b570-9bab-4378-9c0c-64d8faabea85\") " Dec 09 12:33:49 crc kubenswrapper[4970]: I1209 12:33:49.937303 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf06b570-9bab-4378-9c0c-64d8faabea85-kube-api-access-ghvnp" (OuterVolumeSpecName: "kube-api-access-ghvnp") pod "bf06b570-9bab-4378-9c0c-64d8faabea85" (UID: "bf06b570-9bab-4378-9c0c-64d8faabea85"). InnerVolumeSpecName "kube-api-access-ghvnp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:33:49 crc kubenswrapper[4970]: I1209 12:33:49.959563 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf06b570-9bab-4378-9c0c-64d8faabea85-inventory" (OuterVolumeSpecName: "inventory") pod "bf06b570-9bab-4378-9c0c-64d8faabea85" (UID: "bf06b570-9bab-4378-9c0c-64d8faabea85"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:33:49 crc kubenswrapper[4970]: I1209 12:33:49.961613 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf06b570-9bab-4378-9c0c-64d8faabea85-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "bf06b570-9bab-4378-9c0c-64d8faabea85" (UID: "bf06b570-9bab-4378-9c0c-64d8faabea85"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:33:50 crc kubenswrapper[4970]: I1209 12:33:50.028265 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghvnp\" (UniqueName: \"kubernetes.io/projected/bf06b570-9bab-4378-9c0c-64d8faabea85-kube-api-access-ghvnp\") on node \"crc\" DevicePath \"\"" Dec 09 12:33:50 crc kubenswrapper[4970]: I1209 12:33:50.028303 4970 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bf06b570-9bab-4378-9c0c-64d8faabea85-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 12:33:50 crc kubenswrapper[4970]: I1209 12:33:50.028314 4970 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf06b570-9bab-4378-9c0c-64d8faabea85-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 12:33:50 crc kubenswrapper[4970]: I1209 12:33:50.187017 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9vdrf" event={"ID":"bf06b570-9bab-4378-9c0c-64d8faabea85","Type":"ContainerDied","Data":"c65da421fba3f1968565f3d164c45e906bd3cff928eb807ba3a49c721bbed241"} Dec 09 12:33:50 crc kubenswrapper[4970]: I1209 12:33:50.187388 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c65da421fba3f1968565f3d164c45e906bd3cff928eb807ba3a49c721bbed241" Dec 09 12:33:50 crc kubenswrapper[4970]: I1209 12:33:50.187073 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9vdrf" Dec 09 12:33:50 crc kubenswrapper[4970]: I1209 12:33:50.265237 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s7wgg"] Dec 09 12:33:50 crc kubenswrapper[4970]: E1209 12:33:50.265835 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf06b570-9bab-4378-9c0c-64d8faabea85" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 09 12:33:50 crc kubenswrapper[4970]: I1209 12:33:50.265859 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf06b570-9bab-4378-9c0c-64d8faabea85" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 09 12:33:50 crc kubenswrapper[4970]: I1209 12:33:50.266085 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf06b570-9bab-4378-9c0c-64d8faabea85" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 09 12:33:50 crc kubenswrapper[4970]: I1209 12:33:50.266893 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s7wgg" Dec 09 12:33:50 crc kubenswrapper[4970]: I1209 12:33:50.268809 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 12:33:50 crc kubenswrapper[4970]: I1209 12:33:50.268946 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 12:33:50 crc kubenswrapper[4970]: I1209 12:33:50.269083 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 12:33:50 crc kubenswrapper[4970]: I1209 12:33:50.269141 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2x2z5" Dec 09 12:33:50 crc kubenswrapper[4970]: I1209 12:33:50.284427 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s7wgg"] Dec 09 12:33:50 crc kubenswrapper[4970]: I1209 12:33:50.436332 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ee93e83b-cc64-4847-8245-0e5e002f9540-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-s7wgg\" (UID: \"ee93e83b-cc64-4847-8245-0e5e002f9540\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s7wgg" Dec 09 12:33:50 crc kubenswrapper[4970]: I1209 12:33:50.436427 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee93e83b-cc64-4847-8245-0e5e002f9540-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-s7wgg\" (UID: \"ee93e83b-cc64-4847-8245-0e5e002f9540\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s7wgg" Dec 09 12:33:50 crc kubenswrapper[4970]: I1209 12:33:50.436683 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvrm8\" (UniqueName: \"kubernetes.io/projected/ee93e83b-cc64-4847-8245-0e5e002f9540-kube-api-access-rvrm8\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-s7wgg\" (UID: \"ee93e83b-cc64-4847-8245-0e5e002f9540\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s7wgg" Dec 09 12:33:50 crc kubenswrapper[4970]: I1209 12:33:50.437178 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee93e83b-cc64-4847-8245-0e5e002f9540-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-s7wgg\" (UID: \"ee93e83b-cc64-4847-8245-0e5e002f9540\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s7wgg" Dec 09 12:33:50 crc kubenswrapper[4970]: I1209 12:33:50.540120 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee93e83b-cc64-4847-8245-0e5e002f9540-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-s7wgg\" (UID: \"ee93e83b-cc64-4847-8245-0e5e002f9540\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s7wgg" Dec 09 12:33:50 crc kubenswrapper[4970]: I1209 12:33:50.540434 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ee93e83b-cc64-4847-8245-0e5e002f9540-ssh-key\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-s7wgg\" (UID: \"ee93e83b-cc64-4847-8245-0e5e002f9540\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s7wgg" Dec 09 12:33:50 crc kubenswrapper[4970]: I1209 12:33:50.540934 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee93e83b-cc64-4847-8245-0e5e002f9540-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-s7wgg\" (UID: \"ee93e83b-cc64-4847-8245-0e5e002f9540\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s7wgg" Dec 09 12:33:50 crc kubenswrapper[4970]: I1209 12:33:50.541133 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvrm8\" (UniqueName: \"kubernetes.io/projected/ee93e83b-cc64-4847-8245-0e5e002f9540-kube-api-access-rvrm8\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-s7wgg\" (UID: \"ee93e83b-cc64-4847-8245-0e5e002f9540\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s7wgg" Dec 09 12:33:50 crc kubenswrapper[4970]: I1209 12:33:50.546784 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ee93e83b-cc64-4847-8245-0e5e002f9540-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-s7wgg\" (UID: \"ee93e83b-cc64-4847-8245-0e5e002f9540\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s7wgg" Dec 09 12:33:50 crc kubenswrapper[4970]: I1209 12:33:50.547179 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee93e83b-cc64-4847-8245-0e5e002f9540-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-s7wgg\" (UID: \"ee93e83b-cc64-4847-8245-0e5e002f9540\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s7wgg" Dec 09 12:33:50 crc kubenswrapper[4970]: I1209 12:33:50.549324 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee93e83b-cc64-4847-8245-0e5e002f9540-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-s7wgg\" (UID: \"ee93e83b-cc64-4847-8245-0e5e002f9540\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s7wgg" Dec 09 12:33:50 crc kubenswrapper[4970]: I1209 12:33:50.567781 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvrm8\" (UniqueName: \"kubernetes.io/projected/ee93e83b-cc64-4847-8245-0e5e002f9540-kube-api-access-rvrm8\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-s7wgg\" (UID: \"ee93e83b-cc64-4847-8245-0e5e002f9540\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s7wgg" Dec 09 12:33:50 crc kubenswrapper[4970]: I1209 12:33:50.585738 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s7wgg" Dec 09 12:33:51 crc kubenswrapper[4970]: I1209 12:33:51.216009 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s7wgg"] Dec 09 12:33:51 crc kubenswrapper[4970]: I1209 12:33:51.814458 4970 scope.go:117] "RemoveContainer" containerID="7fd6f109e392e855fb1d78959e8fc5739de01f58f1c3cbb8ffead2654c7f16b5" Dec 09 12:33:51 crc kubenswrapper[4970]: E1209 12:33:51.815260 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:33:51 crc kubenswrapper[4970]: E1209 12:33:51.815452 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:33:52 crc kubenswrapper[4970]: I1209 12:33:52.209876 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s7wgg" event={"ID":"ee93e83b-cc64-4847-8245-0e5e002f9540","Type":"ContainerStarted","Data":"7a956e5ed1e4f58c1145adf4c288a1b0e2037d2272ab2aee6650216f5b24ae6c"} Dec 09 12:33:52 crc kubenswrapper[4970]: I1209 12:33:52.210271 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s7wgg" event={"ID":"ee93e83b-cc64-4847-8245-0e5e002f9540","Type":"ContainerStarted","Data":"c9dd10342617dcae7df92af372a0c87bb1f8ca4339f7bfa9084726aee918f643"} Dec 09 12:33:52 crc kubenswrapper[4970]: I1209 12:33:52.238882 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s7wgg" podStartSLOduration=1.8426095550000001 podStartE2EDuration="2.238863892s" podCreationTimestamp="2025-12-09 12:33:50 +0000 UTC" firstStartedPulling="2025-12-09 12:33:51.217586035 +0000 UTC m=+1643.778067086" lastFinishedPulling="2025-12-09 12:33:51.613840372 +0000 UTC m=+1644.174321423" observedRunningTime="2025-12-09 12:33:52.228946148 +0000 UTC m=+1644.789427199" watchObservedRunningTime="2025-12-09 12:33:52.238863892 +0000 UTC m=+1644.799344943" Dec 09 12:33:54 crc kubenswrapper[4970]: E1209 12:33:54.814716 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:34:02 crc kubenswrapper[4970]: E1209 12:34:02.946905 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 12:34:02 crc kubenswrapper[4970]: E1209 12:34:02.947593 4970 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 12:34:02 crc kubenswrapper[4970]: E1209 12:34:02.947733 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n679h5b7h5f8h695h656h57bh5c7h7bh5c5hc4h5hbch654h5h5b7h54dh5fh688h548h57bh6fh54fh5d4hf7h5ffh54ch566h565hdchb4h65dh5fbq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqqs9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ea52f6b9-599e-4ac5-94c6-79949c705be8): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Dec 09 12:34:02 crc kubenswrapper[4970]: E1209 12:34:02.948865 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:34:03 crc kubenswrapper[4970]: I1209 12:34:03.813028 4970 scope.go:117] "RemoveContainer" containerID="7fd6f109e392e855fb1d78959e8fc5739de01f58f1c3cbb8ffead2654c7f16b5" Dec 09 12:34:03 crc kubenswrapper[4970]: E1209 12:34:03.813556 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:34:05 crc kubenswrapper[4970]: E1209 12:34:05.816431 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:34:14 crc kubenswrapper[4970]: I1209 12:34:14.813907 4970 scope.go:117] "RemoveContainer" containerID="7fd6f109e392e855fb1d78959e8fc5739de01f58f1c3cbb8ffead2654c7f16b5" Dec 09 12:34:14 crc kubenswrapper[4970]: E1209 12:34:14.814723 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:34:14 crc kubenswrapper[4970]: E1209 12:34:14.815496 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:34:19 crc kubenswrapper[4970]: E1209 12:34:19.814708 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:34:28 crc kubenswrapper[4970]: E1209 12:34:28.814616 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:34:29 crc kubenswrapper[4970]: I1209 12:34:29.812658 4970 scope.go:117] "RemoveContainer" containerID="7fd6f109e392e855fb1d78959e8fc5739de01f58f1c3cbb8ffead2654c7f16b5" Dec 09 12:34:29 crc kubenswrapper[4970]: E1209 12:34:29.813208 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:34:32 crc kubenswrapper[4970]: E1209 12:34:32.815210 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:34:36 crc kubenswrapper[4970]: I1209 12:34:36.343023 4970 scope.go:117] "RemoveContainer" containerID="922d19a539a2a759d1325ebec5dc39d38028ddf461b942be19c26a3e816870d4" Dec 09 12:34:36 crc kubenswrapper[4970]: I1209 12:34:36.376654 4970 scope.go:117] "RemoveContainer" containerID="c026a0b15e75aaa504d98014131fedcc52b716b97ecc6a3a92e86fb741eed643" Dec 09 12:34:39 crc kubenswrapper[4970]: E1209 12:34:39.815857 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:34:41 crc kubenswrapper[4970]: I1209 12:34:41.812363 4970 scope.go:117] "RemoveContainer" containerID="7fd6f109e392e855fb1d78959e8fc5739de01f58f1c3cbb8ffead2654c7f16b5" Dec 09 12:34:41 crc kubenswrapper[4970]: E1209 12:34:41.812975 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:34:44 crc kubenswrapper[4970]: E1209 12:34:44.815429 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:34:53 crc kubenswrapper[4970]: E1209 12:34:53.815007 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:34:55 crc kubenswrapper[4970]: I1209 12:34:55.813335 4970 scope.go:117] "RemoveContainer" 
containerID="7fd6f109e392e855fb1d78959e8fc5739de01f58f1c3cbb8ffead2654c7f16b5" Dec 09 12:34:55 crc kubenswrapper[4970]: E1209 12:34:55.814015 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:34:59 crc kubenswrapper[4970]: E1209 12:34:59.815206 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:35:05 crc kubenswrapper[4970]: E1209 12:35:05.814884 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:35:07 crc kubenswrapper[4970]: I1209 12:35:07.822782 4970 scope.go:117] "RemoveContainer" containerID="7fd6f109e392e855fb1d78959e8fc5739de01f58f1c3cbb8ffead2654c7f16b5" Dec 09 12:35:07 crc kubenswrapper[4970]: E1209 12:35:07.823647 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:35:13 crc kubenswrapper[4970]: E1209 12:35:13.904007 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 12:35:13 crc kubenswrapper[4970]: E1209 12:35:13.905490 4970 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 12:35:13 crc kubenswrapper[4970]: E1209 12:35:13.905629 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w5gl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-snjvx_openstack(cda5b8c0-51ad-4a63-bf1d-e3546f098ad3): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 12:35:13 crc kubenswrapper[4970]: E1209 12:35:13.907112 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:35:17 crc kubenswrapper[4970]: E1209 12:35:17.831204 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:35:18 crc kubenswrapper[4970]: I1209 12:35:18.813535 4970 scope.go:117] "RemoveContainer" containerID="7fd6f109e392e855fb1d78959e8fc5739de01f58f1c3cbb8ffead2654c7f16b5" Dec 09 12:35:18 crc kubenswrapper[4970]: E1209 12:35:18.813987 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:35:26 crc kubenswrapper[4970]: E1209 12:35:26.815588 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:35:29 crc kubenswrapper[4970]: E1209 12:35:29.924224 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 12:35:29 crc kubenswrapper[4970]: E1209 12:35:29.924816 4970 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 12:35:29 crc kubenswrapper[4970]: E1209 12:35:29.924947 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n679h5b7h5f8h695h656h57bh5c7h7bh5c5hc4h5hbch654h5h5b7h54dh5fh688h548h57bh6fh54fh5d4hf7h5ffh54ch566h565hdchb4h65dh5fbq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqqs9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ea52f6b9-599e-4ac5-94c6-79949c705be8): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 12:35:29 crc kubenswrapper[4970]: E1209 12:35:29.926120 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:35:33 crc kubenswrapper[4970]: I1209 12:35:33.812967 4970 scope.go:117] "RemoveContainer" containerID="7fd6f109e392e855fb1d78959e8fc5739de01f58f1c3cbb8ffead2654c7f16b5" Dec 09 12:35:33 crc kubenswrapper[4970]: E1209 12:35:33.813708 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:35:36 crc kubenswrapper[4970]: I1209 12:35:36.514306 4970 scope.go:117] "RemoveContainer" containerID="a795b099d9de0c3443e1177afd2019e48a789a4b2d55d71ad9012c81cef0081c" Dec 09 12:35:39 crc kubenswrapper[4970]: E1209 12:35:39.816313 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:35:41 crc kubenswrapper[4970]: E1209 12:35:41.816056 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:35:46 crc kubenswrapper[4970]: I1209 12:35:46.812844 4970 scope.go:117] "RemoveContainer" containerID="7fd6f109e392e855fb1d78959e8fc5739de01f58f1c3cbb8ffead2654c7f16b5" Dec 09 12:35:46 crc kubenswrapper[4970]: E1209 12:35:46.813790 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:35:51 crc kubenswrapper[4970]: E1209 12:35:51.814731 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:35:54 crc kubenswrapper[4970]: E1209 12:35:54.816051 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:35:57 crc kubenswrapper[4970]: I1209 12:35:57.822817 4970 scope.go:117] "RemoveContainer" containerID="7fd6f109e392e855fb1d78959e8fc5739de01f58f1c3cbb8ffead2654c7f16b5" Dec 09 12:35:57 crc kubenswrapper[4970]: E1209 12:35:57.823551 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:36:04 crc kubenswrapper[4970]: E1209 12:36:04.818821 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:36:09 crc kubenswrapper[4970]: E1209 12:36:09.816926 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:36:10 crc kubenswrapper[4970]: I1209 12:36:10.813508 4970 scope.go:117] "RemoveContainer" containerID="7fd6f109e392e855fb1d78959e8fc5739de01f58f1c3cbb8ffead2654c7f16b5" Dec 09 12:36:10 crc kubenswrapper[4970]: E1209 12:36:10.814061 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:36:17 crc kubenswrapper[4970]: E1209 12:36:17.822114 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:36:22 crc kubenswrapper[4970]: I1209 12:36:22.813604 4970 scope.go:117] "RemoveContainer" containerID="7fd6f109e392e855fb1d78959e8fc5739de01f58f1c3cbb8ffead2654c7f16b5" Dec 09 12:36:22 crc kubenswrapper[4970]: E1209 12:36:22.814380 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:36:22 crc kubenswrapper[4970]: E1209 12:36:22.816149 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:36:30 crc kubenswrapper[4970]: E1209 12:36:30.815736 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:36:33 crc kubenswrapper[4970]: E1209 12:36:33.816018 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:36:35 crc kubenswrapper[4970]: I1209 12:36:35.813822 4970 scope.go:117] "RemoveContainer" containerID="7fd6f109e392e855fb1d78959e8fc5739de01f58f1c3cbb8ffead2654c7f16b5" Dec 09 12:36:35 crc kubenswrapper[4970]: E1209 12:36:35.814544 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:36:36 crc kubenswrapper[4970]: I1209 12:36:36.682425 4970 scope.go:117] "RemoveContainer" containerID="afb2a4fb1548b87960aa2974a3604e90d300c9f5c9847f33b1389331499c3400" Dec 09 12:36:36 crc kubenswrapper[4970]: I1209 12:36:36.729498 4970 scope.go:117] "RemoveContainer" containerID="536942f21382b461538aaf2c7951d070847c4c2a39c868397d6038d7fb469888" Dec 09 12:36:36 crc kubenswrapper[4970]: I1209 12:36:36.759368 4970 scope.go:117] "RemoveContainer" containerID="2c89b881ea482b9fb051f81454579c0248a7fb935bbb4171a1e5cd971fa97d51" Dec 09 12:36:36 crc kubenswrapper[4970]: I1209 12:36:36.782378 4970 scope.go:117] "RemoveContainer" containerID="64118f2e3cce3902d00a64d3a2d45489f25bf4a709cab01ba76a7d490ca50a72" Dec 09 12:36:36 crc kubenswrapper[4970]: I1209 12:36:36.804688 4970 scope.go:117] "RemoveContainer" containerID="cca4d2d176b2246bcfa423ef6fab3fac40f63a41751b5598c23c40efdf105eef" Dec 09 12:36:45 crc kubenswrapper[4970]: E1209 12:36:45.814935 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:36:48 crc kubenswrapper[4970]: E1209 12:36:48.816442 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:36:49 crc kubenswrapper[4970]: I1209 12:36:49.813175 4970 scope.go:117] "RemoveContainer" containerID="7fd6f109e392e855fb1d78959e8fc5739de01f58f1c3cbb8ffead2654c7f16b5" Dec 09 12:36:49 crc kubenswrapper[4970]: E1209 12:36:49.814028 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:36:59 crc kubenswrapper[4970]: E1209 12:36:59.818858 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:37:00 crc kubenswrapper[4970]: I1209 12:37:00.813624 4970 scope.go:117] "RemoveContainer" containerID="7fd6f109e392e855fb1d78959e8fc5739de01f58f1c3cbb8ffead2654c7f16b5" Dec 09 12:37:00 crc kubenswrapper[4970]: E1209 12:37:00.814216 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:37:00 crc kubenswrapper[4970]: E1209 12:37:00.815028 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:37:09 crc kubenswrapper[4970]: I1209 12:37:09.600320 4970 generic.go:334] "Generic (PLEG): container finished" podID="ee93e83b-cc64-4847-8245-0e5e002f9540" containerID="7a956e5ed1e4f58c1145adf4c288a1b0e2037d2272ab2aee6650216f5b24ae6c" exitCode=0 Dec 09 12:37:09 crc kubenswrapper[4970]: I1209 12:37:09.600415 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s7wgg" event={"ID":"ee93e83b-cc64-4847-8245-0e5e002f9540","Type":"ContainerDied","Data":"7a956e5ed1e4f58c1145adf4c288a1b0e2037d2272ab2aee6650216f5b24ae6c"} Dec 09 12:37:11 crc kubenswrapper[4970]: I1209 12:37:11.086904 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s7wgg" Dec 09 12:37:11 crc kubenswrapper[4970]: I1209 12:37:11.133050 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ee93e83b-cc64-4847-8245-0e5e002f9540-ssh-key\") pod \"ee93e83b-cc64-4847-8245-0e5e002f9540\" (UID: \"ee93e83b-cc64-4847-8245-0e5e002f9540\") " Dec 09 12:37:11 crc kubenswrapper[4970]: I1209 12:37:11.133197 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee93e83b-cc64-4847-8245-0e5e002f9540-bootstrap-combined-ca-bundle\") pod \"ee93e83b-cc64-4847-8245-0e5e002f9540\" (UID: \"ee93e83b-cc64-4847-8245-0e5e002f9540\") " Dec 09 12:37:11 crc kubenswrapper[4970]: I1209 12:37:11.133282 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee93e83b-cc64-4847-8245-0e5e002f9540-inventory\") pod \"ee93e83b-cc64-4847-8245-0e5e002f9540\" (UID: \"ee93e83b-cc64-4847-8245-0e5e002f9540\") " Dec 09 12:37:11 crc kubenswrapper[4970]: I1209 12:37:11.133310 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvrm8\" (UniqueName: \"kubernetes.io/projected/ee93e83b-cc64-4847-8245-0e5e002f9540-kube-api-access-rvrm8\") pod \"ee93e83b-cc64-4847-8245-0e5e002f9540\" (UID: \"ee93e83b-cc64-4847-8245-0e5e002f9540\") " Dec 09 12:37:11 crc kubenswrapper[4970]: I1209 12:37:11.139700 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee93e83b-cc64-4847-8245-0e5e002f9540-kube-api-access-rvrm8" (OuterVolumeSpecName: "kube-api-access-rvrm8") pod "ee93e83b-cc64-4847-8245-0e5e002f9540" (UID: "ee93e83b-cc64-4847-8245-0e5e002f9540"). InnerVolumeSpecName "kube-api-access-rvrm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:37:11 crc kubenswrapper[4970]: I1209 12:37:11.139781 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee93e83b-cc64-4847-8245-0e5e002f9540-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "ee93e83b-cc64-4847-8245-0e5e002f9540" (UID: "ee93e83b-cc64-4847-8245-0e5e002f9540"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:37:11 crc kubenswrapper[4970]: I1209 12:37:11.183778 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee93e83b-cc64-4847-8245-0e5e002f9540-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ee93e83b-cc64-4847-8245-0e5e002f9540" (UID: "ee93e83b-cc64-4847-8245-0e5e002f9540"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:37:11 crc kubenswrapper[4970]: I1209 12:37:11.184648 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee93e83b-cc64-4847-8245-0e5e002f9540-inventory" (OuterVolumeSpecName: "inventory") pod "ee93e83b-cc64-4847-8245-0e5e002f9540" (UID: "ee93e83b-cc64-4847-8245-0e5e002f9540"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:37:11 crc kubenswrapper[4970]: I1209 12:37:11.236861 4970 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee93e83b-cc64-4847-8245-0e5e002f9540-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 12:37:11 crc kubenswrapper[4970]: I1209 12:37:11.236914 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvrm8\" (UniqueName: \"kubernetes.io/projected/ee93e83b-cc64-4847-8245-0e5e002f9540-kube-api-access-rvrm8\") on node \"crc\" DevicePath \"\"" Dec 09 12:37:11 crc kubenswrapper[4970]: I1209 12:37:11.236934 4970 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ee93e83b-cc64-4847-8245-0e5e002f9540-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 12:37:11 crc kubenswrapper[4970]: I1209 12:37:11.236952 4970 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee93e83b-cc64-4847-8245-0e5e002f9540-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 12:37:11 crc kubenswrapper[4970]: I1209 12:37:11.638095 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s7wgg" event={"ID":"ee93e83b-cc64-4847-8245-0e5e002f9540","Type":"ContainerDied","Data":"c9dd10342617dcae7df92af372a0c87bb1f8ca4339f7bfa9084726aee918f643"} Dec 09 12:37:11 crc kubenswrapper[4970]: I1209 12:37:11.638142 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9dd10342617dcae7df92af372a0c87bb1f8ca4339f7bfa9084726aee918f643" Dec 09 12:37:11 crc kubenswrapper[4970]: I1209 12:37:11.638203 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-s7wgg" Dec 09 12:37:11 crc kubenswrapper[4970]: I1209 12:37:11.738813 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rs6hg"] Dec 09 12:37:11 crc kubenswrapper[4970]: E1209 12:37:11.739727 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee93e83b-cc64-4847-8245-0e5e002f9540" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 09 12:37:11 crc kubenswrapper[4970]: I1209 12:37:11.739752 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee93e83b-cc64-4847-8245-0e5e002f9540" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 09 12:37:11 crc kubenswrapper[4970]: I1209 12:37:11.740036 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee93e83b-cc64-4847-8245-0e5e002f9540" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 09 12:37:11 crc kubenswrapper[4970]: I1209 12:37:11.740998 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rs6hg" Dec 09 12:37:11 crc kubenswrapper[4970]: I1209 12:37:11.744063 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 12:37:11 crc kubenswrapper[4970]: I1209 12:37:11.744160 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 12:37:11 crc kubenswrapper[4970]: I1209 12:37:11.744094 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2x2z5" Dec 09 12:37:11 crc kubenswrapper[4970]: I1209 12:37:11.744105 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 12:37:11 crc kubenswrapper[4970]: I1209 12:37:11.769418 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rs6hg"] Dec 09 12:37:11 crc kubenswrapper[4970]: I1209 12:37:11.852841 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b45bf430-1223-44e1-b791-212935f09b2a-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rs6hg\" (UID: \"b45bf430-1223-44e1-b791-212935f09b2a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rs6hg" Dec 09 12:37:11 crc kubenswrapper[4970]: I1209 12:37:11.853225 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbrss\" (UniqueName: \"kubernetes.io/projected/b45bf430-1223-44e1-b791-212935f09b2a-kube-api-access-mbrss\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rs6hg\" (UID: \"b45bf430-1223-44e1-b791-212935f09b2a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rs6hg" Dec 09 12:37:11 crc kubenswrapper[4970]: I1209 12:37:11.853378 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b45bf430-1223-44e1-b791-212935f09b2a-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rs6hg\" (UID: \"b45bf430-1223-44e1-b791-212935f09b2a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rs6hg" Dec 09 12:37:11 crc kubenswrapper[4970]: I1209 12:37:11.955946 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b45bf430-1223-44e1-b791-212935f09b2a-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rs6hg\" (UID: \"b45bf430-1223-44e1-b791-212935f09b2a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rs6hg" Dec 09 12:37:11 crc kubenswrapper[4970]: I1209 12:37:11.956140 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbrss\" (UniqueName: \"kubernetes.io/projected/b45bf430-1223-44e1-b791-212935f09b2a-kube-api-access-mbrss\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rs6hg\" (UID: \"b45bf430-1223-44e1-b791-212935f09b2a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rs6hg" Dec 09 12:37:11 crc kubenswrapper[4970]: I1209 12:37:11.956180 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b45bf430-1223-44e1-b791-212935f09b2a-ssh-key\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-rs6hg\" (UID: \"b45bf430-1223-44e1-b791-212935f09b2a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rs6hg" Dec 09 12:37:11 crc kubenswrapper[4970]: I1209 12:37:11.959900 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b45bf430-1223-44e1-b791-212935f09b2a-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rs6hg\" (UID: \"b45bf430-1223-44e1-b791-212935f09b2a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rs6hg" Dec 09 12:37:11 crc kubenswrapper[4970]: I1209 12:37:11.960059 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b45bf430-1223-44e1-b791-212935f09b2a-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rs6hg\" (UID: \"b45bf430-1223-44e1-b791-212935f09b2a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rs6hg" Dec 09 12:37:11 crc kubenswrapper[4970]: I1209 12:37:11.984195 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbrss\" (UniqueName: \"kubernetes.io/projected/b45bf430-1223-44e1-b791-212935f09b2a-kube-api-access-mbrss\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-rs6hg\" (UID: \"b45bf430-1223-44e1-b791-212935f09b2a\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rs6hg" Dec 09 12:37:12 crc kubenswrapper[4970]: I1209 12:37:12.069755 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rs6hg" Dec 09 12:37:12 crc kubenswrapper[4970]: I1209 12:37:12.618620 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rs6hg"] Dec 09 12:37:12 crc kubenswrapper[4970]: I1209 12:37:12.619701 4970 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 12:37:12 crc kubenswrapper[4970]: I1209 12:37:12.648890 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rs6hg" event={"ID":"b45bf430-1223-44e1-b791-212935f09b2a","Type":"ContainerStarted","Data":"bec57bd1e018fce4194645b96cb972458569225edf7658c2de0b2b6ad84a0184"} Dec 09 12:37:12 crc kubenswrapper[4970]: E1209 12:37:12.814147 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:37:13 crc kubenswrapper[4970]: I1209 12:37:13.664196 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rs6hg" event={"ID":"b45bf430-1223-44e1-b791-212935f09b2a","Type":"ContainerStarted","Data":"364a999db7ce156ba81da0e43796f1933c4e7f65554562227d6499d02ce1102e"} Dec 09 12:37:13 crc kubenswrapper[4970]: I1209 12:37:13.690085 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rs6hg" podStartSLOduration=1.980352626 podStartE2EDuration="2.690066487s" podCreationTimestamp="2025-12-09 12:37:11 +0000 UTC" firstStartedPulling="2025-12-09 12:37:12.619442045 +0000 UTC m=+1845.179923096" 
lastFinishedPulling="2025-12-09 12:37:13.329155916 +0000 UTC m=+1845.889636957" observedRunningTime="2025-12-09 12:37:13.677789261 +0000 UTC m=+1846.238270312" watchObservedRunningTime="2025-12-09 12:37:13.690066487 +0000 UTC m=+1846.250547538" Dec 09 12:37:14 crc kubenswrapper[4970]: I1209 12:37:14.814786 4970 scope.go:117] "RemoveContainer" containerID="7fd6f109e392e855fb1d78959e8fc5739de01f58f1c3cbb8ffead2654c7f16b5" Dec 09 12:37:14 crc kubenswrapper[4970]: E1209 12:37:14.815858 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:37:14 crc kubenswrapper[4970]: E1209 12:37:14.815965 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:37:25 crc kubenswrapper[4970]: I1209 12:37:25.814552 4970 scope.go:117] "RemoveContainer" containerID="7fd6f109e392e855fb1d78959e8fc5739de01f58f1c3cbb8ffead2654c7f16b5" Dec 09 12:37:25 crc kubenswrapper[4970]: E1209 12:37:25.816775 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:37:26 crc kubenswrapper[4970]: E1209 12:37:26.815967 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:37:26 crc kubenswrapper[4970]: I1209 12:37:26.837219 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerStarted","Data":"ad7e46f490e04acfe3a1302091136ac653e4911c58628ac07f1e8fe09e09b651"} Dec 09 12:37:27 crc kubenswrapper[4970]: I1209 12:37:27.050188 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-hmlhf"] Dec 09 12:37:27 crc kubenswrapper[4970]: I1209 12:37:27.062402 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-ntn5d"] Dec 09 12:37:27 crc kubenswrapper[4970]: I1209 12:37:27.077015 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-hmlhf"] Dec 09 12:37:27 crc kubenswrapper[4970]: I1209 12:37:27.090992 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-9c61-account-create-update-q6929"] Dec 09 12:37:27 crc kubenswrapper[4970]: I1209 12:37:27.104473 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7217-account-create-update-zcxd7"] Dec 09 12:37:27 crc kubenswrapper[4970]: 
I1209 12:37:27.132936 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-ntn5d"] Dec 09 12:37:27 crc kubenswrapper[4970]: I1209 12:37:27.151293 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-9c61-account-create-update-q6929"] Dec 09 12:37:27 crc kubenswrapper[4970]: I1209 12:37:27.163943 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7217-account-create-update-zcxd7"] Dec 09 12:37:27 crc kubenswrapper[4970]: I1209 12:37:27.833883 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="144957ba-384e-40b0-88d5-17afeaaf3795" path="/var/lib/kubelet/pods/144957ba-384e-40b0-88d5-17afeaaf3795/volumes" Dec 09 12:37:27 crc kubenswrapper[4970]: I1209 12:37:27.835206 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="252ca514-15c5-480d-a81e-d8230171c857" path="/var/lib/kubelet/pods/252ca514-15c5-480d-a81e-d8230171c857/volumes" Dec 09 12:37:27 crc kubenswrapper[4970]: I1209 12:37:27.836191 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="617712c7-90b6-478c-a60a-0d830b8582ab" path="/var/lib/kubelet/pods/617712c7-90b6-478c-a60a-0d830b8582ab/volumes" Dec 09 12:37:27 crc kubenswrapper[4970]: I1209 12:37:27.836894 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7006e6ae-7748-4192-9001-3c29b208e763" path="/var/lib/kubelet/pods/7006e6ae-7748-4192-9001-3c29b208e763/volumes" Dec 09 12:37:29 crc kubenswrapper[4970]: I1209 12:37:29.033496 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-c6rs9"] Dec 09 12:37:29 crc kubenswrapper[4970]: I1209 12:37:29.044465 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-0bbf-account-create-update-n9d8q"] Dec 09 12:37:29 crc kubenswrapper[4970]: I1209 12:37:29.058994 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-0bbf-account-create-update-n9d8q"] Dec 09 12:37:29 crc kubenswrapper[4970]: I1209 12:37:29.073108 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-c6rs9"] Dec 09 12:37:29 crc kubenswrapper[4970]: I1209 12:37:29.830872 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="816b8c67-0a22-47ae-a457-28928814a337" path="/var/lib/kubelet/pods/816b8c67-0a22-47ae-a457-28928814a337/volumes" Dec 09 12:37:29 crc kubenswrapper[4970]: I1209 12:37:29.832058 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a787c3f-0ec1-404c-abc0-c57508c7e5b9" path="/var/lib/kubelet/pods/8a787c3f-0ec1-404c-abc0-c57508c7e5b9/volumes" Dec 09 12:37:32 crc kubenswrapper[4970]: I1209 12:37:32.031822 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-vqnvh"] Dec 09 12:37:32 crc kubenswrapper[4970]: I1209 12:37:32.045239 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-vqnvh"] Dec 09 12:37:33 crc kubenswrapper[4970]: I1209 12:37:33.032439 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-fd9db"] Dec 09 12:37:33 crc kubenswrapper[4970]: I1209 12:37:33.044993 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-fd9db"] Dec 09 12:37:33 crc kubenswrapper[4970]: I1209 12:37:33.831352 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2b3fb48-ea66-4c6f-92ee-b8cd8b320296" 
path="/var/lib/kubelet/pods/c2b3fb48-ea66-4c6f-92ee-b8cd8b320296/volumes" Dec 09 12:37:33 crc kubenswrapper[4970]: I1209 12:37:33.832566 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9350a1c-3918-4895-8383-f7c306cb6063" path="/var/lib/kubelet/pods/c9350a1c-3918-4895-8383-f7c306cb6063/volumes" Dec 09 12:37:34 crc kubenswrapper[4970]: I1209 12:37:34.037987 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-375f-account-create-update-pbtc9"] Dec 09 12:37:34 crc kubenswrapper[4970]: I1209 12:37:34.054164 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7d82-account-create-update-sw6fr"] Dec 09 12:37:34 crc kubenswrapper[4970]: I1209 12:37:34.065511 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-375f-account-create-update-pbtc9"] Dec 09 12:37:34 crc kubenswrapper[4970]: I1209 12:37:34.080570 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-7d82-account-create-update-sw6fr"] Dec 09 12:37:35 crc kubenswrapper[4970]: I1209 12:37:35.826556 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dfafdd9-2928-4ca6-b298-fcb9eb37b9ab" path="/var/lib/kubelet/pods/7dfafdd9-2928-4ca6-b298-fcb9eb37b9ab/volumes" Dec 09 12:37:35 crc kubenswrapper[4970]: I1209 12:37:35.827810 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9c45310-fd46-4964-8cfe-e6ab7a1e1971" path="/var/lib/kubelet/pods/a9c45310-fd46-4964-8cfe-e6ab7a1e1971/volumes" Dec 09 12:37:36 crc kubenswrapper[4970]: I1209 12:37:36.904555 4970 scope.go:117] "RemoveContainer" containerID="b0c3198748e2b1e64966524283c61ac6803ed4b23106b49d19cfbe04ff1fecea" Dec 09 12:37:36 crc kubenswrapper[4970]: I1209 12:37:36.932212 4970 scope.go:117] "RemoveContainer" containerID="e0b315191296c8a259cf12f4879f7c8c9d3e7fdcadc5dc0c0e1e39984f065736" Dec 09 12:37:37 crc kubenswrapper[4970]: I1209 12:37:37.029108 4970 scope.go:117] "RemoveContainer" containerID="0f5af1b02e6cf04b0bdf9f490b9e4c5225a87adcf9ae4139879d97f844b851a9" Dec 09 12:37:37 crc kubenswrapper[4970]: I1209 12:37:37.077716 4970 scope.go:117] "RemoveContainer" containerID="add168fb3bf2e17ae7a79cf9c1335e131e30cef7b064a62f6cd08c3762af0dc6" Dec 09 12:37:37 crc kubenswrapper[4970]: I1209 12:37:37.138240 4970 scope.go:117] "RemoveContainer" containerID="91c1f28e4618e0ea4c1f3f3a480261000caeec79eba8ffe4c292d6bf007949d7" Dec 09 12:37:37 crc kubenswrapper[4970]: I1209 12:37:37.191285 4970 scope.go:117] "RemoveContainer" containerID="f401f182f065eb7abd925f590524d6803d5a4476a4bc1ed0664592e555037836" Dec 09 12:37:37 crc kubenswrapper[4970]: I1209 12:37:37.213484 4970 scope.go:117] "RemoveContainer" containerID="56eb3c45b89f81276c0e8d18da3cb7f5b0d3227b017713fbb3916be03cd8c361" Dec 09 12:37:37 crc kubenswrapper[4970]: I1209 12:37:37.268230 4970 scope.go:117] "RemoveContainer" containerID="a34fdffe67a6a0105ac052990234e37e7de8c08c0cbf6d3fedbc8560ec8fbb4f" Dec 09 12:37:37 crc kubenswrapper[4970]: I1209 12:37:37.294630 4970 scope.go:117] "RemoveContainer" containerID="79b1957bbff6cdd9db7085f8a847c79ad7b66987b1048f116172c8895ce907f5" Dec 09 12:37:37 crc kubenswrapper[4970]: I1209 12:37:37.313185 4970 scope.go:117] "RemoveContainer" containerID="d2df2d6f2561f63b243a7a9237eaf20c55c64327fb1f0c03131161b85cec8182" Dec 09 12:37:37 crc kubenswrapper[4970]: I1209 12:37:37.334031 4970 scope.go:117] "RemoveContainer" containerID="7a518d7d526ea14824f1a45242e7daf010d21cd4ac7aa7cbce17137cc19e8adf" Dec 09 12:37:37 crc 
kubenswrapper[4970]: E1209 12:37:37.826201 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:37:39 crc kubenswrapper[4970]: I1209 12:37:39.054794 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-15fb-account-create-update-nrf89"] Dec 09 12:37:39 crc kubenswrapper[4970]: I1209 12:37:39.075384 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-69sxs"] Dec 09 12:37:39 crc kubenswrapper[4970]: I1209 12:37:39.089716 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-15fb-account-create-update-nrf89"] Dec 09 12:37:39 crc kubenswrapper[4970]: I1209 12:37:39.107977 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-3015-account-create-update-xs7kl"] Dec 09 12:37:39 crc kubenswrapper[4970]: I1209 12:37:39.123138 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-4118-account-create-update-bzvb2"] Dec 09 12:37:39 crc kubenswrapper[4970]: I1209 12:37:39.140318 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-68fgf"] Dec 09 12:37:39 crc kubenswrapper[4970]: I1209 12:37:39.156359 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-69sxs"] Dec 09 12:37:39 crc kubenswrapper[4970]: I1209 12:37:39.167701 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-3015-account-create-update-xs7kl"] Dec 09 12:37:39 crc kubenswrapper[4970]: I1209 12:37:39.183238 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-4118-account-create-update-bzvb2"] Dec 09 12:37:39 crc kubenswrapper[4970]: I1209 12:37:39.197237 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-618b-account-create-update-k6f7v"] Dec 09 12:37:39 crc kubenswrapper[4970]: I1209 12:37:39.212927 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-j77r5"] Dec 09 12:37:39 crc kubenswrapper[4970]: I1209 12:37:39.227333 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-68fgf"] Dec 09 12:37:39 crc kubenswrapper[4970]: I1209 12:37:39.239742 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-67k7r"] Dec 09 12:37:39 crc kubenswrapper[4970]: I1209 12:37:39.250674 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-j77r5"] Dec 09 12:37:39 crc kubenswrapper[4970]: I1209 12:37:39.261806 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-618b-account-create-update-k6f7v"] Dec 09 12:37:39 crc kubenswrapper[4970]: I1209 12:37:39.272428 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-67k7r"] Dec 09 12:37:39 crc kubenswrapper[4970]: I1209 12:37:39.828724 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b640fc8-1ab1-4536-afe8-6b8c4c01c32a" path="/var/lib/kubelet/pods/2b640fc8-1ab1-4536-afe8-6b8c4c01c32a/volumes" Dec 09 12:37:39 crc kubenswrapper[4970]: I1209 12:37:39.829982 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33e973b8-701c-4376-9b78-c575d37f901b" path="/var/lib/kubelet/pods/33e973b8-701c-4376-9b78-c575d37f901b/volumes" Dec 09 
12:37:39 crc kubenswrapper[4970]: I1209 12:37:39.831083 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e1b5a6d-bae4-4e66-9b80-1a551d907036" path="/var/lib/kubelet/pods/6e1b5a6d-bae4-4e66-9b80-1a551d907036/volumes" Dec 09 12:37:39 crc kubenswrapper[4970]: I1209 12:37:39.832456 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="883c54c6-9452-4305-94e9-13d5cefd22c8" path="/var/lib/kubelet/pods/883c54c6-9452-4305-94e9-13d5cefd22c8/volumes" Dec 09 12:37:39 crc kubenswrapper[4970]: I1209 12:37:39.834971 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="965b8097-10dd-4a96-b33b-a0b6a3d5e35f" path="/var/lib/kubelet/pods/965b8097-10dd-4a96-b33b-a0b6a3d5e35f/volumes" Dec 09 12:37:39 crc kubenswrapper[4970]: I1209 12:37:39.835798 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98ebedad-42a0-4ee4-ade1-fd913bd026d6" path="/var/lib/kubelet/pods/98ebedad-42a0-4ee4-ade1-fd913bd026d6/volumes" Dec 09 12:37:39 crc kubenswrapper[4970]: I1209 12:37:39.836565 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0c2ad87-9313-4c2d-b691-0c907b91612a" path="/var/lib/kubelet/pods/c0c2ad87-9313-4c2d-b691-0c907b91612a/volumes" Dec 09 12:37:39 crc kubenswrapper[4970]: I1209 12:37:39.837912 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d91a3aef-64fa-4f87-a85f-64723d75894a" path="/var/lib/kubelet/pods/d91a3aef-64fa-4f87-a85f-64723d75894a/volumes" Dec 09 12:37:41 crc kubenswrapper[4970]: E1209 12:37:41.815139 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:37:50 crc kubenswrapper[4970]: E1209 12:37:50.815229 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:37:54 crc kubenswrapper[4970]: E1209 12:37:54.816207 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:38:02 crc kubenswrapper[4970]: I1209 12:38:02.053863 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-k4956"] Dec 09 12:38:02 crc kubenswrapper[4970]: I1209 12:38:02.068992 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-k4956"] Dec 09 12:38:03 crc kubenswrapper[4970]: I1209 12:38:03.827841 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a416bd4a-683a-43cf-867a-fb60427671a4" path="/var/lib/kubelet/pods/a416bd4a-683a-43cf-867a-fb60427671a4/volumes" Dec 09 12:38:03 crc kubenswrapper[4970]: E1209 12:38:03.919111 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading 
manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 12:38:03 crc kubenswrapper[4970]: E1209 12:38:03.919183 4970 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 12:38:03 crc kubenswrapper[4970]: E1209 12:38:03.919393 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w5gl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-snjvx_openstack(cda5b8c0-51ad-4a63-bf1d-e3546f098ad3): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Dec 09 12:38:03 crc kubenswrapper[4970]: E1209 12:38:03.920577 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:38:08 crc kubenswrapper[4970]: E1209 12:38:08.815006 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:38:15 crc kubenswrapper[4970]: E1209 12:38:15.814848 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:38:22 crc kubenswrapper[4970]: E1209 12:38:22.941188 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 12:38:22 crc kubenswrapper[4970]: E1209 12:38:22.941896 4970 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 12:38:22 crc kubenswrapper[4970]: E1209 12:38:22.942026 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n679h5b7h5f8h695h656h57bh5c7h7bh5c5hc4h5hbch654h5h5b7h54dh5fh688h548h57bh6fh54fh5d4hf7h5ffh54ch566h565hdchb4h65dh5fbq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqqs9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ea52f6b9-599e-4ac5-94c6-79949c705be8): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 12:38:22 crc kubenswrapper[4970]: E1209 12:38:22.943210 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:38:30 crc kubenswrapper[4970]: E1209 12:38:30.814711 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:38:35 crc kubenswrapper[4970]: I1209 12:38:35.042530 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-qvmmx"] Dec 09 12:38:35 crc kubenswrapper[4970]: I1209 12:38:35.056338 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-qvmmx"] Dec 09 12:38:35 crc kubenswrapper[4970]: E1209 12:38:35.815105 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:38:35 crc kubenswrapper[4970]: I1209 12:38:35.826149 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de81c9d6-10bd-46d2-ab77-74463359dc5a" path="/var/lib/kubelet/pods/de81c9d6-10bd-46d2-ab77-74463359dc5a/volumes" Dec 09 12:38:37 crc kubenswrapper[4970]: I1209 12:38:37.572853 4970 scope.go:117] "RemoveContainer" containerID="9b0c8785223be68c19868333d78fb8be58ac176ad24bd768f7dba22251390577" Dec 09 12:38:37 crc kubenswrapper[4970]: I1209 12:38:37.603434 4970 scope.go:117] "RemoveContainer" containerID="a21ea59e5797cd0d4f50af32ab388990c7e45331a95bd1718b87235bb932ef3f" Dec 09 12:38:37 crc kubenswrapper[4970]: I1209 12:38:37.651919 4970 scope.go:117] "RemoveContainer" containerID="4b94b6459129954a1cb2c6e4fd36ce3dc3d56ffab0e838cd5aa32637d243acf1" Dec 09 12:38:38 crc kubenswrapper[4970]: I1209 12:38:38.184941 4970 scope.go:117] "RemoveContainer" containerID="fe47a908b5bd18e49ce18207ded9b53676917990d658fcdd7786aa3d96ae23d0" Dec 09 12:38:38 crc kubenswrapper[4970]: I1209 12:38:38.209985 4970 scope.go:117] "RemoveContainer" containerID="bab086407cb1e5638c53046c47ccb87af2364b0463595cc65951b194d6623114" Dec 09 12:38:38 crc kubenswrapper[4970]: I1209 12:38:38.267004 4970 scope.go:117] "RemoveContainer" containerID="458c6847e526a6c0c70552dd9f86d15eff7f762ee6d909197a05005ff05556da" Dec 09 12:38:38 crc kubenswrapper[4970]: I1209 12:38:38.325266 4970 scope.go:117] "RemoveContainer" containerID="a8b5a02f6c119357255405756cbad1a9bc2203cf0c1237a87424276af506ff74" Dec 09 12:38:38 crc kubenswrapper[4970]: I1209 12:38:38.344159 4970 scope.go:117] "RemoveContainer" containerID="d5014de01302e3ee549b56bb1e3bfbcfc669a6135c51ec2866bd62aabd776ad0" Dec 09 12:38:38 crc kubenswrapper[4970]: I1209 12:38:38.372382 4970 scope.go:117] "RemoveContainer" containerID="26d3fc06fd718f9f3455f3d045270ec24bfc9e7d2df0d3b08e80c1756fd605d8" Dec 09 12:38:38 crc kubenswrapper[4970]: I1209 12:38:38.400995 4970 scope.go:117] "RemoveContainer" containerID="42d40978701d662a5b06d3a0bfa2d40cfb6aea03e1c9781801e62e3e30ee344c" Dec 09 12:38:42 crc kubenswrapper[4970]: I1209 12:38:42.041434 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-cltzd"] Dec 09 12:38:42 crc kubenswrapper[4970]: I1209 12:38:42.060644 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/placement-db-sync-2bmkk"] Dec 09 12:38:42 crc kubenswrapper[4970]: I1209 12:38:42.071689 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-cltzd"] Dec 09 12:38:42 crc kubenswrapper[4970]: I1209 12:38:42.083380 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-2bmkk"] Dec 09 12:38:42 crc kubenswrapper[4970]: E1209 12:38:42.815289 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:38:43 crc kubenswrapper[4970]: I1209 12:38:43.847275 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="027edf40-c863-46a8-8950-322f56db87d3" path="/var/lib/kubelet/pods/027edf40-c863-46a8-8950-322f56db87d3/volumes" Dec 09 12:38:43 crc kubenswrapper[4970]: I1209 12:38:43.849694 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="222db933-1bf5-4df0-aa84-362453b9ba35" path="/var/lib/kubelet/pods/222db933-1bf5-4df0-aa84-362453b9ba35/volumes" Dec 09 12:38:47 crc kubenswrapper[4970]: I1209 12:38:47.029917 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-4kwdc"] Dec 09 12:38:47 crc kubenswrapper[4970]: I1209 12:38:47.044065 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-4kwdc"] Dec 09 12:38:47 crc kubenswrapper[4970]: E1209 12:38:47.822559 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:38:47 crc kubenswrapper[4970]: I1209 12:38:47.828067 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41a40ba0-3b8c-4068-9c2c-1e07dfba18a4" path="/var/lib/kubelet/pods/41a40ba0-3b8c-4068-9c2c-1e07dfba18a4/volumes" Dec 09 12:38:56 crc kubenswrapper[4970]: E1209 12:38:56.814535 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:39:00 crc kubenswrapper[4970]: E1209 12:39:00.815569 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:39:03 crc kubenswrapper[4970]: I1209 12:39:03.048562 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-rxs6p"] Dec 09 12:39:03 crc kubenswrapper[4970]: I1209 12:39:03.062307 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-rxs6p"] Dec 09 12:39:03 crc kubenswrapper[4970]: I1209 12:39:03.826689 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="973c0567-09c0-4313-8c9f-ee74a3188226" 
path="/var/lib/kubelet/pods/973c0567-09c0-4313-8c9f-ee74a3188226/volumes" Dec 09 12:39:05 crc kubenswrapper[4970]: I1209 12:39:05.035520 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-849mw"] Dec 09 12:39:05 crc kubenswrapper[4970]: I1209 12:39:05.046432 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-849mw"] Dec 09 12:39:05 crc kubenswrapper[4970]: I1209 12:39:05.824614 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43ba192b-df87-4028-a33e-4ff96d287644" path="/var/lib/kubelet/pods/43ba192b-df87-4028-a33e-4ff96d287644/volumes" Dec 09 12:39:11 crc kubenswrapper[4970]: E1209 12:39:11.815322 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:39:14 crc kubenswrapper[4970]: E1209 12:39:14.815657 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:39:25 crc kubenswrapper[4970]: E1209 12:39:25.816159 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:39:27 crc kubenswrapper[4970]: E1209 12:39:27.822468 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:39:37 crc kubenswrapper[4970]: E1209 12:39:37.822679 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:39:38 crc kubenswrapper[4970]: I1209 12:39:38.618944 4970 scope.go:117] "RemoveContainer" containerID="9b1fbde0d51b59018f433c7d4c1f3bc8089323c52dcb428fb5cc223de8f43107" Dec 09 12:39:38 crc kubenswrapper[4970]: I1209 12:39:38.664051 4970 scope.go:117] "RemoveContainer" containerID="d8beb4d4908bb9c21bcaddb9d73797d27da82ff3c5b23cf352fd6c416056603c" Dec 09 12:39:38 crc kubenswrapper[4970]: I1209 12:39:38.737724 4970 scope.go:117] "RemoveContainer" containerID="b71736d4c254616e4f33e65c60cb89829cf4499185a1a0679c62892db57458fe" Dec 09 12:39:38 crc kubenswrapper[4970]: I1209 12:39:38.795498 4970 scope.go:117] "RemoveContainer" containerID="20001436ea007dee95f34238e1d1cdaa96eef1b75c0520dd4a1fb3b08b17a534" Dec 09 12:39:38 crc kubenswrapper[4970]: I1209 12:39:38.856666 4970 scope.go:117] "RemoveContainer" containerID="95fe8f1182a7d436082fc9b369ff41ea9510031781f8713069122c5a6b30371a" Dec 09 12:39:38 
crc kubenswrapper[4970]: I1209 12:39:38.894023 4970 scope.go:117] "RemoveContainer" containerID="9c1cf379934d31e88480a5ea3214360b781f8a8077693ee79346b6b851871483" Dec 09 12:39:38 crc kubenswrapper[4970]: I1209 12:39:38.932370 4970 scope.go:117] "RemoveContainer" containerID="b71341b506c6fde3668d225099185895891e9697836992f67b55dd78d5cbf49b" Dec 09 12:39:39 crc kubenswrapper[4970]: E1209 12:39:39.815209 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:39:46 crc kubenswrapper[4970]: I1209 12:39:46.010644 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:39:46 crc kubenswrapper[4970]: I1209 12:39:46.011110 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:39:49 crc kubenswrapper[4970]: E1209 12:39:49.815450 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:39:51 crc kubenswrapper[4970]: E1209 12:39:51.815749 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:40:01 crc kubenswrapper[4970]: I1209 12:40:01.060524 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-3084-account-create-update-jgbjt"] Dec 09 12:40:01 crc kubenswrapper[4970]: I1209 12:40:01.070994 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-3084-account-create-update-jgbjt"] Dec 09 12:40:01 crc kubenswrapper[4970]: E1209 12:40:01.815227 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:40:01 crc kubenswrapper[4970]: I1209 12:40:01.833602 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9e3f275-d39e-4777-a04a-ce4b2a642952" path="/var/lib/kubelet/pods/b9e3f275-d39e-4777-a04a-ce4b2a642952/volumes" Dec 09 12:40:02 crc kubenswrapper[4970]: I1209 12:40:02.043111 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-z8cw7"] Dec 09 12:40:02 crc kubenswrapper[4970]: I1209 12:40:02.055860 4970 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-f4mbj"] Dec 09 12:40:02 crc kubenswrapper[4970]: I1209 12:40:02.068779 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-ce27-account-create-update-5mndj"] Dec 09 12:40:02 crc kubenswrapper[4970]: I1209 12:40:02.078451 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-thg22"] Dec 09 12:40:02 crc kubenswrapper[4970]: I1209 12:40:02.088405 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-ce27-account-create-update-5mndj"] Dec 09 12:40:02 crc kubenswrapper[4970]: I1209 12:40:02.098795 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-z8cw7"] Dec 09 12:40:02 crc kubenswrapper[4970]: I1209 12:40:02.108680 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-f4mbj"] Dec 09 12:40:02 crc kubenswrapper[4970]: I1209 12:40:02.118273 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-thg22"] Dec 09 12:40:03 crc kubenswrapper[4970]: I1209 12:40:03.028817 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-c370-account-create-update-kwdtv"] Dec 09 12:40:03 crc kubenswrapper[4970]: I1209 12:40:03.046830 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-c370-account-create-update-kwdtv"] Dec 09 12:40:03 crc kubenswrapper[4970]: I1209 12:40:03.828290 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="213b2a4a-6575-4462-8637-09491c390553" path="/var/lib/kubelet/pods/213b2a4a-6575-4462-8637-09491c390553/volumes" Dec 09 12:40:03 crc kubenswrapper[4970]: I1209 12:40:03.830530 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="277c3243-bbe4-436e-a850-3619bcecc42a" path="/var/lib/kubelet/pods/277c3243-bbe4-436e-a850-3619bcecc42a/volumes" Dec 09 12:40:03 crc kubenswrapper[4970]: I1209 12:40:03.831401 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5375400c-350f-41f8-83ff-94071d6cc869" path="/var/lib/kubelet/pods/5375400c-350f-41f8-83ff-94071d6cc869/volumes" Dec 09 12:40:03 crc kubenswrapper[4970]: I1209 12:40:03.832083 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87f97ec0-b4d2-4a39-9fc2-dbe1a5791c17" path="/var/lib/kubelet/pods/87f97ec0-b4d2-4a39-9fc2-dbe1a5791c17/volumes" Dec 09 12:40:03 crc kubenswrapper[4970]: I1209 12:40:03.833295 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6427604-e1a0-4853-bd35-71a69164978f" path="/var/lib/kubelet/pods/f6427604-e1a0-4853-bd35-71a69164978f/volumes" Dec 09 12:40:05 crc kubenswrapper[4970]: E1209 12:40:05.814990 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:40:15 crc kubenswrapper[4970]: E1209 12:40:15.817074 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:40:16 crc kubenswrapper[4970]: I1209 12:40:16.011369 
4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:40:16 crc kubenswrapper[4970]: I1209 12:40:16.011458 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:40:20 crc kubenswrapper[4970]: E1209 12:40:20.817783 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:40:28 crc kubenswrapper[4970]: E1209 12:40:28.814928 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:40:31 crc kubenswrapper[4970]: E1209 12:40:31.816726 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:40:37 crc kubenswrapper[4970]: I1209 12:40:37.058690 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-m8kxt"] Dec 09 12:40:37 crc kubenswrapper[4970]: I1209 12:40:37.071312 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-m8kxt"] Dec 09 12:40:37 crc kubenswrapper[4970]: I1209 12:40:37.824292 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92e5909b-9f51-4a80-824b-b633efbed63b" path="/var/lib/kubelet/pods/92e5909b-9f51-4a80-824b-b633efbed63b/volumes" Dec 09 12:40:39 crc kubenswrapper[4970]: I1209 12:40:39.109371 4970 scope.go:117] "RemoveContainer" containerID="f93fccb5021fdf96c86298a57962ad656b1caa09ca0aaa21f850a136d35da43b" Dec 09 12:40:39 crc kubenswrapper[4970]: I1209 12:40:39.153447 4970 scope.go:117] "RemoveContainer" containerID="bf9fbe2859747fd62421d9f31218e353518c50dd55e1fd78f13ec74a0ec726b9" Dec 09 12:40:39 crc kubenswrapper[4970]: I1209 12:40:39.206775 4970 scope.go:117] "RemoveContainer" containerID="2f7ff6aa9f0e389598fdafe9691fe6bdab04c3382bccad80ae1b2124230428d5" Dec 09 12:40:39 crc kubenswrapper[4970]: I1209 12:40:39.276054 4970 scope.go:117] "RemoveContainer" containerID="3d2f114427901867ba62faf64317031624151299301e22f9e5a44dd5c6fc399e" Dec 09 12:40:39 crc kubenswrapper[4970]: I1209 12:40:39.315195 4970 scope.go:117] "RemoveContainer" containerID="8e7e02f2a80fc8868e889e1a05bfc91d9ff57befb9874c28d33f3be135b24f8d" Dec 09 12:40:39 crc kubenswrapper[4970]: I1209 12:40:39.367403 4970 scope.go:117] "RemoveContainer" 
containerID="84fbb57175c0d463209d0a524372b3c1c0af13759723ff20c3f048333747168f" Dec 09 12:40:39 crc kubenswrapper[4970]: I1209 12:40:39.421153 4970 scope.go:117] "RemoveContainer" containerID="d1cd7182f2627bd3a1de015d7f9bba4b971edd55e52e9a52987550cb0669d194" Dec 09 12:40:39 crc kubenswrapper[4970]: E1209 12:40:39.814508 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:40:43 crc kubenswrapper[4970]: E1209 12:40:43.814534 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:40:46 crc kubenswrapper[4970]: I1209 12:40:46.011067 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:40:46 crc kubenswrapper[4970]: I1209 12:40:46.011570 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:40:46 crc kubenswrapper[4970]: I1209 12:40:46.011621 4970 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" Dec 09 12:40:46 crc kubenswrapper[4970]: I1209 12:40:46.012852 4970 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ad7e46f490e04acfe3a1302091136ac653e4911c58628ac07f1e8fe09e09b651"} pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 12:40:46 crc kubenswrapper[4970]: I1209 12:40:46.012946 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" containerID="cri-o://ad7e46f490e04acfe3a1302091136ac653e4911c58628ac07f1e8fe09e09b651" gracePeriod=600 Dec 09 12:40:46 crc kubenswrapper[4970]: I1209 12:40:46.966928 4970 generic.go:334] "Generic (PLEG): container finished" podID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerID="ad7e46f490e04acfe3a1302091136ac653e4911c58628ac07f1e8fe09e09b651" exitCode=0 Dec 09 12:40:46 crc kubenswrapper[4970]: I1209 12:40:46.966977 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerDied","Data":"ad7e46f490e04acfe3a1302091136ac653e4911c58628ac07f1e8fe09e09b651"} Dec 09 12:40:46 crc kubenswrapper[4970]: I1209 12:40:46.967008 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerStarted","Data":"be63a7b3e0b75073001cb5f12f2434f519a3eef9134f49397aff207cf8a73d70"} Dec 09 12:40:46 crc kubenswrapper[4970]: I1209 12:40:46.967028 4970 scope.go:117] "RemoveContainer" containerID="7fd6f109e392e855fb1d78959e8fc5739de01f58f1c3cbb8ffead2654c7f16b5" Dec 09 12:40:52 crc kubenswrapper[4970]: E1209 12:40:52.815191 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:40:55 crc kubenswrapper[4970]: E1209 12:40:55.815855 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:41:02 crc kubenswrapper[4970]: I1209 12:41:02.044664 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-5tqp5"] Dec 09 12:41:02 crc kubenswrapper[4970]: I1209 12:41:02.056088 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-xx4x2"] Dec 09 12:41:02 crc kubenswrapper[4970]: I1209 12:41:02.068304 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-5tqp5"] Dec 09 12:41:02 crc kubenswrapper[4970]: I1209 12:41:02.078192 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-xx4x2"] Dec 09 12:41:03 crc kubenswrapper[4970]: I1209 12:41:03.039654 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-dgscz"] Dec 09 12:41:03 crc kubenswrapper[4970]: I1209 12:41:03.053317 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-dgscz"] Dec 09 12:41:03 crc kubenswrapper[4970]: I1209 12:41:03.066485 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-7d90-account-create-update-llgqt"] Dec 09 12:41:03 crc kubenswrapper[4970]: I1209 12:41:03.077888 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-7d90-account-create-update-llgqt"] Dec 09 12:41:03 crc kubenswrapper[4970]: I1209 12:41:03.829487 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3430931f-908e-4e62-8711-e8e8e73d9334" path="/var/lib/kubelet/pods/3430931f-908e-4e62-8711-e8e8e73d9334/volumes" Dec 09 12:41:03 crc kubenswrapper[4970]: I1209 12:41:03.830285 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="854860ea-b756-4edd-88ea-6f1ad333f7bc" path="/var/lib/kubelet/pods/854860ea-b756-4edd-88ea-6f1ad333f7bc/volumes" Dec 09 12:41:03 crc kubenswrapper[4970]: I1209 12:41:03.831015 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ed63438-7cd6-4d2f-b591-2e5a7c74a94b" path="/var/lib/kubelet/pods/8ed63438-7cd6-4d2f-b591-2e5a7c74a94b/volumes" Dec 09 12:41:03 crc kubenswrapper[4970]: I1209 12:41:03.832211 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b23be82c-e164-4128-9b65-dd173e1db58b" path="/var/lib/kubelet/pods/b23be82c-e164-4128-9b65-dd173e1db58b/volumes" Dec 09 12:41:05 crc 
kubenswrapper[4970]: E1209 12:41:05.815853 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:41:10 crc kubenswrapper[4970]: E1209 12:41:10.817554 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:41:16 crc kubenswrapper[4970]: E1209 12:41:16.816994 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:41:21 crc kubenswrapper[4970]: I1209 12:41:21.052031 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-mjrlc"] Dec 09 12:41:21 crc kubenswrapper[4970]: I1209 12:41:21.067979 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-mjrlc"] Dec 09 12:41:21 crc kubenswrapper[4970]: I1209 12:41:21.826752 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48a3105b-e980-44c6-bb0d-a8db895867ee" path="/var/lib/kubelet/pods/48a3105b-e980-44c6-bb0d-a8db895867ee/volumes" Dec 09 12:41:23 crc kubenswrapper[4970]: E1209 12:41:23.814794 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:41:31 crc kubenswrapper[4970]: E1209 12:41:31.817131 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:41:38 crc kubenswrapper[4970]: I1209 12:41:38.507473 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fx9ln"] Dec 09 12:41:38 crc kubenswrapper[4970]: I1209 12:41:38.510630 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fx9ln" Dec 09 12:41:38 crc kubenswrapper[4970]: I1209 12:41:38.524409 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fx9ln"] Dec 09 12:41:38 crc kubenswrapper[4970]: I1209 12:41:38.617281 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkcgh\" (UniqueName: \"kubernetes.io/projected/92b296b7-1d65-4526-97e9-4738cc459db5-kube-api-access-wkcgh\") pod \"redhat-marketplace-fx9ln\" (UID: \"92b296b7-1d65-4526-97e9-4738cc459db5\") " pod="openshift-marketplace/redhat-marketplace-fx9ln" Dec 09 12:41:38 crc kubenswrapper[4970]: I1209 12:41:38.617349 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92b296b7-1d65-4526-97e9-4738cc459db5-catalog-content\") pod \"redhat-marketplace-fx9ln\" (UID: \"92b296b7-1d65-4526-97e9-4738cc459db5\") " pod="openshift-marketplace/redhat-marketplace-fx9ln" Dec 09 12:41:38 crc kubenswrapper[4970]: I1209 12:41:38.617540 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92b296b7-1d65-4526-97e9-4738cc459db5-utilities\") pod \"redhat-marketplace-fx9ln\" (UID: \"92b296b7-1d65-4526-97e9-4738cc459db5\") " pod="openshift-marketplace/redhat-marketplace-fx9ln" Dec 09 12:41:38 crc kubenswrapper[4970]: I1209 12:41:38.719575 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92b296b7-1d65-4526-97e9-4738cc459db5-utilities\") pod \"redhat-marketplace-fx9ln\" (UID: \"92b296b7-1d65-4526-97e9-4738cc459db5\") " pod="openshift-marketplace/redhat-marketplace-fx9ln" Dec 09 12:41:38 crc kubenswrapper[4970]: I1209 12:41:38.719839 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkcgh\" (UniqueName: \"kubernetes.io/projected/92b296b7-1d65-4526-97e9-4738cc459db5-kube-api-access-wkcgh\") pod \"redhat-marketplace-fx9ln\" (UID: \"92b296b7-1d65-4526-97e9-4738cc459db5\") " pod="openshift-marketplace/redhat-marketplace-fx9ln" Dec 09 12:41:38 crc kubenswrapper[4970]: I1209 12:41:38.719900 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92b296b7-1d65-4526-97e9-4738cc459db5-catalog-content\") pod \"redhat-marketplace-fx9ln\" (UID: \"92b296b7-1d65-4526-97e9-4738cc459db5\") " pod="openshift-marketplace/redhat-marketplace-fx9ln" Dec 09 12:41:38 crc kubenswrapper[4970]: I1209 12:41:38.720270 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92b296b7-1d65-4526-97e9-4738cc459db5-utilities\") pod \"redhat-marketplace-fx9ln\" (UID: \"92b296b7-1d65-4526-97e9-4738cc459db5\") " pod="openshift-marketplace/redhat-marketplace-fx9ln" Dec 09 12:41:38 crc kubenswrapper[4970]: I1209 12:41:38.720535 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92b296b7-1d65-4526-97e9-4738cc459db5-catalog-content\") pod \"redhat-marketplace-fx9ln\" (UID: \"92b296b7-1d65-4526-97e9-4738cc459db5\") " pod="openshift-marketplace/redhat-marketplace-fx9ln" Dec 09 12:41:38 crc kubenswrapper[4970]: I1209 12:41:38.740278 4970 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-wkcgh\" (UniqueName: \"kubernetes.io/projected/92b296b7-1d65-4526-97e9-4738cc459db5-kube-api-access-wkcgh\") pod \"redhat-marketplace-fx9ln\" (UID: \"92b296b7-1d65-4526-97e9-4738cc459db5\") " pod="openshift-marketplace/redhat-marketplace-fx9ln"
Dec 09 12:41:38 crc kubenswrapper[4970]: E1209 12:41:38.814041 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8"
Dec 09 12:41:38 crc kubenswrapper[4970]: I1209 12:41:38.838880 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fx9ln"
Dec 09 12:41:39 crc kubenswrapper[4970]: I1209 12:41:39.591749 4970 scope.go:117] "RemoveContainer" containerID="9b9df0970da1affe9c1b1f6c1c0086bbcad110199673039f43bc7eedd8e3f436"
Dec 09 12:41:40 crc kubenswrapper[4970]: I1209 12:41:40.375942 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fx9ln"]
Dec 09 12:41:40 crc kubenswrapper[4970]: I1209 12:41:40.384422 4970 scope.go:117] "RemoveContainer" containerID="65159f9d640cbebd055aa85c5fedd9135fdbea41d060e380f315b07df21c9d56"
Dec 09 12:41:40 crc kubenswrapper[4970]: I1209 12:41:40.468831 4970 scope.go:117] "RemoveContainer" containerID="7ac67b7728e2e1ecb824af3fa21339f8b5a1397f3fb341ba9a20469473e853c6"
Dec 09 12:41:40 crc kubenswrapper[4970]: I1209 12:41:40.591497 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fx9ln" event={"ID":"92b296b7-1d65-4526-97e9-4738cc459db5","Type":"ContainerStarted","Data":"588caac8714099760974a384dcb9586febc24f4e4a41d4c2aaace64e3cd4a5c7"}
Dec 09 12:41:40 crc kubenswrapper[4970]: I1209 12:41:40.669023 4970 scope.go:117] "RemoveContainer" containerID="9e665c463f13323cda9ea7acc65f175a47e4aec7431f7f9462a89b721e70e4e0"
Dec 09 12:41:40 crc kubenswrapper[4970]: I1209 12:41:40.747671 4970 scope.go:117] "RemoveContainer" containerID="a8a42395f9dfb4c52018236db650334d890dacc088fadb7d03e14fd76ca0a48b"
Dec 09 12:41:41 crc kubenswrapper[4970]: I1209 12:41:41.603063 4970 generic.go:334] "Generic (PLEG): container finished" podID="92b296b7-1d65-4526-97e9-4738cc459db5" containerID="82fd9441419379845c0fe7d6a3ac40feba844ff7a8e3f606853b8c2b410ee77d" exitCode=0
Dec 09 12:41:41 crc kubenswrapper[4970]: I1209 12:41:41.603121 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fx9ln" event={"ID":"92b296b7-1d65-4526-97e9-4738cc459db5","Type":"ContainerDied","Data":"82fd9441419379845c0fe7d6a3ac40feba844ff7a8e3f606853b8c2b410ee77d"}
Dec 09 12:41:43 crc kubenswrapper[4970]: I1209 12:41:43.622592 4970 generic.go:334] "Generic (PLEG): container finished" podID="92b296b7-1d65-4526-97e9-4738cc459db5" containerID="840a9b55b1e4d2ed28d97edcb026b2c8657eabb1aa05492f68e487db64857066" exitCode=0
Dec 09 12:41:43 crc kubenswrapper[4970]: I1209 12:41:43.622695 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fx9ln" event={"ID":"92b296b7-1d65-4526-97e9-4738cc459db5","Type":"ContainerDied","Data":"840a9b55b1e4d2ed28d97edcb026b2c8657eabb1aa05492f68e487db64857066"}
Dec 09 12:41:44 crc kubenswrapper[4970]: I1209 12:41:44.642899 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fx9ln" event={"ID":"92b296b7-1d65-4526-97e9-4738cc459db5","Type":"ContainerStarted","Data":"ce5755545af02f9ce92efe80ccb0fb72b88e711532d8085d1a548bb7b283bd55"}
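
The two "ContainerDied ... exitCode=0" events above are the catalog pod's init containers (extract-utilities, then extract-content) finishing normally before the long-running registry-server container comes up; the RemoveStaleState entries at 12:41:55 further down name the same three containers. A minimal sketch for confirming that sequence from the API side, assuming the kubernetes Python client and a kubeconfig that can reach this cluster (pod and namespace taken from the log):

    from kubernetes import client, config

    # Assumption: a working kubeconfig for this cluster is available locally.
    config.load_kube_config()
    v1 = client.CoreV1Api()
    pod = v1.read_namespaced_pod("redhat-marketplace-fx9ln", "openshift-marketplace")
    # Init containers run to completion one after another; exit code 0 matches
    # the "container finished ... exitCode=0" PLEG events in the journal.
    for s in pod.status.init_container_statuses or []:
        t = s.state.terminated
        print("init", s.name, "exitCode:", t.exit_code if t else "still running")
    # The regular container (registry-server) should end up Running and ready.
    for s in pod.status.container_statuses or []:
        print("main", s.name, "ready:", s.ready)
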
pod="openshift-marketplace/redhat-marketplace-fx9ln" event={"ID":"92b296b7-1d65-4526-97e9-4738cc459db5","Type":"ContainerStarted","Data":"ce5755545af02f9ce92efe80ccb0fb72b88e711532d8085d1a548bb7b283bd55"} Dec 09 12:41:44 crc kubenswrapper[4970]: I1209 12:41:44.674869 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fx9ln" podStartSLOduration=4.193319525 podStartE2EDuration="6.674846574s" podCreationTimestamp="2025-12-09 12:41:38 +0000 UTC" firstStartedPulling="2025-12-09 12:41:41.605540527 +0000 UTC m=+2114.166021578" lastFinishedPulling="2025-12-09 12:41:44.087067586 +0000 UTC m=+2116.647548627" observedRunningTime="2025-12-09 12:41:44.664619098 +0000 UTC m=+2117.225100159" watchObservedRunningTime="2025-12-09 12:41:44.674846574 +0000 UTC m=+2117.235327635" Dec 09 12:41:44 crc kubenswrapper[4970]: E1209 12:41:44.814947 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:41:46 crc kubenswrapper[4970]: I1209 12:41:46.032153 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-jxqtz"] Dec 09 12:41:46 crc kubenswrapper[4970]: I1209 12:41:46.067881 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-jxqtz"] Dec 09 12:41:47 crc kubenswrapper[4970]: I1209 12:41:47.824993 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68af4d13-2a15-420f-84b9-a0ebec93ac59" path="/var/lib/kubelet/pods/68af4d13-2a15-420f-84b9-a0ebec93ac59/volumes" Dec 09 12:41:48 crc kubenswrapper[4970]: I1209 12:41:48.840011 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fx9ln" Dec 09 12:41:48 crc kubenswrapper[4970]: I1209 12:41:48.841595 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fx9ln" Dec 09 12:41:48 crc kubenswrapper[4970]: I1209 12:41:48.886342 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fx9ln" Dec 09 12:41:49 crc kubenswrapper[4970]: I1209 12:41:49.744813 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fx9ln" Dec 09 12:41:49 crc kubenswrapper[4970]: I1209 12:41:49.803786 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fx9ln"] Dec 09 12:41:51 crc kubenswrapper[4970]: I1209 12:41:51.711531 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fx9ln" podUID="92b296b7-1d65-4526-97e9-4738cc459db5" containerName="registry-server" containerID="cri-o://ce5755545af02f9ce92efe80ccb0fb72b88e711532d8085d1a548bb7b283bd55" gracePeriod=2 Dec 09 12:41:52 crc kubenswrapper[4970]: I1209 12:41:52.288331 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fx9ln" Dec 09 12:41:52 crc kubenswrapper[4970]: I1209 12:41:52.353331 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkcgh\" (UniqueName: \"kubernetes.io/projected/92b296b7-1d65-4526-97e9-4738cc459db5-kube-api-access-wkcgh\") pod \"92b296b7-1d65-4526-97e9-4738cc459db5\" (UID: \"92b296b7-1d65-4526-97e9-4738cc459db5\") " Dec 09 12:41:52 crc kubenswrapper[4970]: I1209 12:41:52.353790 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92b296b7-1d65-4526-97e9-4738cc459db5-utilities\") pod \"92b296b7-1d65-4526-97e9-4738cc459db5\" (UID: \"92b296b7-1d65-4526-97e9-4738cc459db5\") " Dec 09 12:41:52 crc kubenswrapper[4970]: I1209 12:41:52.354107 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92b296b7-1d65-4526-97e9-4738cc459db5-catalog-content\") pod \"92b296b7-1d65-4526-97e9-4738cc459db5\" (UID: \"92b296b7-1d65-4526-97e9-4738cc459db5\") " Dec 09 12:41:52 crc kubenswrapper[4970]: I1209 12:41:52.357549 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92b296b7-1d65-4526-97e9-4738cc459db5-utilities" (OuterVolumeSpecName: "utilities") pod "92b296b7-1d65-4526-97e9-4738cc459db5" (UID: "92b296b7-1d65-4526-97e9-4738cc459db5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:41:52 crc kubenswrapper[4970]: I1209 12:41:52.365614 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92b296b7-1d65-4526-97e9-4738cc459db5-kube-api-access-wkcgh" (OuterVolumeSpecName: "kube-api-access-wkcgh") pod "92b296b7-1d65-4526-97e9-4738cc459db5" (UID: "92b296b7-1d65-4526-97e9-4738cc459db5"). InnerVolumeSpecName "kube-api-access-wkcgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:41:52 crc kubenswrapper[4970]: I1209 12:41:52.401644 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92b296b7-1d65-4526-97e9-4738cc459db5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "92b296b7-1d65-4526-97e9-4738cc459db5" (UID: "92b296b7-1d65-4526-97e9-4738cc459db5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:41:52 crc kubenswrapper[4970]: I1209 12:41:52.456788 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92b296b7-1d65-4526-97e9-4738cc459db5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:41:52 crc kubenswrapper[4970]: I1209 12:41:52.457012 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkcgh\" (UniqueName: \"kubernetes.io/projected/92b296b7-1d65-4526-97e9-4738cc459db5-kube-api-access-wkcgh\") on node \"crc\" DevicePath \"\"" Dec 09 12:41:52 crc kubenswrapper[4970]: I1209 12:41:52.457073 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92b296b7-1d65-4526-97e9-4738cc459db5-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:41:52 crc kubenswrapper[4970]: I1209 12:41:52.723875 4970 generic.go:334] "Generic (PLEG): container finished" podID="92b296b7-1d65-4526-97e9-4738cc459db5" containerID="ce5755545af02f9ce92efe80ccb0fb72b88e711532d8085d1a548bb7b283bd55" exitCode=0 Dec 09 12:41:52 crc kubenswrapper[4970]: I1209 12:41:52.723917 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fx9ln" event={"ID":"92b296b7-1d65-4526-97e9-4738cc459db5","Type":"ContainerDied","Data":"ce5755545af02f9ce92efe80ccb0fb72b88e711532d8085d1a548bb7b283bd55"} Dec 09 12:41:52 crc kubenswrapper[4970]: I1209 12:41:52.723937 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fx9ln" Dec 09 12:41:52 crc kubenswrapper[4970]: I1209 12:41:52.723950 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fx9ln" event={"ID":"92b296b7-1d65-4526-97e9-4738cc459db5","Type":"ContainerDied","Data":"588caac8714099760974a384dcb9586febc24f4e4a41d4c2aaace64e3cd4a5c7"} Dec 09 12:41:52 crc kubenswrapper[4970]: I1209 12:41:52.723970 4970 scope.go:117] "RemoveContainer" containerID="ce5755545af02f9ce92efe80ccb0fb72b88e711532d8085d1a548bb7b283bd55" Dec 09 12:41:52 crc kubenswrapper[4970]: I1209 12:41:52.744658 4970 scope.go:117] "RemoveContainer" containerID="840a9b55b1e4d2ed28d97edcb026b2c8657eabb1aa05492f68e487db64857066" Dec 09 12:41:52 crc kubenswrapper[4970]: I1209 12:41:52.771872 4970 scope.go:117] "RemoveContainer" containerID="82fd9441419379845c0fe7d6a3ac40feba844ff7a8e3f606853b8c2b410ee77d" Dec 09 12:41:52 crc kubenswrapper[4970]: I1209 12:41:52.778288 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fx9ln"] Dec 09 12:41:52 crc kubenswrapper[4970]: I1209 12:41:52.790433 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fx9ln"] Dec 09 12:41:52 crc kubenswrapper[4970]: I1209 12:41:52.827469 4970 scope.go:117] "RemoveContainer" containerID="ce5755545af02f9ce92efe80ccb0fb72b88e711532d8085d1a548bb7b283bd55" Dec 09 12:41:52 crc kubenswrapper[4970]: E1209 12:41:52.827867 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce5755545af02f9ce92efe80ccb0fb72b88e711532d8085d1a548bb7b283bd55\": container with ID starting with ce5755545af02f9ce92efe80ccb0fb72b88e711532d8085d1a548bb7b283bd55 not found: ID does not exist" containerID="ce5755545af02f9ce92efe80ccb0fb72b88e711532d8085d1a548bb7b283bd55" Dec 09 12:41:52 crc kubenswrapper[4970]: I1209 12:41:52.827909 4970 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce5755545af02f9ce92efe80ccb0fb72b88e711532d8085d1a548bb7b283bd55"} err="failed to get container status \"ce5755545af02f9ce92efe80ccb0fb72b88e711532d8085d1a548bb7b283bd55\": rpc error: code = NotFound desc = could not find container \"ce5755545af02f9ce92efe80ccb0fb72b88e711532d8085d1a548bb7b283bd55\": container with ID starting with ce5755545af02f9ce92efe80ccb0fb72b88e711532d8085d1a548bb7b283bd55 not found: ID does not exist" Dec 09 12:41:52 crc kubenswrapper[4970]: I1209 12:41:52.827935 4970 scope.go:117] "RemoveContainer" containerID="840a9b55b1e4d2ed28d97edcb026b2c8657eabb1aa05492f68e487db64857066" Dec 09 12:41:52 crc kubenswrapper[4970]: E1209 12:41:52.828377 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"840a9b55b1e4d2ed28d97edcb026b2c8657eabb1aa05492f68e487db64857066\": container with ID starting with 840a9b55b1e4d2ed28d97edcb026b2c8657eabb1aa05492f68e487db64857066 not found: ID does not exist" containerID="840a9b55b1e4d2ed28d97edcb026b2c8657eabb1aa05492f68e487db64857066" Dec 09 12:41:52 crc kubenswrapper[4970]: I1209 12:41:52.828409 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"840a9b55b1e4d2ed28d97edcb026b2c8657eabb1aa05492f68e487db64857066"} err="failed to get container status \"840a9b55b1e4d2ed28d97edcb026b2c8657eabb1aa05492f68e487db64857066\": rpc error: code = NotFound desc = could not find container \"840a9b55b1e4d2ed28d97edcb026b2c8657eabb1aa05492f68e487db64857066\": container with ID starting with 840a9b55b1e4d2ed28d97edcb026b2c8657eabb1aa05492f68e487db64857066 not found: ID does not exist" Dec 09 12:41:52 crc kubenswrapper[4970]: I1209 12:41:52.828430 4970 scope.go:117] "RemoveContainer" containerID="82fd9441419379845c0fe7d6a3ac40feba844ff7a8e3f606853b8c2b410ee77d" Dec 09 12:41:52 crc kubenswrapper[4970]: E1209 12:41:52.831327 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82fd9441419379845c0fe7d6a3ac40feba844ff7a8e3f606853b8c2b410ee77d\": container with ID starting with 82fd9441419379845c0fe7d6a3ac40feba844ff7a8e3f606853b8c2b410ee77d not found: ID does not exist" containerID="82fd9441419379845c0fe7d6a3ac40feba844ff7a8e3f606853b8c2b410ee77d" Dec 09 12:41:52 crc kubenswrapper[4970]: I1209 12:41:52.831368 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82fd9441419379845c0fe7d6a3ac40feba844ff7a8e3f606853b8c2b410ee77d"} err="failed to get container status \"82fd9441419379845c0fe7d6a3ac40feba844ff7a8e3f606853b8c2b410ee77d\": rpc error: code = NotFound desc = could not find container \"82fd9441419379845c0fe7d6a3ac40feba844ff7a8e3f606853b8c2b410ee77d\": container with ID starting with 82fd9441419379845c0fe7d6a3ac40feba844ff7a8e3f606853b8c2b410ee77d not found: ID does not exist" Dec 09 12:41:53 crc kubenswrapper[4970]: E1209 12:41:53.814439 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:41:53 crc kubenswrapper[4970]: I1209 12:41:53.824641 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="92b296b7-1d65-4526-97e9-4738cc459db5" path="/var/lib/kubelet/pods/92b296b7-1d65-4526-97e9-4738cc459db5/volumes" Dec 09 12:41:55 crc kubenswrapper[4970]: I1209 12:41:55.078363 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xm2g5"] Dec 09 12:41:55 crc kubenswrapper[4970]: E1209 12:41:55.079260 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92b296b7-1d65-4526-97e9-4738cc459db5" containerName="extract-utilities" Dec 09 12:41:55 crc kubenswrapper[4970]: I1209 12:41:55.079276 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="92b296b7-1d65-4526-97e9-4738cc459db5" containerName="extract-utilities" Dec 09 12:41:55 crc kubenswrapper[4970]: E1209 12:41:55.079292 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92b296b7-1d65-4526-97e9-4738cc459db5" containerName="extract-content" Dec 09 12:41:55 crc kubenswrapper[4970]: I1209 12:41:55.079298 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="92b296b7-1d65-4526-97e9-4738cc459db5" containerName="extract-content" Dec 09 12:41:55 crc kubenswrapper[4970]: E1209 12:41:55.079342 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92b296b7-1d65-4526-97e9-4738cc459db5" containerName="registry-server" Dec 09 12:41:55 crc kubenswrapper[4970]: I1209 12:41:55.079350 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="92b296b7-1d65-4526-97e9-4738cc459db5" containerName="registry-server" Dec 09 12:41:55 crc kubenswrapper[4970]: I1209 12:41:55.079576 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="92b296b7-1d65-4526-97e9-4738cc459db5" containerName="registry-server" Dec 09 12:41:55 crc kubenswrapper[4970]: I1209 12:41:55.087659 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xm2g5" Dec 09 12:41:55 crc kubenswrapper[4970]: I1209 12:41:55.092091 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xm2g5"] Dec 09 12:41:55 crc kubenswrapper[4970]: I1209 12:41:55.242388 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa9529c4-3904-4691-a0f8-9b9f988499c8-utilities\") pod \"redhat-operators-xm2g5\" (UID: \"fa9529c4-3904-4691-a0f8-9b9f988499c8\") " pod="openshift-marketplace/redhat-operators-xm2g5" Dec 09 12:41:55 crc kubenswrapper[4970]: I1209 12:41:55.242866 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa9529c4-3904-4691-a0f8-9b9f988499c8-catalog-content\") pod \"redhat-operators-xm2g5\" (UID: \"fa9529c4-3904-4691-a0f8-9b9f988499c8\") " pod="openshift-marketplace/redhat-operators-xm2g5" Dec 09 12:41:55 crc kubenswrapper[4970]: I1209 12:41:55.242940 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzgk4\" (UniqueName: \"kubernetes.io/projected/fa9529c4-3904-4691-a0f8-9b9f988499c8-kube-api-access-xzgk4\") pod \"redhat-operators-xm2g5\" (UID: \"fa9529c4-3904-4691-a0f8-9b9f988499c8\") " pod="openshift-marketplace/redhat-operators-xm2g5" Dec 09 12:41:55 crc kubenswrapper[4970]: I1209 12:41:55.345055 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa9529c4-3904-4691-a0f8-9b9f988499c8-catalog-content\") pod \"redhat-operators-xm2g5\" (UID: \"fa9529c4-3904-4691-a0f8-9b9f988499c8\") " pod="openshift-marketplace/redhat-operators-xm2g5" Dec 09 12:41:55 crc kubenswrapper[4970]: I1209 12:41:55.345119 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzgk4\" (UniqueName: \"kubernetes.io/projected/fa9529c4-3904-4691-a0f8-9b9f988499c8-kube-api-access-xzgk4\") pod \"redhat-operators-xm2g5\" (UID: \"fa9529c4-3904-4691-a0f8-9b9f988499c8\") " pod="openshift-marketplace/redhat-operators-xm2g5" Dec 09 12:41:55 crc kubenswrapper[4970]: I1209 12:41:55.345457 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa9529c4-3904-4691-a0f8-9b9f988499c8-utilities\") pod \"redhat-operators-xm2g5\" (UID: \"fa9529c4-3904-4691-a0f8-9b9f988499c8\") " pod="openshift-marketplace/redhat-operators-xm2g5" Dec 09 12:41:55 crc kubenswrapper[4970]: I1209 12:41:55.345659 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa9529c4-3904-4691-a0f8-9b9f988499c8-catalog-content\") pod \"redhat-operators-xm2g5\" (UID: \"fa9529c4-3904-4691-a0f8-9b9f988499c8\") " pod="openshift-marketplace/redhat-operators-xm2g5" Dec 09 12:41:55 crc kubenswrapper[4970]: I1209 12:41:55.345875 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa9529c4-3904-4691-a0f8-9b9f988499c8-utilities\") pod \"redhat-operators-xm2g5\" (UID: \"fa9529c4-3904-4691-a0f8-9b9f988499c8\") " pod="openshift-marketplace/redhat-operators-xm2g5" Dec 09 12:41:55 crc kubenswrapper[4970]: I1209 12:41:55.366655 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xzgk4\" (UniqueName: \"kubernetes.io/projected/fa9529c4-3904-4691-a0f8-9b9f988499c8-kube-api-access-xzgk4\") pod \"redhat-operators-xm2g5\" (UID: \"fa9529c4-3904-4691-a0f8-9b9f988499c8\") " pod="openshift-marketplace/redhat-operators-xm2g5" Dec 09 12:41:55 crc kubenswrapper[4970]: I1209 12:41:55.409275 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xm2g5" Dec 09 12:41:55 crc kubenswrapper[4970]: I1209 12:41:55.948339 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xm2g5"] Dec 09 12:41:56 crc kubenswrapper[4970]: I1209 12:41:56.767836 4970 generic.go:334] "Generic (PLEG): container finished" podID="fa9529c4-3904-4691-a0f8-9b9f988499c8" containerID="8875b349e6b85e471558725ad344459371d0c0e8f19149ce0f871ce7a6890ca8" exitCode=0 Dec 09 12:41:56 crc kubenswrapper[4970]: I1209 12:41:56.767951 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xm2g5" event={"ID":"fa9529c4-3904-4691-a0f8-9b9f988499c8","Type":"ContainerDied","Data":"8875b349e6b85e471558725ad344459371d0c0e8f19149ce0f871ce7a6890ca8"} Dec 09 12:41:56 crc kubenswrapper[4970]: I1209 12:41:56.768146 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xm2g5" event={"ID":"fa9529c4-3904-4691-a0f8-9b9f988499c8","Type":"ContainerStarted","Data":"c5fdc85e7936f42aad148cd4ed96820b4425310154cbf4c351bdb9b7f56f1ff7"} Dec 09 12:41:57 crc kubenswrapper[4970]: I1209 12:41:57.779775 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xm2g5" event={"ID":"fa9529c4-3904-4691-a0f8-9b9f988499c8","Type":"ContainerStarted","Data":"6e5e03cbacca10846a89280f4a7b80953d8e1d646fb5c8342740ec252b5fe202"} Dec 09 12:41:58 crc kubenswrapper[4970]: E1209 12:41:58.820776 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:42:01 crc kubenswrapper[4970]: I1209 12:42:01.844338 4970 generic.go:334] "Generic (PLEG): container finished" podID="fa9529c4-3904-4691-a0f8-9b9f988499c8" containerID="6e5e03cbacca10846a89280f4a7b80953d8e1d646fb5c8342740ec252b5fe202" exitCode=0 Dec 09 12:42:01 crc kubenswrapper[4970]: I1209 12:42:01.847512 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xm2g5" event={"ID":"fa9529c4-3904-4691-a0f8-9b9f988499c8","Type":"ContainerDied","Data":"6e5e03cbacca10846a89280f4a7b80953d8e1d646fb5c8342740ec252b5fe202"} Dec 09 12:42:02 crc kubenswrapper[4970]: I1209 12:42:02.855754 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xm2g5" event={"ID":"fa9529c4-3904-4691-a0f8-9b9f988499c8","Type":"ContainerStarted","Data":"9cbe827c9437f2436a1c4eaa0696f981b1a8dfb0d02f840f86619f5a87e5a23a"} Dec 09 12:42:02 crc kubenswrapper[4970]: I1209 12:42:02.875163 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xm2g5" podStartSLOduration=2.401712228 podStartE2EDuration="7.875145711s" podCreationTimestamp="2025-12-09 12:41:55 +0000 UTC" firstStartedPulling="2025-12-09 12:41:56.769959951 +0000 UTC m=+2129.330441002" lastFinishedPulling="2025-12-09 
12:42:02.243393434 +0000 UTC m=+2134.803874485" observedRunningTime="2025-12-09 12:42:02.871815881 +0000 UTC m=+2135.432296942" watchObservedRunningTime="2025-12-09 12:42:02.875145711 +0000 UTC m=+2135.435626762" Dec 09 12:42:04 crc kubenswrapper[4970]: I1209 12:42:04.151683 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jqcbj"] Dec 09 12:42:04 crc kubenswrapper[4970]: I1209 12:42:04.155590 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jqcbj" Dec 09 12:42:04 crc kubenswrapper[4970]: I1209 12:42:04.190769 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jqcbj"] Dec 09 12:42:04 crc kubenswrapper[4970]: I1209 12:42:04.286658 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76534e73-ef3c-4b51-8136-768fdc2e4275-catalog-content\") pod \"certified-operators-jqcbj\" (UID: \"76534e73-ef3c-4b51-8136-768fdc2e4275\") " pod="openshift-marketplace/certified-operators-jqcbj" Dec 09 12:42:04 crc kubenswrapper[4970]: I1209 12:42:04.286881 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt8m9\" (UniqueName: \"kubernetes.io/projected/76534e73-ef3c-4b51-8136-768fdc2e4275-kube-api-access-mt8m9\") pod \"certified-operators-jqcbj\" (UID: \"76534e73-ef3c-4b51-8136-768fdc2e4275\") " pod="openshift-marketplace/certified-operators-jqcbj" Dec 09 12:42:04 crc kubenswrapper[4970]: I1209 12:42:04.286972 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76534e73-ef3c-4b51-8136-768fdc2e4275-utilities\") pod \"certified-operators-jqcbj\" (UID: \"76534e73-ef3c-4b51-8136-768fdc2e4275\") " pod="openshift-marketplace/certified-operators-jqcbj" Dec 09 12:42:04 crc kubenswrapper[4970]: I1209 12:42:04.388654 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt8m9\" (UniqueName: \"kubernetes.io/projected/76534e73-ef3c-4b51-8136-768fdc2e4275-kube-api-access-mt8m9\") pod \"certified-operators-jqcbj\" (UID: \"76534e73-ef3c-4b51-8136-768fdc2e4275\") " pod="openshift-marketplace/certified-operators-jqcbj" Dec 09 12:42:04 crc kubenswrapper[4970]: I1209 12:42:04.388778 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76534e73-ef3c-4b51-8136-768fdc2e4275-utilities\") pod \"certified-operators-jqcbj\" (UID: \"76534e73-ef3c-4b51-8136-768fdc2e4275\") " pod="openshift-marketplace/certified-operators-jqcbj" Dec 09 12:42:04 crc kubenswrapper[4970]: I1209 12:42:04.389284 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76534e73-ef3c-4b51-8136-768fdc2e4275-utilities\") pod \"certified-operators-jqcbj\" (UID: \"76534e73-ef3c-4b51-8136-768fdc2e4275\") " pod="openshift-marketplace/certified-operators-jqcbj" Dec 09 12:42:04 crc kubenswrapper[4970]: I1209 12:42:04.389398 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76534e73-ef3c-4b51-8136-768fdc2e4275-catalog-content\") pod \"certified-operators-jqcbj\" (UID: \"76534e73-ef3c-4b51-8136-768fdc2e4275\") " 
pod="openshift-marketplace/certified-operators-jqcbj" Dec 09 12:42:04 crc kubenswrapper[4970]: I1209 12:42:04.389627 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76534e73-ef3c-4b51-8136-768fdc2e4275-catalog-content\") pod \"certified-operators-jqcbj\" (UID: \"76534e73-ef3c-4b51-8136-768fdc2e4275\") " pod="openshift-marketplace/certified-operators-jqcbj" Dec 09 12:42:04 crc kubenswrapper[4970]: I1209 12:42:04.410631 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt8m9\" (UniqueName: \"kubernetes.io/projected/76534e73-ef3c-4b51-8136-768fdc2e4275-kube-api-access-mt8m9\") pod \"certified-operators-jqcbj\" (UID: \"76534e73-ef3c-4b51-8136-768fdc2e4275\") " pod="openshift-marketplace/certified-operators-jqcbj" Dec 09 12:42:04 crc kubenswrapper[4970]: I1209 12:42:04.489589 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jqcbj" Dec 09 12:42:05 crc kubenswrapper[4970]: I1209 12:42:05.077946 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jqcbj"] Dec 09 12:42:05 crc kubenswrapper[4970]: W1209 12:42:05.083560 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76534e73_ef3c_4b51_8136_768fdc2e4275.slice/crio-44eeb00be7cb2c81797f7963c0bd83e21eb070db9b6756251ca3c89ee08a3469 WatchSource:0}: Error finding container 44eeb00be7cb2c81797f7963c0bd83e21eb070db9b6756251ca3c89ee08a3469: Status 404 returned error can't find the container with id 44eeb00be7cb2c81797f7963c0bd83e21eb070db9b6756251ca3c89ee08a3469 Dec 09 12:42:05 crc kubenswrapper[4970]: I1209 12:42:05.409399 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xm2g5" Dec 09 12:42:05 crc kubenswrapper[4970]: I1209 12:42:05.409776 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xm2g5" Dec 09 12:42:05 crc kubenswrapper[4970]: I1209 12:42:05.897031 4970 generic.go:334] "Generic (PLEG): container finished" podID="76534e73-ef3c-4b51-8136-768fdc2e4275" containerID="fe93b2bc2f3b746042d3ade1a716c3ea5f4b27eabf00b06e7649e75fb78c6ce3" exitCode=0 Dec 09 12:42:05 crc kubenswrapper[4970]: I1209 12:42:05.897359 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqcbj" event={"ID":"76534e73-ef3c-4b51-8136-768fdc2e4275","Type":"ContainerDied","Data":"fe93b2bc2f3b746042d3ade1a716c3ea5f4b27eabf00b06e7649e75fb78c6ce3"} Dec 09 12:42:05 crc kubenswrapper[4970]: I1209 12:42:05.897391 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqcbj" event={"ID":"76534e73-ef3c-4b51-8136-768fdc2e4275","Type":"ContainerStarted","Data":"44eeb00be7cb2c81797f7963c0bd83e21eb070db9b6756251ca3c89ee08a3469"} Dec 09 12:42:06 crc kubenswrapper[4970]: I1209 12:42:06.468546 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xm2g5" podUID="fa9529c4-3904-4691-a0f8-9b9f988499c8" containerName="registry-server" probeResult="failure" output=< Dec 09 12:42:06 crc kubenswrapper[4970]: timeout: failed to connect service ":50051" within 1s Dec 09 12:42:06 crc kubenswrapper[4970]: > Dec 09 12:42:06 crc kubenswrapper[4970]: I1209 12:42:06.915203 4970 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-jqcbj" event={"ID":"76534e73-ef3c-4b51-8136-768fdc2e4275","Type":"ContainerStarted","Data":"2448c0568f81f72fd436fb2f9ae0567a3b006bd2f02224f9c5890fe02fa4fc50"} Dec 09 12:42:08 crc kubenswrapper[4970]: E1209 12:42:08.816021 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:42:08 crc kubenswrapper[4970]: I1209 12:42:08.936673 4970 generic.go:334] "Generic (PLEG): container finished" podID="76534e73-ef3c-4b51-8136-768fdc2e4275" containerID="2448c0568f81f72fd436fb2f9ae0567a3b006bd2f02224f9c5890fe02fa4fc50" exitCode=0 Dec 09 12:42:08 crc kubenswrapper[4970]: I1209 12:42:08.936768 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqcbj" event={"ID":"76534e73-ef3c-4b51-8136-768fdc2e4275","Type":"ContainerDied","Data":"2448c0568f81f72fd436fb2f9ae0567a3b006bd2f02224f9c5890fe02fa4fc50"} Dec 09 12:42:09 crc kubenswrapper[4970]: I1209 12:42:09.948511 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqcbj" event={"ID":"76534e73-ef3c-4b51-8136-768fdc2e4275","Type":"ContainerStarted","Data":"5d4fe3c7a17c59433505f5c5a3e61cd93c99ade50d28fea0027b113c518014d2"} Dec 09 12:42:09 crc kubenswrapper[4970]: I1209 12:42:09.968414 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jqcbj" podStartSLOduration=2.555633328 podStartE2EDuration="5.968392299s" podCreationTimestamp="2025-12-09 12:42:04 +0000 UTC" firstStartedPulling="2025-12-09 12:42:05.900864519 +0000 UTC m=+2138.461345570" lastFinishedPulling="2025-12-09 12:42:09.3136235 +0000 UTC m=+2141.874104541" observedRunningTime="2025-12-09 12:42:09.96547412 +0000 UTC m=+2142.525955171" watchObservedRunningTime="2025-12-09 12:42:09.968392299 +0000 UTC m=+2142.528873350" Dec 09 12:42:12 crc kubenswrapper[4970]: E1209 12:42:12.815149 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:42:14 crc kubenswrapper[4970]: I1209 12:42:14.490018 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jqcbj" Dec 09 12:42:14 crc kubenswrapper[4970]: I1209 12:42:14.490380 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jqcbj" Dec 09 12:42:14 crc kubenswrapper[4970]: I1209 12:42:14.554384 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jqcbj" Dec 09 12:42:15 crc kubenswrapper[4970]: I1209 12:42:15.051971 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jqcbj" Dec 09 12:42:15 crc kubenswrapper[4970]: I1209 12:42:15.105616 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jqcbj"] Dec 09 12:42:15 crc kubenswrapper[4970]: I1209 12:42:15.466107 4970 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xm2g5" Dec 09 12:42:15 crc kubenswrapper[4970]: I1209 12:42:15.520083 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xm2g5" Dec 09 12:42:17 crc kubenswrapper[4970]: I1209 12:42:17.017680 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jqcbj" podUID="76534e73-ef3c-4b51-8136-768fdc2e4275" containerName="registry-server" containerID="cri-o://5d4fe3c7a17c59433505f5c5a3e61cd93c99ade50d28fea0027b113c518014d2" gracePeriod=2 Dec 09 12:42:17 crc kubenswrapper[4970]: I1209 12:42:17.195671 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xm2g5"] Dec 09 12:42:17 crc kubenswrapper[4970]: I1209 12:42:17.195947 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xm2g5" podUID="fa9529c4-3904-4691-a0f8-9b9f988499c8" containerName="registry-server" containerID="cri-o://9cbe827c9437f2436a1c4eaa0696f981b1a8dfb0d02f840f86619f5a87e5a23a" gracePeriod=2 Dec 09 12:42:17 crc kubenswrapper[4970]: I1209 12:42:17.629826 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jqcbj" Dec 09 12:42:17 crc kubenswrapper[4970]: I1209 12:42:17.736281 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76534e73-ef3c-4b51-8136-768fdc2e4275-utilities\") pod \"76534e73-ef3c-4b51-8136-768fdc2e4275\" (UID: \"76534e73-ef3c-4b51-8136-768fdc2e4275\") " Dec 09 12:42:17 crc kubenswrapper[4970]: I1209 12:42:17.736599 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mt8m9\" (UniqueName: \"kubernetes.io/projected/76534e73-ef3c-4b51-8136-768fdc2e4275-kube-api-access-mt8m9\") pod \"76534e73-ef3c-4b51-8136-768fdc2e4275\" (UID: \"76534e73-ef3c-4b51-8136-768fdc2e4275\") " Dec 09 12:42:17 crc kubenswrapper[4970]: I1209 12:42:17.736853 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76534e73-ef3c-4b51-8136-768fdc2e4275-catalog-content\") pod \"76534e73-ef3c-4b51-8136-768fdc2e4275\" (UID: \"76534e73-ef3c-4b51-8136-768fdc2e4275\") " Dec 09 12:42:17 crc kubenswrapper[4970]: I1209 12:42:17.737160 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76534e73-ef3c-4b51-8136-768fdc2e4275-utilities" (OuterVolumeSpecName: "utilities") pod "76534e73-ef3c-4b51-8136-768fdc2e4275" (UID: "76534e73-ef3c-4b51-8136-768fdc2e4275"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:42:17 crc kubenswrapper[4970]: I1209 12:42:17.737696 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76534e73-ef3c-4b51-8136-768fdc2e4275-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:42:17 crc kubenswrapper[4970]: I1209 12:42:17.742645 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76534e73-ef3c-4b51-8136-768fdc2e4275-kube-api-access-mt8m9" (OuterVolumeSpecName: "kube-api-access-mt8m9") pod "76534e73-ef3c-4b51-8136-768fdc2e4275" (UID: "76534e73-ef3c-4b51-8136-768fdc2e4275"). 
InnerVolumeSpecName "kube-api-access-mt8m9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:42:17 crc kubenswrapper[4970]: I1209 12:42:17.785610 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xm2g5" Dec 09 12:42:17 crc kubenswrapper[4970]: I1209 12:42:17.794767 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76534e73-ef3c-4b51-8136-768fdc2e4275-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76534e73-ef3c-4b51-8136-768fdc2e4275" (UID: "76534e73-ef3c-4b51-8136-768fdc2e4275"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:42:17 crc kubenswrapper[4970]: I1209 12:42:17.839072 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa9529c4-3904-4691-a0f8-9b9f988499c8-catalog-content\") pod \"fa9529c4-3904-4691-a0f8-9b9f988499c8\" (UID: \"fa9529c4-3904-4691-a0f8-9b9f988499c8\") " Dec 09 12:42:17 crc kubenswrapper[4970]: I1209 12:42:17.839141 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa9529c4-3904-4691-a0f8-9b9f988499c8-utilities\") pod \"fa9529c4-3904-4691-a0f8-9b9f988499c8\" (UID: \"fa9529c4-3904-4691-a0f8-9b9f988499c8\") " Dec 09 12:42:17 crc kubenswrapper[4970]: I1209 12:42:17.839204 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzgk4\" (UniqueName: \"kubernetes.io/projected/fa9529c4-3904-4691-a0f8-9b9f988499c8-kube-api-access-xzgk4\") pod \"fa9529c4-3904-4691-a0f8-9b9f988499c8\" (UID: \"fa9529c4-3904-4691-a0f8-9b9f988499c8\") " Dec 09 12:42:17 crc kubenswrapper[4970]: I1209 12:42:17.839858 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa9529c4-3904-4691-a0f8-9b9f988499c8-utilities" (OuterVolumeSpecName: "utilities") pod "fa9529c4-3904-4691-a0f8-9b9f988499c8" (UID: "fa9529c4-3904-4691-a0f8-9b9f988499c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:42:17 crc kubenswrapper[4970]: I1209 12:42:17.840369 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mt8m9\" (UniqueName: \"kubernetes.io/projected/76534e73-ef3c-4b51-8136-768fdc2e4275-kube-api-access-mt8m9\") on node \"crc\" DevicePath \"\"" Dec 09 12:42:17 crc kubenswrapper[4970]: I1209 12:42:17.840463 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76534e73-ef3c-4b51-8136-768fdc2e4275-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:42:17 crc kubenswrapper[4970]: I1209 12:42:17.840530 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa9529c4-3904-4691-a0f8-9b9f988499c8-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:42:17 crc kubenswrapper[4970]: I1209 12:42:17.843616 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa9529c4-3904-4691-a0f8-9b9f988499c8-kube-api-access-xzgk4" (OuterVolumeSpecName: "kube-api-access-xzgk4") pod "fa9529c4-3904-4691-a0f8-9b9f988499c8" (UID: "fa9529c4-3904-4691-a0f8-9b9f988499c8"). InnerVolumeSpecName "kube-api-access-xzgk4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:42:17 crc kubenswrapper[4970]: I1209 12:42:17.943026 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzgk4\" (UniqueName: \"kubernetes.io/projected/fa9529c4-3904-4691-a0f8-9b9f988499c8-kube-api-access-xzgk4\") on node \"crc\" DevicePath \"\"" Dec 09 12:42:17 crc kubenswrapper[4970]: I1209 12:42:17.943887 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa9529c4-3904-4691-a0f8-9b9f988499c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fa9529c4-3904-4691-a0f8-9b9f988499c8" (UID: "fa9529c4-3904-4691-a0f8-9b9f988499c8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:42:18 crc kubenswrapper[4970]: I1209 12:42:18.031635 4970 generic.go:334] "Generic (PLEG): container finished" podID="fa9529c4-3904-4691-a0f8-9b9f988499c8" containerID="9cbe827c9437f2436a1c4eaa0696f981b1a8dfb0d02f840f86619f5a87e5a23a" exitCode=0 Dec 09 12:42:18 crc kubenswrapper[4970]: I1209 12:42:18.031696 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xm2g5" Dec 09 12:42:18 crc kubenswrapper[4970]: I1209 12:42:18.031704 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xm2g5" event={"ID":"fa9529c4-3904-4691-a0f8-9b9f988499c8","Type":"ContainerDied","Data":"9cbe827c9437f2436a1c4eaa0696f981b1a8dfb0d02f840f86619f5a87e5a23a"} Dec 09 12:42:18 crc kubenswrapper[4970]: I1209 12:42:18.031773 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xm2g5" event={"ID":"fa9529c4-3904-4691-a0f8-9b9f988499c8","Type":"ContainerDied","Data":"c5fdc85e7936f42aad148cd4ed96820b4425310154cbf4c351bdb9b7f56f1ff7"} Dec 09 12:42:18 crc kubenswrapper[4970]: I1209 12:42:18.031793 4970 scope.go:117] "RemoveContainer" containerID="9cbe827c9437f2436a1c4eaa0696f981b1a8dfb0d02f840f86619f5a87e5a23a" Dec 09 12:42:18 crc kubenswrapper[4970]: I1209 12:42:18.034151 4970 generic.go:334] "Generic (PLEG): container finished" podID="76534e73-ef3c-4b51-8136-768fdc2e4275" containerID="5d4fe3c7a17c59433505f5c5a3e61cd93c99ade50d28fea0027b113c518014d2" exitCode=0 Dec 09 12:42:18 crc kubenswrapper[4970]: I1209 12:42:18.034194 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqcbj" event={"ID":"76534e73-ef3c-4b51-8136-768fdc2e4275","Type":"ContainerDied","Data":"5d4fe3c7a17c59433505f5c5a3e61cd93c99ade50d28fea0027b113c518014d2"} Dec 09 12:42:18 crc kubenswrapper[4970]: I1209 12:42:18.034224 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqcbj" event={"ID":"76534e73-ef3c-4b51-8136-768fdc2e4275","Type":"ContainerDied","Data":"44eeb00be7cb2c81797f7963c0bd83e21eb070db9b6756251ca3c89ee08a3469"} Dec 09 12:42:18 crc kubenswrapper[4970]: I1209 12:42:18.034226 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jqcbj" Dec 09 12:42:18 crc kubenswrapper[4970]: I1209 12:42:18.045519 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa9529c4-3904-4691-a0f8-9b9f988499c8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:42:18 crc kubenswrapper[4970]: I1209 12:42:18.062610 4970 scope.go:117] "RemoveContainer" containerID="6e5e03cbacca10846a89280f4a7b80953d8e1d646fb5c8342740ec252b5fe202" Dec 09 12:42:18 crc kubenswrapper[4970]: I1209 12:42:18.074817 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xm2g5"] Dec 09 12:42:18 crc kubenswrapper[4970]: I1209 12:42:18.104656 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xm2g5"] Dec 09 12:42:18 crc kubenswrapper[4970]: I1209 12:42:18.108509 4970 scope.go:117] "RemoveContainer" containerID="8875b349e6b85e471558725ad344459371d0c0e8f19149ce0f871ce7a6890ca8" Dec 09 12:42:18 crc kubenswrapper[4970]: I1209 12:42:18.116723 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jqcbj"] Dec 09 12:42:18 crc kubenswrapper[4970]: I1209 12:42:18.127557 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jqcbj"] Dec 09 12:42:18 crc kubenswrapper[4970]: I1209 12:42:18.137286 4970 scope.go:117] "RemoveContainer" containerID="9cbe827c9437f2436a1c4eaa0696f981b1a8dfb0d02f840f86619f5a87e5a23a" Dec 09 12:42:18 crc kubenswrapper[4970]: E1209 12:42:18.142859 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cbe827c9437f2436a1c4eaa0696f981b1a8dfb0d02f840f86619f5a87e5a23a\": container with ID starting with 9cbe827c9437f2436a1c4eaa0696f981b1a8dfb0d02f840f86619f5a87e5a23a not found: ID does not exist" containerID="9cbe827c9437f2436a1c4eaa0696f981b1a8dfb0d02f840f86619f5a87e5a23a" Dec 09 12:42:18 crc kubenswrapper[4970]: I1209 12:42:18.142906 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cbe827c9437f2436a1c4eaa0696f981b1a8dfb0d02f840f86619f5a87e5a23a"} err="failed to get container status \"9cbe827c9437f2436a1c4eaa0696f981b1a8dfb0d02f840f86619f5a87e5a23a\": rpc error: code = NotFound desc = could not find container \"9cbe827c9437f2436a1c4eaa0696f981b1a8dfb0d02f840f86619f5a87e5a23a\": container with ID starting with 9cbe827c9437f2436a1c4eaa0696f981b1a8dfb0d02f840f86619f5a87e5a23a not found: ID does not exist" Dec 09 12:42:18 crc kubenswrapper[4970]: I1209 12:42:18.142937 4970 scope.go:117] "RemoveContainer" containerID="6e5e03cbacca10846a89280f4a7b80953d8e1d646fb5c8342740ec252b5fe202" Dec 09 12:42:18 crc kubenswrapper[4970]: E1209 12:42:18.143393 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e5e03cbacca10846a89280f4a7b80953d8e1d646fb5c8342740ec252b5fe202\": container with ID starting with 6e5e03cbacca10846a89280f4a7b80953d8e1d646fb5c8342740ec252b5fe202 not found: ID does not exist" containerID="6e5e03cbacca10846a89280f4a7b80953d8e1d646fb5c8342740ec252b5fe202" Dec 09 12:42:18 crc kubenswrapper[4970]: I1209 12:42:18.143417 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e5e03cbacca10846a89280f4a7b80953d8e1d646fb5c8342740ec252b5fe202"} err="failed to get container status 
\"6e5e03cbacca10846a89280f4a7b80953d8e1d646fb5c8342740ec252b5fe202\": rpc error: code = NotFound desc = could not find container \"6e5e03cbacca10846a89280f4a7b80953d8e1d646fb5c8342740ec252b5fe202\": container with ID starting with 6e5e03cbacca10846a89280f4a7b80953d8e1d646fb5c8342740ec252b5fe202 not found: ID does not exist" Dec 09 12:42:18 crc kubenswrapper[4970]: I1209 12:42:18.143442 4970 scope.go:117] "RemoveContainer" containerID="8875b349e6b85e471558725ad344459371d0c0e8f19149ce0f871ce7a6890ca8" Dec 09 12:42:18 crc kubenswrapper[4970]: E1209 12:42:18.143757 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8875b349e6b85e471558725ad344459371d0c0e8f19149ce0f871ce7a6890ca8\": container with ID starting with 8875b349e6b85e471558725ad344459371d0c0e8f19149ce0f871ce7a6890ca8 not found: ID does not exist" containerID="8875b349e6b85e471558725ad344459371d0c0e8f19149ce0f871ce7a6890ca8" Dec 09 12:42:18 crc kubenswrapper[4970]: I1209 12:42:18.143796 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8875b349e6b85e471558725ad344459371d0c0e8f19149ce0f871ce7a6890ca8"} err="failed to get container status \"8875b349e6b85e471558725ad344459371d0c0e8f19149ce0f871ce7a6890ca8\": rpc error: code = NotFound desc = could not find container \"8875b349e6b85e471558725ad344459371d0c0e8f19149ce0f871ce7a6890ca8\": container with ID starting with 8875b349e6b85e471558725ad344459371d0c0e8f19149ce0f871ce7a6890ca8 not found: ID does not exist" Dec 09 12:42:18 crc kubenswrapper[4970]: I1209 12:42:18.143822 4970 scope.go:117] "RemoveContainer" containerID="5d4fe3c7a17c59433505f5c5a3e61cd93c99ade50d28fea0027b113c518014d2" Dec 09 12:42:18 crc kubenswrapper[4970]: I1209 12:42:18.199892 4970 scope.go:117] "RemoveContainer" containerID="2448c0568f81f72fd436fb2f9ae0567a3b006bd2f02224f9c5890fe02fa4fc50" Dec 09 12:42:18 crc kubenswrapper[4970]: I1209 12:42:18.223269 4970 scope.go:117] "RemoveContainer" containerID="fe93b2bc2f3b746042d3ade1a716c3ea5f4b27eabf00b06e7649e75fb78c6ce3" Dec 09 12:42:18 crc kubenswrapper[4970]: I1209 12:42:18.281772 4970 scope.go:117] "RemoveContainer" containerID="5d4fe3c7a17c59433505f5c5a3e61cd93c99ade50d28fea0027b113c518014d2" Dec 09 12:42:18 crc kubenswrapper[4970]: E1209 12:42:18.282471 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d4fe3c7a17c59433505f5c5a3e61cd93c99ade50d28fea0027b113c518014d2\": container with ID starting with 5d4fe3c7a17c59433505f5c5a3e61cd93c99ade50d28fea0027b113c518014d2 not found: ID does not exist" containerID="5d4fe3c7a17c59433505f5c5a3e61cd93c99ade50d28fea0027b113c518014d2" Dec 09 12:42:18 crc kubenswrapper[4970]: I1209 12:42:18.282515 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d4fe3c7a17c59433505f5c5a3e61cd93c99ade50d28fea0027b113c518014d2"} err="failed to get container status \"5d4fe3c7a17c59433505f5c5a3e61cd93c99ade50d28fea0027b113c518014d2\": rpc error: code = NotFound desc = could not find container \"5d4fe3c7a17c59433505f5c5a3e61cd93c99ade50d28fea0027b113c518014d2\": container with ID starting with 5d4fe3c7a17c59433505f5c5a3e61cd93c99ade50d28fea0027b113c518014d2 not found: ID does not exist" Dec 09 12:42:18 crc kubenswrapper[4970]: I1209 12:42:18.282546 4970 scope.go:117] "RemoveContainer" containerID="2448c0568f81f72fd436fb2f9ae0567a3b006bd2f02224f9c5890fe02fa4fc50" Dec 09 12:42:18 crc 
kubenswrapper[4970]: E1209 12:42:18.282891 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2448c0568f81f72fd436fb2f9ae0567a3b006bd2f02224f9c5890fe02fa4fc50\": container with ID starting with 2448c0568f81f72fd436fb2f9ae0567a3b006bd2f02224f9c5890fe02fa4fc50 not found: ID does not exist" containerID="2448c0568f81f72fd436fb2f9ae0567a3b006bd2f02224f9c5890fe02fa4fc50" Dec 09 12:42:18 crc kubenswrapper[4970]: I1209 12:42:18.282920 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2448c0568f81f72fd436fb2f9ae0567a3b006bd2f02224f9c5890fe02fa4fc50"} err="failed to get container status \"2448c0568f81f72fd436fb2f9ae0567a3b006bd2f02224f9c5890fe02fa4fc50\": rpc error: code = NotFound desc = could not find container \"2448c0568f81f72fd436fb2f9ae0567a3b006bd2f02224f9c5890fe02fa4fc50\": container with ID starting with 2448c0568f81f72fd436fb2f9ae0567a3b006bd2f02224f9c5890fe02fa4fc50 not found: ID does not exist" Dec 09 12:42:18 crc kubenswrapper[4970]: I1209 12:42:18.282935 4970 scope.go:117] "RemoveContainer" containerID="fe93b2bc2f3b746042d3ade1a716c3ea5f4b27eabf00b06e7649e75fb78c6ce3" Dec 09 12:42:18 crc kubenswrapper[4970]: E1209 12:42:18.283221 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe93b2bc2f3b746042d3ade1a716c3ea5f4b27eabf00b06e7649e75fb78c6ce3\": container with ID starting with fe93b2bc2f3b746042d3ade1a716c3ea5f4b27eabf00b06e7649e75fb78c6ce3 not found: ID does not exist" containerID="fe93b2bc2f3b746042d3ade1a716c3ea5f4b27eabf00b06e7649e75fb78c6ce3" Dec 09 12:42:18 crc kubenswrapper[4970]: I1209 12:42:18.283278 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe93b2bc2f3b746042d3ade1a716c3ea5f4b27eabf00b06e7649e75fb78c6ce3"} err="failed to get container status \"fe93b2bc2f3b746042d3ade1a716c3ea5f4b27eabf00b06e7649e75fb78c6ce3\": rpc error: code = NotFound desc = could not find container \"fe93b2bc2f3b746042d3ade1a716c3ea5f4b27eabf00b06e7649e75fb78c6ce3\": container with ID starting with fe93b2bc2f3b746042d3ade1a716c3ea5f4b27eabf00b06e7649e75fb78c6ce3 not found: ID does not exist" Dec 09 12:42:19 crc kubenswrapper[4970]: I1209 12:42:19.883052 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76534e73-ef3c-4b51-8136-768fdc2e4275" path="/var/lib/kubelet/pods/76534e73-ef3c-4b51-8136-768fdc2e4275/volumes" Dec 09 12:42:19 crc kubenswrapper[4970]: I1209 12:42:19.885131 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa9529c4-3904-4691-a0f8-9b9f988499c8" path="/var/lib/kubelet/pods/fa9529c4-3904-4691-a0f8-9b9f988499c8/volumes" Dec 09 12:42:22 crc kubenswrapper[4970]: E1209 12:42:22.816049 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:42:23 crc kubenswrapper[4970]: E1209 12:42:23.814369 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" 
podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:42:36 crc kubenswrapper[4970]: E1209 12:42:36.814735 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:42:37 crc kubenswrapper[4970]: E1209 12:42:37.822629 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:42:40 crc kubenswrapper[4970]: I1209 12:42:40.925842 4970 scope.go:117] "RemoveContainer" containerID="05eb626e45b3b8c8b61958e26d5c4aa8693cf6bb7833af2a2e7aa2fb9fc72b29" Dec 09 12:42:46 crc kubenswrapper[4970]: I1209 12:42:46.010722 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:42:46 crc kubenswrapper[4970]: I1209 12:42:46.011463 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:42:49 crc kubenswrapper[4970]: E1209 12:42:49.815882 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:42:49 crc kubenswrapper[4970]: E1209 12:42:49.815974 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:43:00 crc kubenswrapper[4970]: E1209 12:43:00.815542 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:43:02 crc kubenswrapper[4970]: E1209 12:43:02.815344 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:43:12 crc kubenswrapper[4970]: I1209 12:43:12.816169 4970 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 
09 12:43:12 crc kubenswrapper[4970]: E1209 12:43:12.924297 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 12:43:12 crc kubenswrapper[4970]: E1209 12:43:12.924640 4970 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 12:43:12 crc kubenswrapper[4970]: E1209 12:43:12.924772 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w5gl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-snjvx_openstack(cda5b8c0-51ad-4a63-bf1d-e3546f098ad3): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or 
has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 12:43:12 crc kubenswrapper[4970]: E1209 12:43:12.926542 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:43:16 crc kubenswrapper[4970]: I1209 12:43:16.011477 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:43:16 crc kubenswrapper[4970]: I1209 12:43:16.011821 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:43:17 crc kubenswrapper[4970]: E1209 12:43:17.824372 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:43:26 crc kubenswrapper[4970]: E1209 12:43:26.816564 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:43:31 crc kubenswrapper[4970]: I1209 12:43:31.101748 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g2p6k"] Dec 09 12:43:31 crc kubenswrapper[4970]: E1209 12:43:31.102851 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa9529c4-3904-4691-a0f8-9b9f988499c8" containerName="extract-utilities" Dec 09 12:43:31 crc kubenswrapper[4970]: I1209 12:43:31.102871 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa9529c4-3904-4691-a0f8-9b9f988499c8" containerName="extract-utilities" Dec 09 12:43:31 crc kubenswrapper[4970]: E1209 12:43:31.102890 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76534e73-ef3c-4b51-8136-768fdc2e4275" containerName="extract-content" Dec 09 12:43:31 crc kubenswrapper[4970]: I1209 12:43:31.102898 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="76534e73-ef3c-4b51-8136-768fdc2e4275" containerName="extract-content" Dec 09 12:43:31 crc kubenswrapper[4970]: E1209 12:43:31.102909 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76534e73-ef3c-4b51-8136-768fdc2e4275" containerName="registry-server" Dec 09 12:43:31 crc kubenswrapper[4970]: I1209 12:43:31.102915 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="76534e73-ef3c-4b51-8136-768fdc2e4275" 
containerName="registry-server" Dec 09 12:43:31 crc kubenswrapper[4970]: E1209 12:43:31.102934 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa9529c4-3904-4691-a0f8-9b9f988499c8" containerName="extract-content" Dec 09 12:43:31 crc kubenswrapper[4970]: I1209 12:43:31.102939 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa9529c4-3904-4691-a0f8-9b9f988499c8" containerName="extract-content" Dec 09 12:43:31 crc kubenswrapper[4970]: E1209 12:43:31.102957 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa9529c4-3904-4691-a0f8-9b9f988499c8" containerName="registry-server" Dec 09 12:43:31 crc kubenswrapper[4970]: I1209 12:43:31.102962 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa9529c4-3904-4691-a0f8-9b9f988499c8" containerName="registry-server" Dec 09 12:43:31 crc kubenswrapper[4970]: E1209 12:43:31.102991 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76534e73-ef3c-4b51-8136-768fdc2e4275" containerName="extract-utilities" Dec 09 12:43:31 crc kubenswrapper[4970]: I1209 12:43:31.102997 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="76534e73-ef3c-4b51-8136-768fdc2e4275" containerName="extract-utilities" Dec 09 12:43:31 crc kubenswrapper[4970]: I1209 12:43:31.103188 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="76534e73-ef3c-4b51-8136-768fdc2e4275" containerName="registry-server" Dec 09 12:43:31 crc kubenswrapper[4970]: I1209 12:43:31.103207 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa9529c4-3904-4691-a0f8-9b9f988499c8" containerName="registry-server" Dec 09 12:43:31 crc kubenswrapper[4970]: I1209 12:43:31.105148 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g2p6k" Dec 09 12:43:31 crc kubenswrapper[4970]: I1209 12:43:31.121702 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g2p6k"] Dec 09 12:43:31 crc kubenswrapper[4970]: I1209 12:43:31.196700 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjlxc\" (UniqueName: \"kubernetes.io/projected/512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4-kube-api-access-sjlxc\") pod \"community-operators-g2p6k\" (UID: \"512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4\") " pod="openshift-marketplace/community-operators-g2p6k" Dec 09 12:43:31 crc kubenswrapper[4970]: I1209 12:43:31.196790 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4-utilities\") pod \"community-operators-g2p6k\" (UID: \"512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4\") " pod="openshift-marketplace/community-operators-g2p6k" Dec 09 12:43:31 crc kubenswrapper[4970]: I1209 12:43:31.196947 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4-catalog-content\") pod \"community-operators-g2p6k\" (UID: \"512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4\") " pod="openshift-marketplace/community-operators-g2p6k" Dec 09 12:43:31 crc kubenswrapper[4970]: I1209 12:43:31.299309 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4-utilities\") pod \"community-operators-g2p6k\" (UID: 
\"512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4\") " pod="openshift-marketplace/community-operators-g2p6k" Dec 09 12:43:31 crc kubenswrapper[4970]: I1209 12:43:31.299617 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4-catalog-content\") pod \"community-operators-g2p6k\" (UID: \"512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4\") " pod="openshift-marketplace/community-operators-g2p6k" Dec 09 12:43:31 crc kubenswrapper[4970]: I1209 12:43:31.299746 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjlxc\" (UniqueName: \"kubernetes.io/projected/512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4-kube-api-access-sjlxc\") pod \"community-operators-g2p6k\" (UID: \"512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4\") " pod="openshift-marketplace/community-operators-g2p6k" Dec 09 12:43:31 crc kubenswrapper[4970]: I1209 12:43:31.299891 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4-utilities\") pod \"community-operators-g2p6k\" (UID: \"512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4\") " pod="openshift-marketplace/community-operators-g2p6k" Dec 09 12:43:31 crc kubenswrapper[4970]: I1209 12:43:31.300173 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4-catalog-content\") pod \"community-operators-g2p6k\" (UID: \"512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4\") " pod="openshift-marketplace/community-operators-g2p6k" Dec 09 12:43:31 crc kubenswrapper[4970]: I1209 12:43:31.322184 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjlxc\" (UniqueName: \"kubernetes.io/projected/512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4-kube-api-access-sjlxc\") pod \"community-operators-g2p6k\" (UID: \"512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4\") " pod="openshift-marketplace/community-operators-g2p6k" Dec 09 12:43:31 crc kubenswrapper[4970]: I1209 12:43:31.435011 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g2p6k" Dec 09 12:43:31 crc kubenswrapper[4970]: E1209 12:43:31.938090 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 12:43:31 crc kubenswrapper[4970]: E1209 12:43:31.938465 4970 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 12:43:31 crc kubenswrapper[4970]: E1209 12:43:31.938626 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n679h5b7h5f8h695h656h57bh5c7h7bh5c5hc4h5hbch654h5h5b7h54dh5fh688h548h57bh6fh54fh5d4hf7h5ffh54ch566h565hdchb4h65dh5fbq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqqs9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ea52f6b9-599e-4ac5-94c6-79949c705be8): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 12:43:31 crc kubenswrapper[4970]: E1209 12:43:31.940383 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:43:32 crc kubenswrapper[4970]: W1209 12:43:32.074636 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod512dc2b8_7fc9_4c76_8ad9_67d0b43be5b4.slice/crio-c669f232c37c983250de5470ff9904eb1c430faf50af0f6200dc8d01e7fee84e WatchSource:0}: Error finding container c669f232c37c983250de5470ff9904eb1c430faf50af0f6200dc8d01e7fee84e: Status 404 returned error can't find the container with id c669f232c37c983250de5470ff9904eb1c430faf50af0f6200dc8d01e7fee84e Dec 09 12:43:32 crc kubenswrapper[4970]: I1209 12:43:32.078226 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g2p6k"] Dec 09 12:43:32 crc kubenswrapper[4970]: I1209 12:43:32.832800 4970 generic.go:334] "Generic (PLEG): container finished" podID="512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4" containerID="85e7a8802a8e7a681216ab4f4677f00d986de1f9e65959d85ad8d5e61a630124" exitCode=0 Dec 09 12:43:32 crc kubenswrapper[4970]: I1209 12:43:32.833003 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g2p6k" event={"ID":"512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4","Type":"ContainerDied","Data":"85e7a8802a8e7a681216ab4f4677f00d986de1f9e65959d85ad8d5e61a630124"} Dec 09 12:43:32 crc kubenswrapper[4970]: I1209 12:43:32.833159 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g2p6k" event={"ID":"512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4","Type":"ContainerStarted","Data":"c669f232c37c983250de5470ff9904eb1c430faf50af0f6200dc8d01e7fee84e"} Dec 09 12:43:33 crc kubenswrapper[4970]: I1209 12:43:33.847654 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g2p6k" event={"ID":"512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4","Type":"ContainerStarted","Data":"5b5bc27e997c1dc47305469dc328db7d9fa998bc415a150e3adccde5123f72fd"} Dec 09 12:43:34 crc kubenswrapper[4970]: I1209 12:43:34.868828 4970 generic.go:334] "Generic (PLEG): container finished" podID="512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4" containerID="5b5bc27e997c1dc47305469dc328db7d9fa998bc415a150e3adccde5123f72fd" exitCode=0 Dec 09 12:43:34 crc kubenswrapper[4970]: I1209 12:43:34.868922 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g2p6k" event={"ID":"512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4","Type":"ContainerDied","Data":"5b5bc27e997c1dc47305469dc328db7d9fa998bc415a150e3adccde5123f72fd"} Dec 09 12:43:35 crc kubenswrapper[4970]: I1209 12:43:35.881045 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g2p6k" event={"ID":"512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4","Type":"ContainerStarted","Data":"14aafc3348955899511448a9d9d58a15c90aaac2ed929e8702b66a0113d9baf5"} Dec 09 12:43:35 crc kubenswrapper[4970]: I1209 12:43:35.903595 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g2p6k" podStartSLOduration=2.489355085 podStartE2EDuration="4.903575864s" podCreationTimestamp="2025-12-09 12:43:31 +0000 UTC" firstStartedPulling="2025-12-09 12:43:32.835579179 +0000 UTC m=+2225.396060230" lastFinishedPulling="2025-12-09 12:43:35.249799918 +0000 UTC m=+2227.810281009" observedRunningTime="2025-12-09 12:43:35.898201011 +0000 UTC m=+2228.458682062" watchObservedRunningTime="2025-12-09 12:43:35.903575864 
+0000 UTC m=+2228.464056915" Dec 09 12:43:39 crc kubenswrapper[4970]: E1209 12:43:39.748887 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:43:41 crc kubenswrapper[4970]: I1209 12:43:41.435217 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g2p6k" Dec 09 12:43:41 crc kubenswrapper[4970]: I1209 12:43:41.436570 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g2p6k" Dec 09 12:43:41 crc kubenswrapper[4970]: I1209 12:43:41.500108 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g2p6k" Dec 09 12:43:41 crc kubenswrapper[4970]: I1209 12:43:41.863666 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g2p6k" Dec 09 12:43:41 crc kubenswrapper[4970]: I1209 12:43:41.917020 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g2p6k"] Dec 09 12:43:43 crc kubenswrapper[4970]: I1209 12:43:43.822697 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-g2p6k" podUID="512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4" containerName="registry-server" containerID="cri-o://14aafc3348955899511448a9d9d58a15c90aaac2ed929e8702b66a0113d9baf5" gracePeriod=2 Dec 09 12:43:44 crc kubenswrapper[4970]: I1209 12:43:44.300269 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g2p6k" Dec 09 12:43:44 crc kubenswrapper[4970]: I1209 12:43:44.410786 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjlxc\" (UniqueName: \"kubernetes.io/projected/512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4-kube-api-access-sjlxc\") pod \"512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4\" (UID: \"512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4\") " Dec 09 12:43:44 crc kubenswrapper[4970]: I1209 12:43:44.411421 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4-catalog-content\") pod \"512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4\" (UID: \"512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4\") " Dec 09 12:43:44 crc kubenswrapper[4970]: I1209 12:43:44.411706 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4-utilities\") pod \"512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4\" (UID: \"512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4\") " Dec 09 12:43:44 crc kubenswrapper[4970]: I1209 12:43:44.412852 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4-utilities" (OuterVolumeSpecName: "utilities") pod "512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4" (UID: "512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:43:44 crc kubenswrapper[4970]: I1209 12:43:44.420833 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4-kube-api-access-sjlxc" (OuterVolumeSpecName: "kube-api-access-sjlxc") pod "512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4" (UID: "512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4"). InnerVolumeSpecName "kube-api-access-sjlxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:43:44 crc kubenswrapper[4970]: I1209 12:43:44.465019 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4" (UID: "512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:43:44 crc kubenswrapper[4970]: I1209 12:43:44.514890 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjlxc\" (UniqueName: \"kubernetes.io/projected/512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4-kube-api-access-sjlxc\") on node \"crc\" DevicePath \"\"" Dec 09 12:43:44 crc kubenswrapper[4970]: I1209 12:43:44.514931 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:43:44 crc kubenswrapper[4970]: I1209 12:43:44.514943 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:43:44 crc kubenswrapper[4970]: I1209 12:43:44.836097 4970 generic.go:334] "Generic (PLEG): container finished" podID="512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4" containerID="14aafc3348955899511448a9d9d58a15c90aaac2ed929e8702b66a0113d9baf5" exitCode=0 Dec 09 12:43:44 crc kubenswrapper[4970]: I1209 12:43:44.836150 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g2p6k" event={"ID":"512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4","Type":"ContainerDied","Data":"14aafc3348955899511448a9d9d58a15c90aaac2ed929e8702b66a0113d9baf5"} Dec 09 12:43:44 crc kubenswrapper[4970]: I1209 12:43:44.836181 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g2p6k" event={"ID":"512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4","Type":"ContainerDied","Data":"c669f232c37c983250de5470ff9904eb1c430faf50af0f6200dc8d01e7fee84e"} Dec 09 12:43:44 crc kubenswrapper[4970]: I1209 12:43:44.836201 4970 scope.go:117] "RemoveContainer" containerID="14aafc3348955899511448a9d9d58a15c90aaac2ed929e8702b66a0113d9baf5" Dec 09 12:43:44 crc kubenswrapper[4970]: I1209 12:43:44.836365 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g2p6k" Dec 09 12:43:44 crc kubenswrapper[4970]: I1209 12:43:44.872467 4970 scope.go:117] "RemoveContainer" containerID="5b5bc27e997c1dc47305469dc328db7d9fa998bc415a150e3adccde5123f72fd" Dec 09 12:43:44 crc kubenswrapper[4970]: I1209 12:43:44.874747 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g2p6k"] Dec 09 12:43:44 crc kubenswrapper[4970]: I1209 12:43:44.891577 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-g2p6k"] Dec 09 12:43:44 crc kubenswrapper[4970]: I1209 12:43:44.901105 4970 scope.go:117] "RemoveContainer" containerID="85e7a8802a8e7a681216ab4f4677f00d986de1f9e65959d85ad8d5e61a630124" Dec 09 12:43:44 crc kubenswrapper[4970]: I1209 12:43:44.960487 4970 scope.go:117] "RemoveContainer" containerID="14aafc3348955899511448a9d9d58a15c90aaac2ed929e8702b66a0113d9baf5" Dec 09 12:43:44 crc kubenswrapper[4970]: E1209 12:43:44.960979 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14aafc3348955899511448a9d9d58a15c90aaac2ed929e8702b66a0113d9baf5\": container with ID starting with 14aafc3348955899511448a9d9d58a15c90aaac2ed929e8702b66a0113d9baf5 not found: ID does not exist" containerID="14aafc3348955899511448a9d9d58a15c90aaac2ed929e8702b66a0113d9baf5" Dec 09 12:43:44 crc kubenswrapper[4970]: I1209 12:43:44.961057 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14aafc3348955899511448a9d9d58a15c90aaac2ed929e8702b66a0113d9baf5"} err="failed to get container status \"14aafc3348955899511448a9d9d58a15c90aaac2ed929e8702b66a0113d9baf5\": rpc error: code = NotFound desc = could not find container \"14aafc3348955899511448a9d9d58a15c90aaac2ed929e8702b66a0113d9baf5\": container with ID starting with 14aafc3348955899511448a9d9d58a15c90aaac2ed929e8702b66a0113d9baf5 not found: ID does not exist" Dec 09 12:43:44 crc kubenswrapper[4970]: I1209 12:43:44.961089 4970 scope.go:117] "RemoveContainer" containerID="5b5bc27e997c1dc47305469dc328db7d9fa998bc415a150e3adccde5123f72fd" Dec 09 12:43:44 crc kubenswrapper[4970]: E1209 12:43:44.961697 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b5bc27e997c1dc47305469dc328db7d9fa998bc415a150e3adccde5123f72fd\": container with ID starting with 5b5bc27e997c1dc47305469dc328db7d9fa998bc415a150e3adccde5123f72fd not found: ID does not exist" containerID="5b5bc27e997c1dc47305469dc328db7d9fa998bc415a150e3adccde5123f72fd" Dec 09 12:43:44 crc kubenswrapper[4970]: I1209 12:43:44.961718 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b5bc27e997c1dc47305469dc328db7d9fa998bc415a150e3adccde5123f72fd"} err="failed to get container status \"5b5bc27e997c1dc47305469dc328db7d9fa998bc415a150e3adccde5123f72fd\": rpc error: code = NotFound desc = could not find container \"5b5bc27e997c1dc47305469dc328db7d9fa998bc415a150e3adccde5123f72fd\": container with ID starting with 5b5bc27e997c1dc47305469dc328db7d9fa998bc415a150e3adccde5123f72fd not found: ID does not exist" Dec 09 12:43:44 crc kubenswrapper[4970]: I1209 12:43:44.961732 4970 scope.go:117] "RemoveContainer" containerID="85e7a8802a8e7a681216ab4f4677f00d986de1f9e65959d85ad8d5e61a630124" Dec 09 12:43:44 crc kubenswrapper[4970]: E1209 12:43:44.962030 4970 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"85e7a8802a8e7a681216ab4f4677f00d986de1f9e65959d85ad8d5e61a630124\": container with ID starting with 85e7a8802a8e7a681216ab4f4677f00d986de1f9e65959d85ad8d5e61a630124 not found: ID does not exist" containerID="85e7a8802a8e7a681216ab4f4677f00d986de1f9e65959d85ad8d5e61a630124" Dec 09 12:43:44 crc kubenswrapper[4970]: I1209 12:43:44.962047 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85e7a8802a8e7a681216ab4f4677f00d986de1f9e65959d85ad8d5e61a630124"} err="failed to get container status \"85e7a8802a8e7a681216ab4f4677f00d986de1f9e65959d85ad8d5e61a630124\": rpc error: code = NotFound desc = could not find container \"85e7a8802a8e7a681216ab4f4677f00d986de1f9e65959d85ad8d5e61a630124\": container with ID starting with 85e7a8802a8e7a681216ab4f4677f00d986de1f9e65959d85ad8d5e61a630124 not found: ID does not exist" Dec 09 12:43:45 crc kubenswrapper[4970]: I1209 12:43:45.823955 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4" path="/var/lib/kubelet/pods/512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4/volumes" Dec 09 12:43:46 crc kubenswrapper[4970]: I1209 12:43:46.011578 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:43:46 crc kubenswrapper[4970]: I1209 12:43:46.012940 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:43:46 crc kubenswrapper[4970]: I1209 12:43:46.013184 4970 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" Dec 09 12:43:46 crc kubenswrapper[4970]: I1209 12:43:46.014652 4970 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"be63a7b3e0b75073001cb5f12f2434f519a3eef9134f49397aff207cf8a73d70"} pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 12:43:46 crc kubenswrapper[4970]: I1209 12:43:46.014988 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" containerID="cri-o://be63a7b3e0b75073001cb5f12f2434f519a3eef9134f49397aff207cf8a73d70" gracePeriod=600 Dec 09 12:43:46 crc kubenswrapper[4970]: E1209 12:43:46.131930 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:43:46 crc kubenswrapper[4970]: E1209 12:43:46.815729 4970 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:43:46 crc kubenswrapper[4970]: I1209 12:43:46.859990 4970 generic.go:334] "Generic (PLEG): container finished" podID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerID="be63a7b3e0b75073001cb5f12f2434f519a3eef9134f49397aff207cf8a73d70" exitCode=0 Dec 09 12:43:46 crc kubenswrapper[4970]: I1209 12:43:46.860034 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerDied","Data":"be63a7b3e0b75073001cb5f12f2434f519a3eef9134f49397aff207cf8a73d70"} Dec 09 12:43:46 crc kubenswrapper[4970]: I1209 12:43:46.860066 4970 scope.go:117] "RemoveContainer" containerID="ad7e46f490e04acfe3a1302091136ac653e4911c58628ac07f1e8fe09e09b651" Dec 09 12:43:46 crc kubenswrapper[4970]: I1209 12:43:46.860888 4970 scope.go:117] "RemoveContainer" containerID="be63a7b3e0b75073001cb5f12f2434f519a3eef9134f49397aff207cf8a73d70" Dec 09 12:43:46 crc kubenswrapper[4970]: E1209 12:43:46.861189 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:43:54 crc kubenswrapper[4970]: E1209 12:43:54.815271 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:43:59 crc kubenswrapper[4970]: E1209 12:43:59.815822 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:44:01 crc kubenswrapper[4970]: I1209 12:44:01.813027 4970 scope.go:117] "RemoveContainer" containerID="be63a7b3e0b75073001cb5f12f2434f519a3eef9134f49397aff207cf8a73d70" Dec 09 12:44:01 crc kubenswrapper[4970]: E1209 12:44:01.813652 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:44:07 crc kubenswrapper[4970]: E1209 12:44:07.825946 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" 
pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:44:14 crc kubenswrapper[4970]: I1209 12:44:14.813942 4970 scope.go:117] "RemoveContainer" containerID="be63a7b3e0b75073001cb5f12f2434f519a3eef9134f49397aff207cf8a73d70" Dec 09 12:44:14 crc kubenswrapper[4970]: E1209 12:44:14.814846 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:44:14 crc kubenswrapper[4970]: E1209 12:44:14.815615 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:44:19 crc kubenswrapper[4970]: I1209 12:44:19.242135 4970 generic.go:334] "Generic (PLEG): container finished" podID="b45bf430-1223-44e1-b791-212935f09b2a" containerID="364a999db7ce156ba81da0e43796f1933c4e7f65554562227d6499d02ce1102e" exitCode=2 Dec 09 12:44:19 crc kubenswrapper[4970]: I1209 12:44:19.242327 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rs6hg" event={"ID":"b45bf430-1223-44e1-b791-212935f09b2a","Type":"ContainerDied","Data":"364a999db7ce156ba81da0e43796f1933c4e7f65554562227d6499d02ce1102e"} Dec 09 12:44:20 crc kubenswrapper[4970]: I1209 12:44:20.794643 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rs6hg" Dec 09 12:44:20 crc kubenswrapper[4970]: E1209 12:44:20.814963 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:44:20 crc kubenswrapper[4970]: I1209 12:44:20.968875 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b45bf430-1223-44e1-b791-212935f09b2a-inventory\") pod \"b45bf430-1223-44e1-b791-212935f09b2a\" (UID: \"b45bf430-1223-44e1-b791-212935f09b2a\") " Dec 09 12:44:20 crc kubenswrapper[4970]: I1209 12:44:20.969192 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b45bf430-1223-44e1-b791-212935f09b2a-ssh-key\") pod \"b45bf430-1223-44e1-b791-212935f09b2a\" (UID: \"b45bf430-1223-44e1-b791-212935f09b2a\") " Dec 09 12:44:20 crc kubenswrapper[4970]: I1209 12:44:20.969275 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbrss\" (UniqueName: \"kubernetes.io/projected/b45bf430-1223-44e1-b791-212935f09b2a-kube-api-access-mbrss\") pod \"b45bf430-1223-44e1-b791-212935f09b2a\" (UID: \"b45bf430-1223-44e1-b791-212935f09b2a\") " Dec 09 12:44:20 crc kubenswrapper[4970]: I1209 12:44:20.975494 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b45bf430-1223-44e1-b791-212935f09b2a-kube-api-access-mbrss" (OuterVolumeSpecName: "kube-api-access-mbrss") pod "b45bf430-1223-44e1-b791-212935f09b2a" (UID: "b45bf430-1223-44e1-b791-212935f09b2a"). InnerVolumeSpecName "kube-api-access-mbrss". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:44:21 crc kubenswrapper[4970]: I1209 12:44:21.003390 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b45bf430-1223-44e1-b791-212935f09b2a-inventory" (OuterVolumeSpecName: "inventory") pod "b45bf430-1223-44e1-b791-212935f09b2a" (UID: "b45bf430-1223-44e1-b791-212935f09b2a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:44:21 crc kubenswrapper[4970]: I1209 12:44:21.005652 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b45bf430-1223-44e1-b791-212935f09b2a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b45bf430-1223-44e1-b791-212935f09b2a" (UID: "b45bf430-1223-44e1-b791-212935f09b2a"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:44:21 crc kubenswrapper[4970]: I1209 12:44:21.072616 4970 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b45bf430-1223-44e1-b791-212935f09b2a-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 12:44:21 crc kubenswrapper[4970]: I1209 12:44:21.072651 4970 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b45bf430-1223-44e1-b791-212935f09b2a-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 12:44:21 crc kubenswrapper[4970]: I1209 12:44:21.072661 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbrss\" (UniqueName: \"kubernetes.io/projected/b45bf430-1223-44e1-b791-212935f09b2a-kube-api-access-mbrss\") on node \"crc\" DevicePath \"\"" Dec 09 12:44:21 crc kubenswrapper[4970]: I1209 12:44:21.264719 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rs6hg" event={"ID":"b45bf430-1223-44e1-b791-212935f09b2a","Type":"ContainerDied","Data":"bec57bd1e018fce4194645b96cb972458569225edf7658c2de0b2b6ad84a0184"} Dec 09 12:44:21 crc kubenswrapper[4970]: I1209 12:44:21.264763 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bec57bd1e018fce4194645b96cb972458569225edf7658c2de0b2b6ad84a0184" Dec 09 12:44:21 crc kubenswrapper[4970]: I1209 12:44:21.264818 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-rs6hg" Dec 09 12:44:25 crc kubenswrapper[4970]: E1209 12:44:25.816682 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:44:27 crc kubenswrapper[4970]: I1209 12:44:27.820558 4970 scope.go:117] "RemoveContainer" containerID="be63a7b3e0b75073001cb5f12f2434f519a3eef9134f49397aff207cf8a73d70" Dec 09 12:44:27 crc kubenswrapper[4970]: E1209 12:44:27.821154 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:44:28 crc kubenswrapper[4970]: I1209 12:44:28.032909 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vvp2x"] Dec 09 12:44:28 crc kubenswrapper[4970]: E1209 12:44:28.033609 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4" containerName="extract-content" Dec 09 12:44:28 crc kubenswrapper[4970]: I1209 12:44:28.033627 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4" containerName="extract-content" Dec 09 12:44:28 crc kubenswrapper[4970]: E1209 12:44:28.033642 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4" containerName="registry-server" Dec 09 12:44:28 crc kubenswrapper[4970]: I1209 12:44:28.033649 
4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4" containerName="registry-server" Dec 09 12:44:28 crc kubenswrapper[4970]: E1209 12:44:28.033662 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45bf430-1223-44e1-b791-212935f09b2a" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 12:44:28 crc kubenswrapper[4970]: I1209 12:44:28.033669 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45bf430-1223-44e1-b791-212935f09b2a" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 12:44:28 crc kubenswrapper[4970]: E1209 12:44:28.033702 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4" containerName="extract-utilities" Dec 09 12:44:28 crc kubenswrapper[4970]: I1209 12:44:28.033707 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4" containerName="extract-utilities" Dec 09 12:44:28 crc kubenswrapper[4970]: I1209 12:44:28.033919 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="512dc2b8-7fc9-4c76-8ad9-67d0b43be5b4" containerName="registry-server" Dec 09 12:44:28 crc kubenswrapper[4970]: I1209 12:44:28.033937 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45bf430-1223-44e1-b791-212935f09b2a" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 12:44:28 crc kubenswrapper[4970]: I1209 12:44:28.034846 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vvp2x" Dec 09 12:44:28 crc kubenswrapper[4970]: I1209 12:44:28.038571 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 12:44:28 crc kubenswrapper[4970]: I1209 12:44:28.039103 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 12:44:28 crc kubenswrapper[4970]: I1209 12:44:28.039395 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 12:44:28 crc kubenswrapper[4970]: I1209 12:44:28.039580 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2x2z5" Dec 09 12:44:28 crc kubenswrapper[4970]: I1209 12:44:28.049913 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vvp2x"] Dec 09 12:44:28 crc kubenswrapper[4970]: I1209 12:44:28.143434 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/875c14b9-3ae4-43ed-b83f-78088737e656-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vvp2x\" (UID: \"875c14b9-3ae4-43ed-b83f-78088737e656\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vvp2x" Dec 09 12:44:28 crc kubenswrapper[4970]: I1209 12:44:28.143544 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j6gg\" (UniqueName: \"kubernetes.io/projected/875c14b9-3ae4-43ed-b83f-78088737e656-kube-api-access-2j6gg\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vvp2x\" (UID: \"875c14b9-3ae4-43ed-b83f-78088737e656\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vvp2x" Dec 09 12:44:28 crc kubenswrapper[4970]: I1209 12:44:28.143768 
4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/875c14b9-3ae4-43ed-b83f-78088737e656-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vvp2x\" (UID: \"875c14b9-3ae4-43ed-b83f-78088737e656\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vvp2x" Dec 09 12:44:28 crc kubenswrapper[4970]: I1209 12:44:28.245431 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/875c14b9-3ae4-43ed-b83f-78088737e656-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vvp2x\" (UID: \"875c14b9-3ae4-43ed-b83f-78088737e656\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vvp2x" Dec 09 12:44:28 crc kubenswrapper[4970]: I1209 12:44:28.245548 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/875c14b9-3ae4-43ed-b83f-78088737e656-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vvp2x\" (UID: \"875c14b9-3ae4-43ed-b83f-78088737e656\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vvp2x" Dec 09 12:44:28 crc kubenswrapper[4970]: I1209 12:44:28.245575 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j6gg\" (UniqueName: \"kubernetes.io/projected/875c14b9-3ae4-43ed-b83f-78088737e656-kube-api-access-2j6gg\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vvp2x\" (UID: \"875c14b9-3ae4-43ed-b83f-78088737e656\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vvp2x" Dec 09 12:44:28 crc kubenswrapper[4970]: I1209 12:44:28.255848 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/875c14b9-3ae4-43ed-b83f-78088737e656-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vvp2x\" (UID: \"875c14b9-3ae4-43ed-b83f-78088737e656\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vvp2x" Dec 09 12:44:28 crc kubenswrapper[4970]: I1209 12:44:28.263214 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/875c14b9-3ae4-43ed-b83f-78088737e656-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vvp2x\" (UID: \"875c14b9-3ae4-43ed-b83f-78088737e656\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vvp2x" Dec 09 12:44:28 crc kubenswrapper[4970]: I1209 12:44:28.266927 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j6gg\" (UniqueName: \"kubernetes.io/projected/875c14b9-3ae4-43ed-b83f-78088737e656-kube-api-access-2j6gg\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-vvp2x\" (UID: \"875c14b9-3ae4-43ed-b83f-78088737e656\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vvp2x" Dec 09 12:44:28 crc kubenswrapper[4970]: I1209 12:44:28.360632 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vvp2x" Dec 09 12:44:28 crc kubenswrapper[4970]: I1209 12:44:28.902623 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vvp2x"] Dec 09 12:44:29 crc kubenswrapper[4970]: I1209 12:44:29.361801 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vvp2x" event={"ID":"875c14b9-3ae4-43ed-b83f-78088737e656","Type":"ContainerStarted","Data":"820ce717472458771389d23a1ccd1013a623716072fac8106c63f2c6860aa9c6"} Dec 09 12:44:30 crc kubenswrapper[4970]: I1209 12:44:30.373623 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vvp2x" event={"ID":"875c14b9-3ae4-43ed-b83f-78088737e656","Type":"ContainerStarted","Data":"930b056c745e2c441a4f487e138a394bf8167eac02c41d0b6805133ccda6cea2"} Dec 09 12:44:30 crc kubenswrapper[4970]: I1209 12:44:30.390794 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vvp2x" podStartSLOduration=1.816343464 podStartE2EDuration="2.390746933s" podCreationTimestamp="2025-12-09 12:44:28 +0000 UTC" firstStartedPulling="2025-12-09 12:44:28.902633404 +0000 UTC m=+2281.463114455" lastFinishedPulling="2025-12-09 12:44:29.477036873 +0000 UTC m=+2282.037517924" observedRunningTime="2025-12-09 12:44:30.389976542 +0000 UTC m=+2282.950457593" watchObservedRunningTime="2025-12-09 12:44:30.390746933 +0000 UTC m=+2282.951227994" Dec 09 12:44:34 crc kubenswrapper[4970]: E1209 12:44:34.814547 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:44:36 crc kubenswrapper[4970]: E1209 12:44:36.815143 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:44:40 crc kubenswrapper[4970]: I1209 12:44:40.813833 4970 scope.go:117] "RemoveContainer" containerID="be63a7b3e0b75073001cb5f12f2434f519a3eef9134f49397aff207cf8a73d70" Dec 09 12:44:40 crc kubenswrapper[4970]: E1209 12:44:40.814721 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:44:45 crc kubenswrapper[4970]: E1209 12:44:45.814997 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:44:48 crc kubenswrapper[4970]: E1209 
12:44:48.815102 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:44:55 crc kubenswrapper[4970]: I1209 12:44:55.813483 4970 scope.go:117] "RemoveContainer" containerID="be63a7b3e0b75073001cb5f12f2434f519a3eef9134f49397aff207cf8a73d70" Dec 09 12:44:55 crc kubenswrapper[4970]: E1209 12:44:55.814507 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:44:57 crc kubenswrapper[4970]: E1209 12:44:57.822355 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:45:00 crc kubenswrapper[4970]: I1209 12:45:00.150903 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421405-t66z4"] Dec 09 12:45:00 crc kubenswrapper[4970]: I1209 12:45:00.152720 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421405-t66z4" Dec 09 12:45:00 crc kubenswrapper[4970]: I1209 12:45:00.164945 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 09 12:45:00 crc kubenswrapper[4970]: I1209 12:45:00.165308 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 09 12:45:00 crc kubenswrapper[4970]: I1209 12:45:00.168457 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421405-t66z4"] Dec 09 12:45:00 crc kubenswrapper[4970]: I1209 12:45:00.292880 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d22e9d18-c6b7-4083-8870-0aec294dc268-secret-volume\") pod \"collect-profiles-29421405-t66z4\" (UID: \"d22e9d18-c6b7-4083-8870-0aec294dc268\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421405-t66z4" Dec 09 12:45:00 crc kubenswrapper[4970]: I1209 12:45:00.293062 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d22e9d18-c6b7-4083-8870-0aec294dc268-config-volume\") pod \"collect-profiles-29421405-t66z4\" (UID: \"d22e9d18-c6b7-4083-8870-0aec294dc268\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421405-t66z4" Dec 09 12:45:00 crc kubenswrapper[4970]: I1209 12:45:00.293095 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjsmk\" (UniqueName: 
\"kubernetes.io/projected/d22e9d18-c6b7-4083-8870-0aec294dc268-kube-api-access-vjsmk\") pod \"collect-profiles-29421405-t66z4\" (UID: \"d22e9d18-c6b7-4083-8870-0aec294dc268\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421405-t66z4" Dec 09 12:45:00 crc kubenswrapper[4970]: I1209 12:45:00.395042 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjsmk\" (UniqueName: \"kubernetes.io/projected/d22e9d18-c6b7-4083-8870-0aec294dc268-kube-api-access-vjsmk\") pod \"collect-profiles-29421405-t66z4\" (UID: \"d22e9d18-c6b7-4083-8870-0aec294dc268\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421405-t66z4" Dec 09 12:45:00 crc kubenswrapper[4970]: I1209 12:45:00.395184 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d22e9d18-c6b7-4083-8870-0aec294dc268-secret-volume\") pod \"collect-profiles-29421405-t66z4\" (UID: \"d22e9d18-c6b7-4083-8870-0aec294dc268\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421405-t66z4" Dec 09 12:45:00 crc kubenswrapper[4970]: I1209 12:45:00.395318 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d22e9d18-c6b7-4083-8870-0aec294dc268-config-volume\") pod \"collect-profiles-29421405-t66z4\" (UID: \"d22e9d18-c6b7-4083-8870-0aec294dc268\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421405-t66z4" Dec 09 12:45:00 crc kubenswrapper[4970]: I1209 12:45:00.396189 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d22e9d18-c6b7-4083-8870-0aec294dc268-config-volume\") pod \"collect-profiles-29421405-t66z4\" (UID: \"d22e9d18-c6b7-4083-8870-0aec294dc268\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421405-t66z4" Dec 09 12:45:00 crc kubenswrapper[4970]: I1209 12:45:00.403884 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d22e9d18-c6b7-4083-8870-0aec294dc268-secret-volume\") pod \"collect-profiles-29421405-t66z4\" (UID: \"d22e9d18-c6b7-4083-8870-0aec294dc268\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421405-t66z4" Dec 09 12:45:00 crc kubenswrapper[4970]: I1209 12:45:00.417632 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjsmk\" (UniqueName: \"kubernetes.io/projected/d22e9d18-c6b7-4083-8870-0aec294dc268-kube-api-access-vjsmk\") pod \"collect-profiles-29421405-t66z4\" (UID: \"d22e9d18-c6b7-4083-8870-0aec294dc268\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421405-t66z4" Dec 09 12:45:00 crc kubenswrapper[4970]: I1209 12:45:00.479755 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421405-t66z4" Dec 09 12:45:00 crc kubenswrapper[4970]: E1209 12:45:00.814593 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:45:00 crc kubenswrapper[4970]: I1209 12:45:00.967442 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421405-t66z4"] Dec 09 12:45:01 crc kubenswrapper[4970]: I1209 12:45:01.700439 4970 generic.go:334] "Generic (PLEG): container finished" podID="d22e9d18-c6b7-4083-8870-0aec294dc268" containerID="fe28c1cc2b7fc561a6d1b1d2650c6012e27cbc15b8c0cbdc8d1b783bba10ea06" exitCode=0 Dec 09 12:45:01 crc kubenswrapper[4970]: I1209 12:45:01.700591 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421405-t66z4" event={"ID":"d22e9d18-c6b7-4083-8870-0aec294dc268","Type":"ContainerDied","Data":"fe28c1cc2b7fc561a6d1b1d2650c6012e27cbc15b8c0cbdc8d1b783bba10ea06"} Dec 09 12:45:01 crc kubenswrapper[4970]: I1209 12:45:01.700775 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421405-t66z4" event={"ID":"d22e9d18-c6b7-4083-8870-0aec294dc268","Type":"ContainerStarted","Data":"5284f1fcd265322df7083ce3347702550ef070c87661b36dc34ca09d1e812a13"} Dec 09 12:45:03 crc kubenswrapper[4970]: I1209 12:45:03.170940 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421405-t66z4" Dec 09 12:45:03 crc kubenswrapper[4970]: I1209 12:45:03.278987 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d22e9d18-c6b7-4083-8870-0aec294dc268-config-volume\") pod \"d22e9d18-c6b7-4083-8870-0aec294dc268\" (UID: \"d22e9d18-c6b7-4083-8870-0aec294dc268\") " Dec 09 12:45:03 crc kubenswrapper[4970]: I1209 12:45:03.279094 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjsmk\" (UniqueName: \"kubernetes.io/projected/d22e9d18-c6b7-4083-8870-0aec294dc268-kube-api-access-vjsmk\") pod \"d22e9d18-c6b7-4083-8870-0aec294dc268\" (UID: \"d22e9d18-c6b7-4083-8870-0aec294dc268\") " Dec 09 12:45:03 crc kubenswrapper[4970]: I1209 12:45:03.279275 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d22e9d18-c6b7-4083-8870-0aec294dc268-secret-volume\") pod \"d22e9d18-c6b7-4083-8870-0aec294dc268\" (UID: \"d22e9d18-c6b7-4083-8870-0aec294dc268\") " Dec 09 12:45:03 crc kubenswrapper[4970]: I1209 12:45:03.279743 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d22e9d18-c6b7-4083-8870-0aec294dc268-config-volume" (OuterVolumeSpecName: "config-volume") pod "d22e9d18-c6b7-4083-8870-0aec294dc268" (UID: "d22e9d18-c6b7-4083-8870-0aec294dc268"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 12:45:03 crc kubenswrapper[4970]: I1209 12:45:03.280169 4970 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d22e9d18-c6b7-4083-8870-0aec294dc268-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 12:45:03 crc kubenswrapper[4970]: I1209 12:45:03.285216 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d22e9d18-c6b7-4083-8870-0aec294dc268-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d22e9d18-c6b7-4083-8870-0aec294dc268" (UID: "d22e9d18-c6b7-4083-8870-0aec294dc268"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:45:03 crc kubenswrapper[4970]: I1209 12:45:03.285382 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d22e9d18-c6b7-4083-8870-0aec294dc268-kube-api-access-vjsmk" (OuterVolumeSpecName: "kube-api-access-vjsmk") pod "d22e9d18-c6b7-4083-8870-0aec294dc268" (UID: "d22e9d18-c6b7-4083-8870-0aec294dc268"). InnerVolumeSpecName "kube-api-access-vjsmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:45:03 crc kubenswrapper[4970]: I1209 12:45:03.382603 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjsmk\" (UniqueName: \"kubernetes.io/projected/d22e9d18-c6b7-4083-8870-0aec294dc268-kube-api-access-vjsmk\") on node \"crc\" DevicePath \"\"" Dec 09 12:45:03 crc kubenswrapper[4970]: I1209 12:45:03.382635 4970 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d22e9d18-c6b7-4083-8870-0aec294dc268-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 12:45:03 crc kubenswrapper[4970]: I1209 12:45:03.724770 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421405-t66z4" event={"ID":"d22e9d18-c6b7-4083-8870-0aec294dc268","Type":"ContainerDied","Data":"5284f1fcd265322df7083ce3347702550ef070c87661b36dc34ca09d1e812a13"} Dec 09 12:45:03 crc kubenswrapper[4970]: I1209 12:45:03.725135 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5284f1fcd265322df7083ce3347702550ef070c87661b36dc34ca09d1e812a13" Dec 09 12:45:03 crc kubenswrapper[4970]: I1209 12:45:03.725206 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421405-t66z4" Dec 09 12:45:04 crc kubenswrapper[4970]: I1209 12:45:04.251818 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421360-vvgqb"] Dec 09 12:45:04 crc kubenswrapper[4970]: I1209 12:45:04.262617 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421360-vvgqb"] Dec 09 12:45:05 crc kubenswrapper[4970]: I1209 12:45:05.831026 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96aedd83-c6c1-4b08-8d47-43cd63aaae68" path="/var/lib/kubelet/pods/96aedd83-c6c1-4b08-8d47-43cd63aaae68/volumes" Dec 09 12:45:08 crc kubenswrapper[4970]: I1209 12:45:08.813081 4970 scope.go:117] "RemoveContainer" containerID="be63a7b3e0b75073001cb5f12f2434f519a3eef9134f49397aff207cf8a73d70" Dec 09 12:45:08 crc kubenswrapper[4970]: E1209 12:45:08.814044 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:45:08 crc kubenswrapper[4970]: E1209 12:45:08.814773 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:45:11 crc kubenswrapper[4970]: E1209 12:45:11.816380 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:45:21 crc kubenswrapper[4970]: E1209 12:45:21.815666 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:45:23 crc kubenswrapper[4970]: I1209 12:45:23.813384 4970 scope.go:117] "RemoveContainer" containerID="be63a7b3e0b75073001cb5f12f2434f519a3eef9134f49397aff207cf8a73d70" Dec 09 12:45:23 crc kubenswrapper[4970]: E1209 12:45:23.814402 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:45:23 crc kubenswrapper[4970]: E1209 12:45:23.816132 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:45:34 crc kubenswrapper[4970]: I1209 12:45:34.813211 4970 scope.go:117] "RemoveContainer" containerID="be63a7b3e0b75073001cb5f12f2434f519a3eef9134f49397aff207cf8a73d70" Dec 09 12:45:34 crc kubenswrapper[4970]: E1209 12:45:34.813719 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:45:34 crc kubenswrapper[4970]: E1209 12:45:34.814954 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:45:38 crc kubenswrapper[4970]: E1209 12:45:38.817458 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:45:41 crc kubenswrapper[4970]: I1209 12:45:41.125823 4970 scope.go:117] "RemoveContainer" containerID="05f541831293b31c7d001d88165312eb259ea66b064296a1f5986692ed85fcba" Dec 09 12:45:47 crc kubenswrapper[4970]: E1209 12:45:47.825611 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:45:48 crc kubenswrapper[4970]: I1209 12:45:48.812819 4970 scope.go:117] "RemoveContainer" containerID="be63a7b3e0b75073001cb5f12f2434f519a3eef9134f49397aff207cf8a73d70" Dec 09 12:45:48 crc kubenswrapper[4970]: E1209 12:45:48.813732 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:45:53 crc kubenswrapper[4970]: E1209 12:45:53.814633 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:46:00 crc kubenswrapper[4970]: E1209 12:46:00.815915 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:46:01 crc kubenswrapper[4970]: I1209 12:46:01.813142 4970 scope.go:117] "RemoveContainer" containerID="be63a7b3e0b75073001cb5f12f2434f519a3eef9134f49397aff207cf8a73d70" Dec 09 12:46:01 crc kubenswrapper[4970]: E1209 12:46:01.813575 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:46:06 crc kubenswrapper[4970]: E1209 12:46:06.816182 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:46:15 crc kubenswrapper[4970]: E1209 12:46:15.814808 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:46:16 crc kubenswrapper[4970]: I1209 12:46:16.813364 4970 scope.go:117] "RemoveContainer" containerID="be63a7b3e0b75073001cb5f12f2434f519a3eef9134f49397aff207cf8a73d70" Dec 09 12:46:16 crc kubenswrapper[4970]: E1209 12:46:16.814366 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:46:17 crc kubenswrapper[4970]: E1209 12:46:17.823592 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:46:27 crc kubenswrapper[4970]: E1209 12:46:27.823508 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:46:29 crc kubenswrapper[4970]: E1209 12:46:29.815229 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:46:30 
crc kubenswrapper[4970]: I1209 12:46:30.813187 4970 scope.go:117] "RemoveContainer" containerID="be63a7b3e0b75073001cb5f12f2434f519a3eef9134f49397aff207cf8a73d70" Dec 09 12:46:30 crc kubenswrapper[4970]: E1209 12:46:30.813697 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:46:40 crc kubenswrapper[4970]: E1209 12:46:40.123331 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:46:44 crc kubenswrapper[4970]: I1209 12:46:44.813220 4970 scope.go:117] "RemoveContainer" containerID="be63a7b3e0b75073001cb5f12f2434f519a3eef9134f49397aff207cf8a73d70" Dec 09 12:46:44 crc kubenswrapper[4970]: E1209 12:46:44.814293 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:46:44 crc kubenswrapper[4970]: E1209 12:46:44.817155 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:46:54 crc kubenswrapper[4970]: E1209 12:46:54.817006 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:46:55 crc kubenswrapper[4970]: I1209 12:46:55.813677 4970 scope.go:117] "RemoveContainer" containerID="be63a7b3e0b75073001cb5f12f2434f519a3eef9134f49397aff207cf8a73d70" Dec 09 12:46:55 crc kubenswrapper[4970]: E1209 12:46:55.814429 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:46:57 crc kubenswrapper[4970]: E1209 12:46:57.822371 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:47:05 crc kubenswrapper[4970]: E1209 12:47:05.815831 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:47:07 crc kubenswrapper[4970]: I1209 12:47:07.820577 4970 scope.go:117] "RemoveContainer" containerID="be63a7b3e0b75073001cb5f12f2434f519a3eef9134f49397aff207cf8a73d70" Dec 09 12:47:07 crc kubenswrapper[4970]: E1209 12:47:07.821203 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:47:09 crc kubenswrapper[4970]: E1209 12:47:09.815836 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:47:16 crc kubenswrapper[4970]: E1209 12:47:16.814945 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:47:21 crc kubenswrapper[4970]: E1209 12:47:21.816018 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:47:22 crc kubenswrapper[4970]: I1209 12:47:22.813200 4970 scope.go:117] "RemoveContainer" containerID="be63a7b3e0b75073001cb5f12f2434f519a3eef9134f49397aff207cf8a73d70" Dec 09 12:47:22 crc kubenswrapper[4970]: E1209 12:47:22.814224 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:47:27 crc kubenswrapper[4970]: E1209 12:47:27.825215 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:47:34 crc kubenswrapper[4970]: E1209 12:47:34.815412 4970 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:47:35 crc kubenswrapper[4970]: I1209 12:47:35.814061 4970 scope.go:117] "RemoveContainer" containerID="be63a7b3e0b75073001cb5f12f2434f519a3eef9134f49397aff207cf8a73d70" Dec 09 12:47:35 crc kubenswrapper[4970]: E1209 12:47:35.815045 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:47:42 crc kubenswrapper[4970]: E1209 12:47:42.815100 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:47:49 crc kubenswrapper[4970]: I1209 12:47:49.813454 4970 scope.go:117] "RemoveContainer" containerID="be63a7b3e0b75073001cb5f12f2434f519a3eef9134f49397aff207cf8a73d70" Dec 09 12:47:49 crc kubenswrapper[4970]: E1209 12:47:49.814032 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:47:49 crc kubenswrapper[4970]: E1209 12:47:49.816907 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:47:57 crc kubenswrapper[4970]: E1209 12:47:57.822414 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:48:01 crc kubenswrapper[4970]: E1209 12:48:01.815200 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:48:02 crc kubenswrapper[4970]: I1209 12:48:02.818900 4970 scope.go:117] "RemoveContainer" containerID="be63a7b3e0b75073001cb5f12f2434f519a3eef9134f49397aff207cf8a73d70" Dec 09 12:48:02 crc kubenswrapper[4970]: E1209 12:48:02.819868 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:48:09 crc kubenswrapper[4970]: E1209 12:48:09.816044 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:48:13 crc kubenswrapper[4970]: E1209 12:48:13.817056 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:48:15 crc kubenswrapper[4970]: I1209 12:48:15.814532 4970 scope.go:117] "RemoveContainer" containerID="be63a7b3e0b75073001cb5f12f2434f519a3eef9134f49397aff207cf8a73d70" Dec 09 12:48:15 crc kubenswrapper[4970]: E1209 12:48:15.815789 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:48:20 crc kubenswrapper[4970]: I1209 12:48:20.815726 4970 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 12:48:20 crc kubenswrapper[4970]: E1209 12:48:20.939955 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 12:48:20 crc kubenswrapper[4970]: E1209 12:48:20.940370 4970 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 12:48:20 crc kubenswrapper[4970]: E1209 12:48:20.940710 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w5gl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-snjvx_openstack(cda5b8c0-51ad-4a63-bf1d-e3546f098ad3): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 12:48:20 crc kubenswrapper[4970]: E1209 12:48:20.942073 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:48:24 crc kubenswrapper[4970]: E1209 12:48:24.818550 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:48:28 crc kubenswrapper[4970]: I1209 12:48:28.813222 4970 scope.go:117] "RemoveContainer" containerID="be63a7b3e0b75073001cb5f12f2434f519a3eef9134f49397aff207cf8a73d70" Dec 09 12:48:28 crc kubenswrapper[4970]: E1209 12:48:28.813850 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:48:32 crc kubenswrapper[4970]: E1209 12:48:32.815138 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:48:35 crc kubenswrapper[4970]: E1209 12:48:35.960592 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 12:48:35 crc kubenswrapper[4970]: E1209 12:48:35.961002 4970 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 12:48:35 crc kubenswrapper[4970]: E1209 12:48:35.961376 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n679h5b7h5f8h695h656h57bh5c7h7bh5c5hc4h5hbch654h5h5b7h54dh5fh688h548h57bh6fh54fh5d4hf7h5ffh54ch566h565hdchb4h65dh5fbq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqqs9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ea52f6b9-599e-4ac5-94c6-79949c705be8): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 12:48:35 crc kubenswrapper[4970]: E1209 12:48:35.962932 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:48:39 crc kubenswrapper[4970]: I1209 12:48:39.812304 4970 scope.go:117] "RemoveContainer" containerID="be63a7b3e0b75073001cb5f12f2434f519a3eef9134f49397aff207cf8a73d70" Dec 09 12:48:39 crc kubenswrapper[4970]: E1209 12:48:39.813200 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:48:46 crc kubenswrapper[4970]: E1209 12:48:46.814290 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:48:46 crc kubenswrapper[4970]: E1209 12:48:46.815438 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:48:52 crc kubenswrapper[4970]: I1209 12:48:52.813974 4970 scope.go:117] "RemoveContainer" containerID="be63a7b3e0b75073001cb5f12f2434f519a3eef9134f49397aff207cf8a73d70" Dec 09 12:48:53 crc kubenswrapper[4970]: I1209 12:48:53.635846 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerStarted","Data":"cb7e8adc348c4ec07e615d344d935f9200b7a44c97bbc9039ef571da3563f276"} Dec 09 12:48:58 crc kubenswrapper[4970]: E1209 12:48:58.817886 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:49:00 crc kubenswrapper[4970]: E1209 12:49:00.815920 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:49:09 crc kubenswrapper[4970]: E1209 12:49:09.816435 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:49:12 crc kubenswrapper[4970]: E1209 12:49:12.815528 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:49:23 crc kubenswrapper[4970]: E1209 12:49:23.815408 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:49:26 crc kubenswrapper[4970]: E1209 12:49:26.815047 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:49:35 crc kubenswrapper[4970]: E1209 12:49:35.816169 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:49:39 crc kubenswrapper[4970]: E1209 12:49:39.815213 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:49:47 crc kubenswrapper[4970]: E1209 12:49:47.824731 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:49:53 crc kubenswrapper[4970]: E1209 12:49:53.814624 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:50:01 crc kubenswrapper[4970]: E1209 12:50:01.816139 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:50:06 crc kubenswrapper[4970]: E1209 12:50:06.815066 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:50:12 crc kubenswrapper[4970]: E1209 12:50:12.815985 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:50:21 crc kubenswrapper[4970]: E1209 12:50:21.814801 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:50:23 crc kubenswrapper[4970]: E1209 12:50:23.818386 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:50:36 crc kubenswrapper[4970]: E1209 12:50:36.814743 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:50:37 crc kubenswrapper[4970]: E1209 12:50:37.824007 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:50:49 crc kubenswrapper[4970]: E1209 12:50:49.815591 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:50:52 crc kubenswrapper[4970]: E1209 12:50:52.817113 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:51:00 crc kubenswrapper[4970]: I1209 12:51:00.074337 4970 generic.go:334] "Generic (PLEG): container finished" podID="875c14b9-3ae4-43ed-b83f-78088737e656" containerID="930b056c745e2c441a4f487e138a394bf8167eac02c41d0b6805133ccda6cea2" exitCode=2 Dec 09 12:51:00 crc kubenswrapper[4970]: I1209 12:51:00.074421 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vvp2x" event={"ID":"875c14b9-3ae4-43ed-b83f-78088737e656","Type":"ContainerDied","Data":"930b056c745e2c441a4f487e138a394bf8167eac02c41d0b6805133ccda6cea2"} Dec 09 12:51:01 crc kubenswrapper[4970]: I1209 12:51:01.587665 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vvp2x" Dec 09 12:51:01 crc kubenswrapper[4970]: I1209 12:51:01.703140 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2j6gg\" (UniqueName: \"kubernetes.io/projected/875c14b9-3ae4-43ed-b83f-78088737e656-kube-api-access-2j6gg\") pod \"875c14b9-3ae4-43ed-b83f-78088737e656\" (UID: \"875c14b9-3ae4-43ed-b83f-78088737e656\") " Dec 09 12:51:01 crc kubenswrapper[4970]: I1209 12:51:01.703571 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/875c14b9-3ae4-43ed-b83f-78088737e656-ssh-key\") pod \"875c14b9-3ae4-43ed-b83f-78088737e656\" (UID: \"875c14b9-3ae4-43ed-b83f-78088737e656\") " Dec 09 12:51:01 crc kubenswrapper[4970]: I1209 12:51:01.703617 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/875c14b9-3ae4-43ed-b83f-78088737e656-inventory\") pod \"875c14b9-3ae4-43ed-b83f-78088737e656\" (UID: \"875c14b9-3ae4-43ed-b83f-78088737e656\") " Dec 09 12:51:01 crc kubenswrapper[4970]: I1209 12:51:01.712507 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/875c14b9-3ae4-43ed-b83f-78088737e656-kube-api-access-2j6gg" (OuterVolumeSpecName: "kube-api-access-2j6gg") pod "875c14b9-3ae4-43ed-b83f-78088737e656" (UID: "875c14b9-3ae4-43ed-b83f-78088737e656"). InnerVolumeSpecName "kube-api-access-2j6gg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:51:01 crc kubenswrapper[4970]: I1209 12:51:01.739589 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/875c14b9-3ae4-43ed-b83f-78088737e656-inventory" (OuterVolumeSpecName: "inventory") pod "875c14b9-3ae4-43ed-b83f-78088737e656" (UID: "875c14b9-3ae4-43ed-b83f-78088737e656"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:51:01 crc kubenswrapper[4970]: I1209 12:51:01.769987 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/875c14b9-3ae4-43ed-b83f-78088737e656-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "875c14b9-3ae4-43ed-b83f-78088737e656" (UID: "875c14b9-3ae4-43ed-b83f-78088737e656"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 12:51:01 crc kubenswrapper[4970]: I1209 12:51:01.807466 4970 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/875c14b9-3ae4-43ed-b83f-78088737e656-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 12:51:01 crc kubenswrapper[4970]: I1209 12:51:01.808033 4970 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/875c14b9-3ae4-43ed-b83f-78088737e656-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 12:51:01 crc kubenswrapper[4970]: I1209 12:51:01.808118 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2j6gg\" (UniqueName: \"kubernetes.io/projected/875c14b9-3ae4-43ed-b83f-78088737e656-kube-api-access-2j6gg\") on node \"crc\" DevicePath \"\"" Dec 09 12:51:01 crc kubenswrapper[4970]: E1209 12:51:01.816982 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:51:02 crc kubenswrapper[4970]: I1209 12:51:02.098365 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vvp2x" event={"ID":"875c14b9-3ae4-43ed-b83f-78088737e656","Type":"ContainerDied","Data":"820ce717472458771389d23a1ccd1013a623716072fac8106c63f2c6860aa9c6"} Dec 09 12:51:02 crc kubenswrapper[4970]: I1209 12:51:02.098625 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="820ce717472458771389d23a1ccd1013a623716072fac8106c63f2c6860aa9c6" Dec 09 12:51:02 crc kubenswrapper[4970]: I1209 12:51:02.098476 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-vvp2x" Dec 09 12:51:06 crc kubenswrapper[4970]: E1209 12:51:06.816013 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:51:12 crc kubenswrapper[4970]: E1209 12:51:12.815719 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:51:16 crc kubenswrapper[4970]: I1209 12:51:16.010867 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:51:16 crc kubenswrapper[4970]: I1209 12:51:16.011220 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:51:18 crc kubenswrapper[4970]: E1209 12:51:18.816365 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:51:19 crc kubenswrapper[4970]: I1209 12:51:19.044966 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b72h6"] Dec 09 12:51:19 crc kubenswrapper[4970]: E1209 12:51:19.045629 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="875c14b9-3ae4-43ed-b83f-78088737e656" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 12:51:19 crc kubenswrapper[4970]: I1209 12:51:19.045652 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="875c14b9-3ae4-43ed-b83f-78088737e656" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 12:51:19 crc kubenswrapper[4970]: E1209 12:51:19.045722 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d22e9d18-c6b7-4083-8870-0aec294dc268" containerName="collect-profiles" Dec 09 12:51:19 crc kubenswrapper[4970]: I1209 12:51:19.045731 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="d22e9d18-c6b7-4083-8870-0aec294dc268" containerName="collect-profiles" Dec 09 12:51:19 crc kubenswrapper[4970]: I1209 12:51:19.046014 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="875c14b9-3ae4-43ed-b83f-78088737e656" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 12:51:19 crc kubenswrapper[4970]: I1209 12:51:19.046059 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="d22e9d18-c6b7-4083-8870-0aec294dc268" 
containerName="collect-profiles" Dec 09 12:51:19 crc kubenswrapper[4970]: I1209 12:51:19.047139 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b72h6" Dec 09 12:51:19 crc kubenswrapper[4970]: I1209 12:51:19.052009 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2x2z5" Dec 09 12:51:19 crc kubenswrapper[4970]: I1209 12:51:19.052886 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 12:51:19 crc kubenswrapper[4970]: I1209 12:51:19.053045 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 12:51:19 crc kubenswrapper[4970]: I1209 12:51:19.053271 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 12:51:19 crc kubenswrapper[4970]: I1209 12:51:19.061866 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b72h6"] Dec 09 12:51:19 crc kubenswrapper[4970]: I1209 12:51:19.092486 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h276l\" (UniqueName: \"kubernetes.io/projected/ab4e637c-e74e-4e8b-9d81-98eadd755fc3-kube-api-access-h276l\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-b72h6\" (UID: \"ab4e637c-e74e-4e8b-9d81-98eadd755fc3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b72h6" Dec 09 12:51:19 crc kubenswrapper[4970]: I1209 12:51:19.092690 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ab4e637c-e74e-4e8b-9d81-98eadd755fc3-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-b72h6\" (UID: \"ab4e637c-e74e-4e8b-9d81-98eadd755fc3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b72h6" Dec 09 12:51:19 crc kubenswrapper[4970]: I1209 12:51:19.092751 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab4e637c-e74e-4e8b-9d81-98eadd755fc3-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-b72h6\" (UID: \"ab4e637c-e74e-4e8b-9d81-98eadd755fc3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b72h6" Dec 09 12:51:19 crc kubenswrapper[4970]: I1209 12:51:19.195621 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h276l\" (UniqueName: \"kubernetes.io/projected/ab4e637c-e74e-4e8b-9d81-98eadd755fc3-kube-api-access-h276l\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-b72h6\" (UID: \"ab4e637c-e74e-4e8b-9d81-98eadd755fc3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b72h6" Dec 09 12:51:19 crc kubenswrapper[4970]: I1209 12:51:19.195890 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ab4e637c-e74e-4e8b-9d81-98eadd755fc3-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-b72h6\" (UID: \"ab4e637c-e74e-4e8b-9d81-98eadd755fc3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b72h6" Dec 09 12:51:19 crc kubenswrapper[4970]: I1209 12:51:19.195935 4970 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab4e637c-e74e-4e8b-9d81-98eadd755fc3-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-b72h6\" (UID: \"ab4e637c-e74e-4e8b-9d81-98eadd755fc3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b72h6" Dec 09 12:51:19 crc kubenswrapper[4970]: I1209 12:51:19.201952 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ab4e637c-e74e-4e8b-9d81-98eadd755fc3-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-b72h6\" (UID: \"ab4e637c-e74e-4e8b-9d81-98eadd755fc3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b72h6" Dec 09 12:51:19 crc kubenswrapper[4970]: I1209 12:51:19.202367 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab4e637c-e74e-4e8b-9d81-98eadd755fc3-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-b72h6\" (UID: \"ab4e637c-e74e-4e8b-9d81-98eadd755fc3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b72h6" Dec 09 12:51:19 crc kubenswrapper[4970]: I1209 12:51:19.221364 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h276l\" (UniqueName: \"kubernetes.io/projected/ab4e637c-e74e-4e8b-9d81-98eadd755fc3-kube-api-access-h276l\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-b72h6\" (UID: \"ab4e637c-e74e-4e8b-9d81-98eadd755fc3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b72h6" Dec 09 12:51:19 crc kubenswrapper[4970]: I1209 12:51:19.386529 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b72h6" Dec 09 12:51:19 crc kubenswrapper[4970]: I1209 12:51:19.983992 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b72h6"] Dec 09 12:51:20 crc kubenswrapper[4970]: I1209 12:51:20.406811 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b72h6" event={"ID":"ab4e637c-e74e-4e8b-9d81-98eadd755fc3","Type":"ContainerStarted","Data":"efb560909923111f93983076c6794effff58d758cf3eaddda699a8c2cfb46fd8"} Dec 09 12:51:21 crc kubenswrapper[4970]: I1209 12:51:21.419483 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b72h6" event={"ID":"ab4e637c-e74e-4e8b-9d81-98eadd755fc3","Type":"ContainerStarted","Data":"a7a45005ba387247d6e88ec9a67ade769da19da35f3f6b45239334ccb2f1c523"} Dec 09 12:51:21 crc kubenswrapper[4970]: I1209 12:51:21.446967 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b72h6" podStartSLOduration=1.901869119 podStartE2EDuration="2.44694919s" podCreationTimestamp="2025-12-09 12:51:19 +0000 UTC" firstStartedPulling="2025-12-09 12:51:19.993435629 +0000 UTC m=+2692.553916680" lastFinishedPulling="2025-12-09 12:51:20.5385157 +0000 UTC m=+2693.098996751" observedRunningTime="2025-12-09 12:51:21.436731935 +0000 UTC m=+2693.997213016" watchObservedRunningTime="2025-12-09 12:51:21.44694919 +0000 UTC m=+2694.007430241" Dec 09 12:51:27 crc kubenswrapper[4970]: E1209 12:51:27.838222 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:51:30 crc kubenswrapper[4970]: E1209 12:51:30.813970 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:51:38 crc kubenswrapper[4970]: E1209 12:51:38.816493 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:51:44 crc kubenswrapper[4970]: E1209 12:51:44.815878 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:51:46 crc kubenswrapper[4970]: I1209 12:51:46.010701 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:51:46 crc kubenswrapper[4970]: I1209 12:51:46.011089 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:51:50 crc kubenswrapper[4970]: E1209 12:51:50.814844 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:51:58 crc kubenswrapper[4970]: E1209 12:51:58.816324 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:52:00 crc kubenswrapper[4970]: I1209 12:52:00.142760 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s6m9g"] Dec 09 12:52:00 crc kubenswrapper[4970]: I1209 12:52:00.155943 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s6m9g" Dec 09 12:52:00 crc kubenswrapper[4970]: I1209 12:52:00.175560 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s6m9g"] Dec 09 12:52:00 crc kubenswrapper[4970]: I1209 12:52:00.306666 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1bc7a39-c850-4ede-bf9f-a5d184a607a3-utilities\") pod \"redhat-operators-s6m9g\" (UID: \"d1bc7a39-c850-4ede-bf9f-a5d184a607a3\") " pod="openshift-marketplace/redhat-operators-s6m9g" Dec 09 12:52:00 crc kubenswrapper[4970]: I1209 12:52:00.306848 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1bc7a39-c850-4ede-bf9f-a5d184a607a3-catalog-content\") pod \"redhat-operators-s6m9g\" (UID: \"d1bc7a39-c850-4ede-bf9f-a5d184a607a3\") " pod="openshift-marketplace/redhat-operators-s6m9g" Dec 09 12:52:00 crc kubenswrapper[4970]: I1209 12:52:00.306891 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjlxp\" (UniqueName: \"kubernetes.io/projected/d1bc7a39-c850-4ede-bf9f-a5d184a607a3-kube-api-access-bjlxp\") pod \"redhat-operators-s6m9g\" (UID: \"d1bc7a39-c850-4ede-bf9f-a5d184a607a3\") " pod="openshift-marketplace/redhat-operators-s6m9g" Dec 09 12:52:00 crc kubenswrapper[4970]: I1209 12:52:00.408690 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjlxp\" (UniqueName: \"kubernetes.io/projected/d1bc7a39-c850-4ede-bf9f-a5d184a607a3-kube-api-access-bjlxp\") pod \"redhat-operators-s6m9g\" (UID: \"d1bc7a39-c850-4ede-bf9f-a5d184a607a3\") " pod="openshift-marketplace/redhat-operators-s6m9g" Dec 09 12:52:00 crc kubenswrapper[4970]: I1209 12:52:00.409037 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1bc7a39-c850-4ede-bf9f-a5d184a607a3-utilities\") pod \"redhat-operators-s6m9g\" (UID: \"d1bc7a39-c850-4ede-bf9f-a5d184a607a3\") " pod="openshift-marketplace/redhat-operators-s6m9g" Dec 09 12:52:00 crc kubenswrapper[4970]: I1209 12:52:00.409422 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1bc7a39-c850-4ede-bf9f-a5d184a607a3-catalog-content\") pod \"redhat-operators-s6m9g\" (UID: \"d1bc7a39-c850-4ede-bf9f-a5d184a607a3\") " pod="openshift-marketplace/redhat-operators-s6m9g" Dec 09 12:52:00 crc kubenswrapper[4970]: I1209 12:52:00.409594 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1bc7a39-c850-4ede-bf9f-a5d184a607a3-utilities\") pod \"redhat-operators-s6m9g\" (UID: \"d1bc7a39-c850-4ede-bf9f-a5d184a607a3\") " pod="openshift-marketplace/redhat-operators-s6m9g" Dec 09 12:52:00 crc kubenswrapper[4970]: I1209 12:52:00.409944 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1bc7a39-c850-4ede-bf9f-a5d184a607a3-catalog-content\") pod \"redhat-operators-s6m9g\" (UID: \"d1bc7a39-c850-4ede-bf9f-a5d184a607a3\") " pod="openshift-marketplace/redhat-operators-s6m9g" Dec 09 12:52:00 crc kubenswrapper[4970]: I1209 12:52:00.437065 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-bjlxp\" (UniqueName: \"kubernetes.io/projected/d1bc7a39-c850-4ede-bf9f-a5d184a607a3-kube-api-access-bjlxp\") pod \"redhat-operators-s6m9g\" (UID: \"d1bc7a39-c850-4ede-bf9f-a5d184a607a3\") " pod="openshift-marketplace/redhat-operators-s6m9g" Dec 09 12:52:00 crc kubenswrapper[4970]: I1209 12:52:00.492790 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s6m9g" Dec 09 12:52:01 crc kubenswrapper[4970]: I1209 12:52:01.042758 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s6m9g"] Dec 09 12:52:01 crc kubenswrapper[4970]: W1209 12:52:01.052475 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1bc7a39_c850_4ede_bf9f_a5d184a607a3.slice/crio-032ca4f2bc465ff26f0800d2b05a59f650e2ca48f224bb4419bdc33ba9538b92 WatchSource:0}: Error finding container 032ca4f2bc465ff26f0800d2b05a59f650e2ca48f224bb4419bdc33ba9538b92: Status 404 returned error can't find the container with id 032ca4f2bc465ff26f0800d2b05a59f650e2ca48f224bb4419bdc33ba9538b92 Dec 09 12:52:01 crc kubenswrapper[4970]: I1209 12:52:01.921419 4970 generic.go:334] "Generic (PLEG): container finished" podID="d1bc7a39-c850-4ede-bf9f-a5d184a607a3" containerID="0be0611f38c3a36890079738fd96ecfba8d3b5745be79cfa85c820f7e62d2c01" exitCode=0 Dec 09 12:52:01 crc kubenswrapper[4970]: I1209 12:52:01.921470 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s6m9g" event={"ID":"d1bc7a39-c850-4ede-bf9f-a5d184a607a3","Type":"ContainerDied","Data":"0be0611f38c3a36890079738fd96ecfba8d3b5745be79cfa85c820f7e62d2c01"} Dec 09 12:52:01 crc kubenswrapper[4970]: I1209 12:52:01.921503 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s6m9g" event={"ID":"d1bc7a39-c850-4ede-bf9f-a5d184a607a3","Type":"ContainerStarted","Data":"032ca4f2bc465ff26f0800d2b05a59f650e2ca48f224bb4419bdc33ba9538b92"} Dec 09 12:52:02 crc kubenswrapper[4970]: I1209 12:52:02.945334 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s6m9g" event={"ID":"d1bc7a39-c850-4ede-bf9f-a5d184a607a3","Type":"ContainerStarted","Data":"b25a1db874f41ed6c451c81aba9e5af57a334516d3508941a85dc541829d4631"} Dec 09 12:52:05 crc kubenswrapper[4970]: E1209 12:52:05.816560 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:52:06 crc kubenswrapper[4970]: I1209 12:52:06.993125 4970 generic.go:334] "Generic (PLEG): container finished" podID="d1bc7a39-c850-4ede-bf9f-a5d184a607a3" containerID="b25a1db874f41ed6c451c81aba9e5af57a334516d3508941a85dc541829d4631" exitCode=0 Dec 09 12:52:06 crc kubenswrapper[4970]: I1209 12:52:06.993265 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s6m9g" event={"ID":"d1bc7a39-c850-4ede-bf9f-a5d184a607a3","Type":"ContainerDied","Data":"b25a1db874f41ed6c451c81aba9e5af57a334516d3508941a85dc541829d4631"} Dec 09 12:52:08 crc kubenswrapper[4970]: I1209 12:52:08.006941 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s6m9g" 
event={"ID":"d1bc7a39-c850-4ede-bf9f-a5d184a607a3","Type":"ContainerStarted","Data":"aae4c353626a190729cfa81953fd1b29812271d4a7a1fe411a36749a75a5b84b"} Dec 09 12:52:08 crc kubenswrapper[4970]: I1209 12:52:08.049487 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s6m9g" podStartSLOduration=2.593046635 podStartE2EDuration="8.049460139s" podCreationTimestamp="2025-12-09 12:52:00 +0000 UTC" firstStartedPulling="2025-12-09 12:52:01.923333488 +0000 UTC m=+2734.483814539" lastFinishedPulling="2025-12-09 12:52:07.379746992 +0000 UTC m=+2739.940228043" observedRunningTime="2025-12-09 12:52:08.030406815 +0000 UTC m=+2740.590887896" watchObservedRunningTime="2025-12-09 12:52:08.049460139 +0000 UTC m=+2740.609941200" Dec 09 12:52:10 crc kubenswrapper[4970]: I1209 12:52:10.492906 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s6m9g" Dec 09 12:52:10 crc kubenswrapper[4970]: I1209 12:52:10.493485 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s6m9g" Dec 09 12:52:11 crc kubenswrapper[4970]: I1209 12:52:11.604738 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s6m9g" podUID="d1bc7a39-c850-4ede-bf9f-a5d184a607a3" containerName="registry-server" probeResult="failure" output=< Dec 09 12:52:11 crc kubenswrapper[4970]: timeout: failed to connect service ":50051" within 1s Dec 09 12:52:11 crc kubenswrapper[4970]: > Dec 09 12:52:13 crc kubenswrapper[4970]: E1209 12:52:13.821137 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:52:16 crc kubenswrapper[4970]: I1209 12:52:16.011257 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:52:16 crc kubenswrapper[4970]: I1209 12:52:16.011542 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:52:16 crc kubenswrapper[4970]: I1209 12:52:16.011579 4970 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" Dec 09 12:52:16 crc kubenswrapper[4970]: I1209 12:52:16.012506 4970 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cb7e8adc348c4ec07e615d344d935f9200b7a44c97bbc9039ef571da3563f276"} pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 12:52:16 crc kubenswrapper[4970]: I1209 12:52:16.012568 4970 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" containerID="cri-o://cb7e8adc348c4ec07e615d344d935f9200b7a44c97bbc9039ef571da3563f276" gracePeriod=600 Dec 09 12:52:17 crc kubenswrapper[4970]: I1209 12:52:17.116522 4970 generic.go:334] "Generic (PLEG): container finished" podID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerID="cb7e8adc348c4ec07e615d344d935f9200b7a44c97bbc9039ef571da3563f276" exitCode=0 Dec 09 12:52:17 crc kubenswrapper[4970]: I1209 12:52:17.116663 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerDied","Data":"cb7e8adc348c4ec07e615d344d935f9200b7a44c97bbc9039ef571da3563f276"} Dec 09 12:52:17 crc kubenswrapper[4970]: I1209 12:52:17.116950 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerStarted","Data":"0d8c59938602bf49f66f344a5d3fa9b4774e3c875b0d7138e6411bc2c4716f74"} Dec 09 12:52:17 crc kubenswrapper[4970]: I1209 12:52:17.116971 4970 scope.go:117] "RemoveContainer" containerID="be63a7b3e0b75073001cb5f12f2434f519a3eef9134f49397aff207cf8a73d70" Dec 09 12:52:18 crc kubenswrapper[4970]: E1209 12:52:18.817742 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:52:20 crc kubenswrapper[4970]: I1209 12:52:20.569158 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s6m9g" Dec 09 12:52:20 crc kubenswrapper[4970]: I1209 12:52:20.640158 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s6m9g" Dec 09 12:52:20 crc kubenswrapper[4970]: I1209 12:52:20.814718 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s6m9g"] Dec 09 12:52:22 crc kubenswrapper[4970]: I1209 12:52:22.187691 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s6m9g" podUID="d1bc7a39-c850-4ede-bf9f-a5d184a607a3" containerName="registry-server" containerID="cri-o://aae4c353626a190729cfa81953fd1b29812271d4a7a1fe411a36749a75a5b84b" gracePeriod=2 Dec 09 12:52:23 crc kubenswrapper[4970]: I1209 12:52:23.204813 4970 generic.go:334] "Generic (PLEG): container finished" podID="d1bc7a39-c850-4ede-bf9f-a5d184a607a3" containerID="aae4c353626a190729cfa81953fd1b29812271d4a7a1fe411a36749a75a5b84b" exitCode=0 Dec 09 12:52:23 crc kubenswrapper[4970]: I1209 12:52:23.204859 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s6m9g" event={"ID":"d1bc7a39-c850-4ede-bf9f-a5d184a607a3","Type":"ContainerDied","Data":"aae4c353626a190729cfa81953fd1b29812271d4a7a1fe411a36749a75a5b84b"} Dec 09 12:52:23 crc kubenswrapper[4970]: I1209 12:52:23.322403 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s6m9g" Dec 09 12:52:23 crc kubenswrapper[4970]: I1209 12:52:23.424797 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjlxp\" (UniqueName: \"kubernetes.io/projected/d1bc7a39-c850-4ede-bf9f-a5d184a607a3-kube-api-access-bjlxp\") pod \"d1bc7a39-c850-4ede-bf9f-a5d184a607a3\" (UID: \"d1bc7a39-c850-4ede-bf9f-a5d184a607a3\") " Dec 09 12:52:23 crc kubenswrapper[4970]: I1209 12:52:23.424994 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1bc7a39-c850-4ede-bf9f-a5d184a607a3-catalog-content\") pod \"d1bc7a39-c850-4ede-bf9f-a5d184a607a3\" (UID: \"d1bc7a39-c850-4ede-bf9f-a5d184a607a3\") " Dec 09 12:52:23 crc kubenswrapper[4970]: I1209 12:52:23.425060 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1bc7a39-c850-4ede-bf9f-a5d184a607a3-utilities\") pod \"d1bc7a39-c850-4ede-bf9f-a5d184a607a3\" (UID: \"d1bc7a39-c850-4ede-bf9f-a5d184a607a3\") " Dec 09 12:52:23 crc kubenswrapper[4970]: I1209 12:52:23.426269 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1bc7a39-c850-4ede-bf9f-a5d184a607a3-utilities" (OuterVolumeSpecName: "utilities") pod "d1bc7a39-c850-4ede-bf9f-a5d184a607a3" (UID: "d1bc7a39-c850-4ede-bf9f-a5d184a607a3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:52:23 crc kubenswrapper[4970]: I1209 12:52:23.434585 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1bc7a39-c850-4ede-bf9f-a5d184a607a3-kube-api-access-bjlxp" (OuterVolumeSpecName: "kube-api-access-bjlxp") pod "d1bc7a39-c850-4ede-bf9f-a5d184a607a3" (UID: "d1bc7a39-c850-4ede-bf9f-a5d184a607a3"). InnerVolumeSpecName "kube-api-access-bjlxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:52:23 crc kubenswrapper[4970]: I1209 12:52:23.528014 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjlxp\" (UniqueName: \"kubernetes.io/projected/d1bc7a39-c850-4ede-bf9f-a5d184a607a3-kube-api-access-bjlxp\") on node \"crc\" DevicePath \"\"" Dec 09 12:52:23 crc kubenswrapper[4970]: I1209 12:52:23.528047 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1bc7a39-c850-4ede-bf9f-a5d184a607a3-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:52:23 crc kubenswrapper[4970]: I1209 12:52:23.568550 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1bc7a39-c850-4ede-bf9f-a5d184a607a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d1bc7a39-c850-4ede-bf9f-a5d184a607a3" (UID: "d1bc7a39-c850-4ede-bf9f-a5d184a607a3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:52:23 crc kubenswrapper[4970]: I1209 12:52:23.632014 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1bc7a39-c850-4ede-bf9f-a5d184a607a3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:52:24 crc kubenswrapper[4970]: I1209 12:52:24.225350 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s6m9g" event={"ID":"d1bc7a39-c850-4ede-bf9f-a5d184a607a3","Type":"ContainerDied","Data":"032ca4f2bc465ff26f0800d2b05a59f650e2ca48f224bb4419bdc33ba9538b92"} Dec 09 12:52:24 crc kubenswrapper[4970]: I1209 12:52:24.225416 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s6m9g" Dec 09 12:52:24 crc kubenswrapper[4970]: I1209 12:52:24.225734 4970 scope.go:117] "RemoveContainer" containerID="aae4c353626a190729cfa81953fd1b29812271d4a7a1fe411a36749a75a5b84b" Dec 09 12:52:24 crc kubenswrapper[4970]: I1209 12:52:24.262054 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s6m9g"] Dec 09 12:52:24 crc kubenswrapper[4970]: I1209 12:52:24.262187 4970 scope.go:117] "RemoveContainer" containerID="b25a1db874f41ed6c451c81aba9e5af57a334516d3508941a85dc541829d4631" Dec 09 12:52:24 crc kubenswrapper[4970]: I1209 12:52:24.276915 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s6m9g"] Dec 09 12:52:24 crc kubenswrapper[4970]: I1209 12:52:24.319540 4970 scope.go:117] "RemoveContainer" containerID="0be0611f38c3a36890079738fd96ecfba8d3b5745be79cfa85c820f7e62d2c01" Dec 09 12:52:25 crc kubenswrapper[4970]: I1209 12:52:25.836103 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1bc7a39-c850-4ede-bf9f-a5d184a607a3" path="/var/lib/kubelet/pods/d1bc7a39-c850-4ede-bf9f-a5d184a607a3/volumes" Dec 09 12:52:26 crc kubenswrapper[4970]: E1209 12:52:26.814541 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:52:32 crc kubenswrapper[4970]: E1209 12:52:32.815209 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:52:41 crc kubenswrapper[4970]: E1209 12:52:41.815024 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:52:43 crc kubenswrapper[4970]: E1209 12:52:43.817514 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" 
podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:52:55 crc kubenswrapper[4970]: E1209 12:52:55.816916 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:52:56 crc kubenswrapper[4970]: E1209 12:52:56.816188 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:53:07 crc kubenswrapper[4970]: E1209 12:53:07.825650 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:53:09 crc kubenswrapper[4970]: E1209 12:53:09.815036 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:53:18 crc kubenswrapper[4970]: E1209 12:53:18.822519 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:53:21 crc kubenswrapper[4970]: I1209 12:53:21.815182 4970 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 12:53:21 crc kubenswrapper[4970]: E1209 12:53:21.936074 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 12:53:21 crc kubenswrapper[4970]: E1209 12:53:21.936132 4970 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 12:53:21 crc kubenswrapper[4970]: E1209 12:53:21.936269 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w5gl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-snjvx_openstack(cda5b8c0-51ad-4a63-bf1d-e3546f098ad3): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 12:53:21 crc kubenswrapper[4970]: E1209 12:53:21.937733 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:53:32 crc kubenswrapper[4970]: I1209 12:53:32.048038 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wgkhn"] Dec 09 12:53:32 crc kubenswrapper[4970]: E1209 12:53:32.053944 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1bc7a39-c850-4ede-bf9f-a5d184a607a3" containerName="extract-content" Dec 09 12:53:32 crc kubenswrapper[4970]: I1209 12:53:32.053991 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1bc7a39-c850-4ede-bf9f-a5d184a607a3" containerName="extract-content" Dec 09 12:53:32 crc kubenswrapper[4970]: E1209 12:53:32.054032 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1bc7a39-c850-4ede-bf9f-a5d184a607a3" containerName="registry-server" Dec 09 12:53:32 crc kubenswrapper[4970]: I1209 12:53:32.054044 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1bc7a39-c850-4ede-bf9f-a5d184a607a3" containerName="registry-server" Dec 09 12:53:32 crc kubenswrapper[4970]: E1209 12:53:32.054075 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1bc7a39-c850-4ede-bf9f-a5d184a607a3" containerName="extract-utilities" Dec 09 12:53:32 crc kubenswrapper[4970]: I1209 12:53:32.054088 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1bc7a39-c850-4ede-bf9f-a5d184a607a3" containerName="extract-utilities" Dec 09 12:53:32 crc kubenswrapper[4970]: I1209 12:53:32.054553 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1bc7a39-c850-4ede-bf9f-a5d184a607a3" containerName="registry-server" Dec 09 12:53:32 crc kubenswrapper[4970]: I1209 12:53:32.057472 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wgkhn" Dec 09 12:53:32 crc kubenswrapper[4970]: I1209 12:53:32.077675 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wgkhn"] Dec 09 12:53:32 crc kubenswrapper[4970]: I1209 12:53:32.173065 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a4b656-ce1b-4619-800c-5162727ba8a8-catalog-content\") pod \"certified-operators-wgkhn\" (UID: \"94a4b656-ce1b-4619-800c-5162727ba8a8\") " pod="openshift-marketplace/certified-operators-wgkhn" Dec 09 12:53:32 crc kubenswrapper[4970]: I1209 12:53:32.173427 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlq6j\" (UniqueName: \"kubernetes.io/projected/94a4b656-ce1b-4619-800c-5162727ba8a8-kube-api-access-nlq6j\") pod \"certified-operators-wgkhn\" (UID: \"94a4b656-ce1b-4619-800c-5162727ba8a8\") " pod="openshift-marketplace/certified-operators-wgkhn" Dec 09 12:53:32 crc kubenswrapper[4970]: I1209 12:53:32.173464 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a4b656-ce1b-4619-800c-5162727ba8a8-utilities\") pod \"certified-operators-wgkhn\" (UID: \"94a4b656-ce1b-4619-800c-5162727ba8a8\") " pod="openshift-marketplace/certified-operators-wgkhn" Dec 09 12:53:32 crc kubenswrapper[4970]: I1209 12:53:32.275749 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a4b656-ce1b-4619-800c-5162727ba8a8-utilities\") pod \"certified-operators-wgkhn\" (UID: \"94a4b656-ce1b-4619-800c-5162727ba8a8\") " pod="openshift-marketplace/certified-operators-wgkhn" Dec 09 12:53:32 crc kubenswrapper[4970]: I1209 12:53:32.276644 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlq6j\" (UniqueName: \"kubernetes.io/projected/94a4b656-ce1b-4619-800c-5162727ba8a8-kube-api-access-nlq6j\") pod \"certified-operators-wgkhn\" (UID: \"94a4b656-ce1b-4619-800c-5162727ba8a8\") " pod="openshift-marketplace/certified-operators-wgkhn" Dec 09 12:53:32 crc kubenswrapper[4970]: I1209 12:53:32.276557 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a4b656-ce1b-4619-800c-5162727ba8a8-utilities\") pod \"certified-operators-wgkhn\" (UID: \"94a4b656-ce1b-4619-800c-5162727ba8a8\") " pod="openshift-marketplace/certified-operators-wgkhn" Dec 09 12:53:32 crc kubenswrapper[4970]: I1209 12:53:32.277933 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a4b656-ce1b-4619-800c-5162727ba8a8-catalog-content\") pod \"certified-operators-wgkhn\" (UID: \"94a4b656-ce1b-4619-800c-5162727ba8a8\") " pod="openshift-marketplace/certified-operators-wgkhn" Dec 09 12:53:32 crc kubenswrapper[4970]: I1209 12:53:32.278440 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a4b656-ce1b-4619-800c-5162727ba8a8-catalog-content\") pod \"certified-operators-wgkhn\" (UID: \"94a4b656-ce1b-4619-800c-5162727ba8a8\") " pod="openshift-marketplace/certified-operators-wgkhn" Dec 09 12:53:32 crc kubenswrapper[4970]: I1209 12:53:32.297629 4970 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nlq6j\" (UniqueName: \"kubernetes.io/projected/94a4b656-ce1b-4619-800c-5162727ba8a8-kube-api-access-nlq6j\") pod \"certified-operators-wgkhn\" (UID: \"94a4b656-ce1b-4619-800c-5162727ba8a8\") " pod="openshift-marketplace/certified-operators-wgkhn" Dec 09 12:53:32 crc kubenswrapper[4970]: I1209 12:53:32.408830 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wgkhn" Dec 09 12:53:32 crc kubenswrapper[4970]: E1209 12:53:32.814404 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:53:32 crc kubenswrapper[4970]: I1209 12:53:32.944685 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wgkhn"] Dec 09 12:53:33 crc kubenswrapper[4970]: I1209 12:53:33.143352 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wgkhn" event={"ID":"94a4b656-ce1b-4619-800c-5162727ba8a8","Type":"ContainerStarted","Data":"ca36f2eecb63b6b6b093bc03c57570b19d4e68c960afb04c3cb7ce0f2c75e917"} Dec 09 12:53:33 crc kubenswrapper[4970]: E1209 12:53:33.815132 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:53:34 crc kubenswrapper[4970]: I1209 12:53:34.162719 4970 generic.go:334] "Generic (PLEG): container finished" podID="94a4b656-ce1b-4619-800c-5162727ba8a8" containerID="3aefe4660de8c8801dbfc6de9a29c7a3b880024ef61a38fd0d34f1214e755795" exitCode=0 Dec 09 12:53:34 crc kubenswrapper[4970]: I1209 12:53:34.162838 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wgkhn" event={"ID":"94a4b656-ce1b-4619-800c-5162727ba8a8","Type":"ContainerDied","Data":"3aefe4660de8c8801dbfc6de9a29c7a3b880024ef61a38fd0d34f1214e755795"} Dec 09 12:53:35 crc kubenswrapper[4970]: I1209 12:53:35.176463 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wgkhn" event={"ID":"94a4b656-ce1b-4619-800c-5162727ba8a8","Type":"ContainerStarted","Data":"c61b3e6bd8449c97c13621ee061f0d2a9b78a535441c8e5fb0ddfd4d7a4bcd92"} Dec 09 12:53:36 crc kubenswrapper[4970]: I1209 12:53:36.192072 4970 generic.go:334] "Generic (PLEG): container finished" podID="94a4b656-ce1b-4619-800c-5162727ba8a8" containerID="c61b3e6bd8449c97c13621ee061f0d2a9b78a535441c8e5fb0ddfd4d7a4bcd92" exitCode=0 Dec 09 12:53:36 crc kubenswrapper[4970]: I1209 12:53:36.192126 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wgkhn" event={"ID":"94a4b656-ce1b-4619-800c-5162727ba8a8","Type":"ContainerDied","Data":"c61b3e6bd8449c97c13621ee061f0d2a9b78a535441c8e5fb0ddfd4d7a4bcd92"} Dec 09 12:53:37 crc kubenswrapper[4970]: I1209 12:53:37.208100 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wgkhn" 
event={"ID":"94a4b656-ce1b-4619-800c-5162727ba8a8","Type":"ContainerStarted","Data":"2979bd1736832fd88369fe6fa0774d5a89365b127d29e506b0f004a7c4894a6d"} Dec 09 12:53:37 crc kubenswrapper[4970]: I1209 12:53:37.234915 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wgkhn" podStartSLOduration=2.802583801 podStartE2EDuration="5.234896933s" podCreationTimestamp="2025-12-09 12:53:32 +0000 UTC" firstStartedPulling="2025-12-09 12:53:34.165584128 +0000 UTC m=+2826.726065189" lastFinishedPulling="2025-12-09 12:53:36.59789728 +0000 UTC m=+2829.158378321" observedRunningTime="2025-12-09 12:53:37.22775002 +0000 UTC m=+2829.788231071" watchObservedRunningTime="2025-12-09 12:53:37.234896933 +0000 UTC m=+2829.795377984" Dec 09 12:53:42 crc kubenswrapper[4970]: I1209 12:53:42.410643 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wgkhn" Dec 09 12:53:42 crc kubenswrapper[4970]: I1209 12:53:42.412372 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wgkhn" Dec 09 12:53:42 crc kubenswrapper[4970]: I1209 12:53:42.470303 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wgkhn" Dec 09 12:53:43 crc kubenswrapper[4970]: I1209 12:53:43.347039 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wgkhn" Dec 09 12:53:43 crc kubenswrapper[4970]: I1209 12:53:43.418646 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wgkhn"] Dec 09 12:53:43 crc kubenswrapper[4970]: E1209 12:53:43.824576 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:53:45 crc kubenswrapper[4970]: I1209 12:53:45.303239 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wgkhn" podUID="94a4b656-ce1b-4619-800c-5162727ba8a8" containerName="registry-server" containerID="cri-o://2979bd1736832fd88369fe6fa0774d5a89365b127d29e506b0f004a7c4894a6d" gracePeriod=2 Dec 09 12:53:46 crc kubenswrapper[4970]: I1209 12:53:46.316614 4970 generic.go:334] "Generic (PLEG): container finished" podID="94a4b656-ce1b-4619-800c-5162727ba8a8" containerID="2979bd1736832fd88369fe6fa0774d5a89365b127d29e506b0f004a7c4894a6d" exitCode=0 Dec 09 12:53:46 crc kubenswrapper[4970]: I1209 12:53:46.316709 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wgkhn" event={"ID":"94a4b656-ce1b-4619-800c-5162727ba8a8","Type":"ContainerDied","Data":"2979bd1736832fd88369fe6fa0774d5a89365b127d29e506b0f004a7c4894a6d"} Dec 09 12:53:46 crc kubenswrapper[4970]: I1209 12:53:46.967770 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wgkhn" Dec 09 12:53:46 crc kubenswrapper[4970]: E1209 12:53:46.971036 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 12:53:46 crc kubenswrapper[4970]: E1209 12:53:46.971094 4970 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 12:53:46 crc kubenswrapper[4970]: E1209 12:53:46.971220 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n679h5b7h5f8h695h656h57bh5c7h7bh5c5hc4h5hbch654h5h5b7h54dh5fh688h548h57bh6fh54fh5d4hf7h5ffh54ch566h565hdchb4h65dh5fbq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqqs9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ea52f6b9-599e-4ac5-94c6-79949c705be8): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 12:53:46 crc kubenswrapper[4970]: E1209 12:53:46.972390 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:53:46 crc kubenswrapper[4970]: I1209 12:53:46.979351 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a4b656-ce1b-4619-800c-5162727ba8a8-utilities\") pod \"94a4b656-ce1b-4619-800c-5162727ba8a8\" (UID: \"94a4b656-ce1b-4619-800c-5162727ba8a8\") " Dec 09 12:53:46 crc kubenswrapper[4970]: I1209 12:53:46.979655 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlq6j\" (UniqueName: \"kubernetes.io/projected/94a4b656-ce1b-4619-800c-5162727ba8a8-kube-api-access-nlq6j\") pod \"94a4b656-ce1b-4619-800c-5162727ba8a8\" (UID: \"94a4b656-ce1b-4619-800c-5162727ba8a8\") " Dec 09 12:53:46 crc kubenswrapper[4970]: I1209 12:53:46.979849 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a4b656-ce1b-4619-800c-5162727ba8a8-catalog-content\") pod \"94a4b656-ce1b-4619-800c-5162727ba8a8\" (UID: \"94a4b656-ce1b-4619-800c-5162727ba8a8\") " Dec 09 12:53:46 crc kubenswrapper[4970]: I1209 12:53:46.980376 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a4b656-ce1b-4619-800c-5162727ba8a8-utilities" (OuterVolumeSpecName: "utilities") pod "94a4b656-ce1b-4619-800c-5162727ba8a8" (UID: "94a4b656-ce1b-4619-800c-5162727ba8a8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:53:46 crc kubenswrapper[4970]: I1209 12:53:46.981391 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a4b656-ce1b-4619-800c-5162727ba8a8-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:53:46 crc kubenswrapper[4970]: I1209 12:53:46.988973 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a4b656-ce1b-4619-800c-5162727ba8a8-kube-api-access-nlq6j" (OuterVolumeSpecName: "kube-api-access-nlq6j") pod "94a4b656-ce1b-4619-800c-5162727ba8a8" (UID: "94a4b656-ce1b-4619-800c-5162727ba8a8"). InnerVolumeSpecName "kube-api-access-nlq6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:53:47 crc kubenswrapper[4970]: I1209 12:53:47.042770 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a4b656-ce1b-4619-800c-5162727ba8a8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a4b656-ce1b-4619-800c-5162727ba8a8" (UID: "94a4b656-ce1b-4619-800c-5162727ba8a8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:53:47 crc kubenswrapper[4970]: I1209 12:53:47.082523 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlq6j\" (UniqueName: \"kubernetes.io/projected/94a4b656-ce1b-4619-800c-5162727ba8a8-kube-api-access-nlq6j\") on node \"crc\" DevicePath \"\"" Dec 09 12:53:47 crc kubenswrapper[4970]: I1209 12:53:47.082560 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a4b656-ce1b-4619-800c-5162727ba8a8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:53:47 crc kubenswrapper[4970]: I1209 12:53:47.331020 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wgkhn" event={"ID":"94a4b656-ce1b-4619-800c-5162727ba8a8","Type":"ContainerDied","Data":"ca36f2eecb63b6b6b093bc03c57570b19d4e68c960afb04c3cb7ce0f2c75e917"} Dec 09 12:53:47 crc kubenswrapper[4970]: I1209 12:53:47.331071 4970 scope.go:117] "RemoveContainer" containerID="2979bd1736832fd88369fe6fa0774d5a89365b127d29e506b0f004a7c4894a6d" Dec 09 12:53:47 crc kubenswrapper[4970]: I1209 12:53:47.331070 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wgkhn" Dec 09 12:53:47 crc kubenswrapper[4970]: I1209 12:53:47.362304 4970 scope.go:117] "RemoveContainer" containerID="c61b3e6bd8449c97c13621ee061f0d2a9b78a535441c8e5fb0ddfd4d7a4bcd92" Dec 09 12:53:47 crc kubenswrapper[4970]: I1209 12:53:47.373976 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wgkhn"] Dec 09 12:53:47 crc kubenswrapper[4970]: I1209 12:53:47.386915 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wgkhn"] Dec 09 12:53:47 crc kubenswrapper[4970]: I1209 12:53:47.416623 4970 scope.go:117] "RemoveContainer" containerID="3aefe4660de8c8801dbfc6de9a29c7a3b880024ef61a38fd0d34f1214e755795" Dec 09 12:53:47 crc kubenswrapper[4970]: I1209 12:53:47.830431 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a4b656-ce1b-4619-800c-5162727ba8a8" path="/var/lib/kubelet/pods/94a4b656-ce1b-4619-800c-5162727ba8a8/volumes" Dec 09 12:53:56 crc kubenswrapper[4970]: E1209 12:53:56.818694 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:53:58 crc kubenswrapper[4970]: E1209 12:53:58.817126 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:54:09 crc kubenswrapper[4970]: E1209 12:54:09.816591 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:54:12 crc kubenswrapper[4970]: E1209 12:54:12.818324 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:54:16 crc kubenswrapper[4970]: I1209 12:54:16.010940 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:54:16 crc kubenswrapper[4970]: I1209 12:54:16.011668 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:54:23 crc kubenswrapper[4970]: E1209 12:54:23.814625 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:54:26 crc kubenswrapper[4970]: E1209 12:54:26.814779 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:54:36 crc kubenswrapper[4970]: I1209 12:54:36.986536 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mkk85"] Dec 09 12:54:36 crc kubenswrapper[4970]: E1209 12:54:36.987882 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94a4b656-ce1b-4619-800c-5162727ba8a8" containerName="extract-content" Dec 09 12:54:36 crc kubenswrapper[4970]: I1209 12:54:36.987896 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="94a4b656-ce1b-4619-800c-5162727ba8a8" containerName="extract-content" Dec 09 12:54:36 crc kubenswrapper[4970]: E1209 12:54:36.987936 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94a4b656-ce1b-4619-800c-5162727ba8a8" containerName="extract-utilities" Dec 09 12:54:36 crc kubenswrapper[4970]: I1209 12:54:36.987943 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="94a4b656-ce1b-4619-800c-5162727ba8a8" containerName="extract-utilities" Dec 09 12:54:36 crc kubenswrapper[4970]: E1209 12:54:36.987966 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94a4b656-ce1b-4619-800c-5162727ba8a8" containerName="registry-server" Dec 09 12:54:36 crc kubenswrapper[4970]: I1209 12:54:36.987972 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="94a4b656-ce1b-4619-800c-5162727ba8a8" containerName="registry-server" Dec 09 12:54:36 crc kubenswrapper[4970]: I1209 12:54:36.988169 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="94a4b656-ce1b-4619-800c-5162727ba8a8" containerName="registry-server" Dec 09 12:54:36 crc kubenswrapper[4970]: I1209 12:54:36.989773 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mkk85" Dec 09 12:54:37 crc kubenswrapper[4970]: I1209 12:54:37.003022 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mkk85"] Dec 09 12:54:37 crc kubenswrapper[4970]: I1209 12:54:37.028842 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c-catalog-content\") pod \"community-operators-mkk85\" (UID: \"f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c\") " pod="openshift-marketplace/community-operators-mkk85" Dec 09 12:54:37 crc kubenswrapper[4970]: I1209 12:54:37.028943 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c-utilities\") pod \"community-operators-mkk85\" (UID: \"f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c\") " pod="openshift-marketplace/community-operators-mkk85" Dec 09 12:54:37 crc kubenswrapper[4970]: I1209 12:54:37.028996 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrgx9\" (UniqueName: \"kubernetes.io/projected/f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c-kube-api-access-qrgx9\") pod \"community-operators-mkk85\" (UID: \"f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c\") " pod="openshift-marketplace/community-operators-mkk85" Dec 09 12:54:37 crc kubenswrapper[4970]: I1209 12:54:37.131328 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c-utilities\") pod \"community-operators-mkk85\" (UID: \"f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c\") " pod="openshift-marketplace/community-operators-mkk85" Dec 09 12:54:37 crc kubenswrapper[4970]: I1209 12:54:37.131435 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrgx9\" (UniqueName: \"kubernetes.io/projected/f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c-kube-api-access-qrgx9\") pod \"community-operators-mkk85\" (UID: \"f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c\") " pod="openshift-marketplace/community-operators-mkk85" Dec 09 12:54:37 crc kubenswrapper[4970]: I1209 12:54:37.131634 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c-catalog-content\") pod \"community-operators-mkk85\" (UID: \"f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c\") " pod="openshift-marketplace/community-operators-mkk85" Dec 09 12:54:37 crc kubenswrapper[4970]: I1209 12:54:37.131985 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c-utilities\") pod \"community-operators-mkk85\" (UID: \"f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c\") " pod="openshift-marketplace/community-operators-mkk85" Dec 09 12:54:37 crc kubenswrapper[4970]: I1209 12:54:37.132064 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c-catalog-content\") pod \"community-operators-mkk85\" (UID: \"f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c\") " pod="openshift-marketplace/community-operators-mkk85" Dec 09 12:54:37 crc kubenswrapper[4970]: I1209 12:54:37.154920 4970 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qrgx9\" (UniqueName: \"kubernetes.io/projected/f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c-kube-api-access-qrgx9\") pod \"community-operators-mkk85\" (UID: \"f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c\") " pod="openshift-marketplace/community-operators-mkk85" Dec 09 12:54:37 crc kubenswrapper[4970]: I1209 12:54:37.309633 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mkk85" Dec 09 12:54:37 crc kubenswrapper[4970]: I1209 12:54:37.854968 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mkk85"] Dec 09 12:54:38 crc kubenswrapper[4970]: I1209 12:54:38.019988 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkk85" event={"ID":"f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c","Type":"ContainerStarted","Data":"b2a621761aaac238dd5e47aff5ed031a06069da0d1fe462ba1239a2a10be762e"} Dec 09 12:54:38 crc kubenswrapper[4970]: E1209 12:54:38.816019 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:54:39 crc kubenswrapper[4970]: I1209 12:54:39.035825 4970 generic.go:334] "Generic (PLEG): container finished" podID="f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c" containerID="134c4e5063bdce50c7514ff8aed30c808801cdf4ff1cf97f8e1adf3eb7e644c6" exitCode=0 Dec 09 12:54:39 crc kubenswrapper[4970]: I1209 12:54:39.035862 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkk85" event={"ID":"f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c","Type":"ContainerDied","Data":"134c4e5063bdce50c7514ff8aed30c808801cdf4ff1cf97f8e1adf3eb7e644c6"} Dec 09 12:54:41 crc kubenswrapper[4970]: I1209 12:54:41.063742 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkk85" event={"ID":"f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c","Type":"ContainerStarted","Data":"d7adf6f64489e8e5c6a791658f5c56c24840b6afb0c1d6981fb8941b99f4a56a"} Dec 09 12:54:41 crc kubenswrapper[4970]: E1209 12:54:41.814256 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:54:42 crc kubenswrapper[4970]: I1209 12:54:42.075733 4970 generic.go:334] "Generic (PLEG): container finished" podID="f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c" containerID="d7adf6f64489e8e5c6a791658f5c56c24840b6afb0c1d6981fb8941b99f4a56a" exitCode=0 Dec 09 12:54:42 crc kubenswrapper[4970]: I1209 12:54:42.075782 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkk85" event={"ID":"f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c","Type":"ContainerDied","Data":"d7adf6f64489e8e5c6a791658f5c56c24840b6afb0c1d6981fb8941b99f4a56a"} Dec 09 12:54:44 crc kubenswrapper[4970]: I1209 12:54:44.099098 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkk85" 
event={"ID":"f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c","Type":"ContainerStarted","Data":"c08de35caad2d46d1ca40b4e2cb4987ae5380e548dfe1d34ad43dd633186a4a8"} Dec 09 12:54:44 crc kubenswrapper[4970]: I1209 12:54:44.120456 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mkk85" podStartSLOduration=4.413686673 podStartE2EDuration="8.120437673s" podCreationTimestamp="2025-12-09 12:54:36 +0000 UTC" firstStartedPulling="2025-12-09 12:54:39.038059234 +0000 UTC m=+2891.598540285" lastFinishedPulling="2025-12-09 12:54:42.744810234 +0000 UTC m=+2895.305291285" observedRunningTime="2025-12-09 12:54:44.116794945 +0000 UTC m=+2896.677275996" watchObservedRunningTime="2025-12-09 12:54:44.120437673 +0000 UTC m=+2896.680918734" Dec 09 12:54:46 crc kubenswrapper[4970]: I1209 12:54:46.011116 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:54:46 crc kubenswrapper[4970]: I1209 12:54:46.011512 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:54:47 crc kubenswrapper[4970]: I1209 12:54:47.310475 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mkk85" Dec 09 12:54:47 crc kubenswrapper[4970]: I1209 12:54:47.310812 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mkk85" Dec 09 12:54:47 crc kubenswrapper[4970]: I1209 12:54:47.392077 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mkk85" Dec 09 12:54:48 crc kubenswrapper[4970]: I1209 12:54:48.217018 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mkk85" Dec 09 12:54:48 crc kubenswrapper[4970]: I1209 12:54:48.317322 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mkk85"] Dec 09 12:54:50 crc kubenswrapper[4970]: I1209 12:54:50.160517 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mkk85" podUID="f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c" containerName="registry-server" containerID="cri-o://c08de35caad2d46d1ca40b4e2cb4987ae5380e548dfe1d34ad43dd633186a4a8" gracePeriod=2 Dec 09 12:54:50 crc kubenswrapper[4970]: I1209 12:54:50.729476 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mkk85" Dec 09 12:54:50 crc kubenswrapper[4970]: I1209 12:54:50.880965 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c-catalog-content\") pod \"f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c\" (UID: \"f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c\") " Dec 09 12:54:50 crc kubenswrapper[4970]: I1209 12:54:50.881072 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrgx9\" (UniqueName: \"kubernetes.io/projected/f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c-kube-api-access-qrgx9\") pod \"f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c\" (UID: \"f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c\") " Dec 09 12:54:50 crc kubenswrapper[4970]: I1209 12:54:50.881409 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c-utilities\") pod \"f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c\" (UID: \"f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c\") " Dec 09 12:54:50 crc kubenswrapper[4970]: I1209 12:54:50.882158 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c-utilities" (OuterVolumeSpecName: "utilities") pod "f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c" (UID: "f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:54:50 crc kubenswrapper[4970]: I1209 12:54:50.882944 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:54:50 crc kubenswrapper[4970]: I1209 12:54:50.890817 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c-kube-api-access-qrgx9" (OuterVolumeSpecName: "kube-api-access-qrgx9") pod "f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c" (UID: "f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c"). InnerVolumeSpecName "kube-api-access-qrgx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:54:50 crc kubenswrapper[4970]: I1209 12:54:50.939611 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c" (UID: "f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:54:50 crc kubenswrapper[4970]: I1209 12:54:50.985431 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:54:50 crc kubenswrapper[4970]: I1209 12:54:50.985467 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrgx9\" (UniqueName: \"kubernetes.io/projected/f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c-kube-api-access-qrgx9\") on node \"crc\" DevicePath \"\"" Dec 09 12:54:51 crc kubenswrapper[4970]: I1209 12:54:51.179814 4970 generic.go:334] "Generic (PLEG): container finished" podID="f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c" containerID="c08de35caad2d46d1ca40b4e2cb4987ae5380e548dfe1d34ad43dd633186a4a8" exitCode=0 Dec 09 12:54:51 crc kubenswrapper[4970]: I1209 12:54:51.179902 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkk85" event={"ID":"f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c","Type":"ContainerDied","Data":"c08de35caad2d46d1ca40b4e2cb4987ae5380e548dfe1d34ad43dd633186a4a8"} Dec 09 12:54:51 crc kubenswrapper[4970]: I1209 12:54:51.179956 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mkk85" Dec 09 12:54:51 crc kubenswrapper[4970]: I1209 12:54:51.179984 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mkk85" event={"ID":"f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c","Type":"ContainerDied","Data":"b2a621761aaac238dd5e47aff5ed031a06069da0d1fe462ba1239a2a10be762e"} Dec 09 12:54:51 crc kubenswrapper[4970]: I1209 12:54:51.180021 4970 scope.go:117] "RemoveContainer" containerID="c08de35caad2d46d1ca40b4e2cb4987ae5380e548dfe1d34ad43dd633186a4a8" Dec 09 12:54:51 crc kubenswrapper[4970]: I1209 12:54:51.212043 4970 scope.go:117] "RemoveContainer" containerID="d7adf6f64489e8e5c6a791658f5c56c24840b6afb0c1d6981fb8941b99f4a56a" Dec 09 12:54:51 crc kubenswrapper[4970]: I1209 12:54:51.236615 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mkk85"] Dec 09 12:54:51 crc kubenswrapper[4970]: I1209 12:54:51.251806 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mkk85"] Dec 09 12:54:51 crc kubenswrapper[4970]: I1209 12:54:51.257016 4970 scope.go:117] "RemoveContainer" containerID="134c4e5063bdce50c7514ff8aed30c808801cdf4ff1cf97f8e1adf3eb7e644c6" Dec 09 12:54:51 crc kubenswrapper[4970]: I1209 12:54:51.326178 4970 scope.go:117] "RemoveContainer" containerID="c08de35caad2d46d1ca40b4e2cb4987ae5380e548dfe1d34ad43dd633186a4a8" Dec 09 12:54:51 crc kubenswrapper[4970]: E1209 12:54:51.327614 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c08de35caad2d46d1ca40b4e2cb4987ae5380e548dfe1d34ad43dd633186a4a8\": container with ID starting with c08de35caad2d46d1ca40b4e2cb4987ae5380e548dfe1d34ad43dd633186a4a8 not found: ID does not exist" containerID="c08de35caad2d46d1ca40b4e2cb4987ae5380e548dfe1d34ad43dd633186a4a8" Dec 09 12:54:51 crc kubenswrapper[4970]: I1209 12:54:51.327658 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c08de35caad2d46d1ca40b4e2cb4987ae5380e548dfe1d34ad43dd633186a4a8"} err="failed to get container status 
\"c08de35caad2d46d1ca40b4e2cb4987ae5380e548dfe1d34ad43dd633186a4a8\": rpc error: code = NotFound desc = could not find container \"c08de35caad2d46d1ca40b4e2cb4987ae5380e548dfe1d34ad43dd633186a4a8\": container with ID starting with c08de35caad2d46d1ca40b4e2cb4987ae5380e548dfe1d34ad43dd633186a4a8 not found: ID does not exist" Dec 09 12:54:51 crc kubenswrapper[4970]: I1209 12:54:51.327685 4970 scope.go:117] "RemoveContainer" containerID="d7adf6f64489e8e5c6a791658f5c56c24840b6afb0c1d6981fb8941b99f4a56a" Dec 09 12:54:51 crc kubenswrapper[4970]: E1209 12:54:51.327946 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7adf6f64489e8e5c6a791658f5c56c24840b6afb0c1d6981fb8941b99f4a56a\": container with ID starting with d7adf6f64489e8e5c6a791658f5c56c24840b6afb0c1d6981fb8941b99f4a56a not found: ID does not exist" containerID="d7adf6f64489e8e5c6a791658f5c56c24840b6afb0c1d6981fb8941b99f4a56a" Dec 09 12:54:51 crc kubenswrapper[4970]: I1209 12:54:51.327968 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7adf6f64489e8e5c6a791658f5c56c24840b6afb0c1d6981fb8941b99f4a56a"} err="failed to get container status \"d7adf6f64489e8e5c6a791658f5c56c24840b6afb0c1d6981fb8941b99f4a56a\": rpc error: code = NotFound desc = could not find container \"d7adf6f64489e8e5c6a791658f5c56c24840b6afb0c1d6981fb8941b99f4a56a\": container with ID starting with d7adf6f64489e8e5c6a791658f5c56c24840b6afb0c1d6981fb8941b99f4a56a not found: ID does not exist" Dec 09 12:54:51 crc kubenswrapper[4970]: I1209 12:54:51.327981 4970 scope.go:117] "RemoveContainer" containerID="134c4e5063bdce50c7514ff8aed30c808801cdf4ff1cf97f8e1adf3eb7e644c6" Dec 09 12:54:51 crc kubenswrapper[4970]: E1209 12:54:51.328351 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"134c4e5063bdce50c7514ff8aed30c808801cdf4ff1cf97f8e1adf3eb7e644c6\": container with ID starting with 134c4e5063bdce50c7514ff8aed30c808801cdf4ff1cf97f8e1adf3eb7e644c6 not found: ID does not exist" containerID="134c4e5063bdce50c7514ff8aed30c808801cdf4ff1cf97f8e1adf3eb7e644c6" Dec 09 12:54:51 crc kubenswrapper[4970]: I1209 12:54:51.328386 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"134c4e5063bdce50c7514ff8aed30c808801cdf4ff1cf97f8e1adf3eb7e644c6"} err="failed to get container status \"134c4e5063bdce50c7514ff8aed30c808801cdf4ff1cf97f8e1adf3eb7e644c6\": rpc error: code = NotFound desc = could not find container \"134c4e5063bdce50c7514ff8aed30c808801cdf4ff1cf97f8e1adf3eb7e644c6\": container with ID starting with 134c4e5063bdce50c7514ff8aed30c808801cdf4ff1cf97f8e1adf3eb7e644c6 not found: ID does not exist" Dec 09 12:54:51 crc kubenswrapper[4970]: E1209 12:54:51.815618 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:54:51 crc kubenswrapper[4970]: I1209 12:54:51.824559 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c" path="/var/lib/kubelet/pods/f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c/volumes" Dec 09 12:54:52 crc kubenswrapper[4970]: E1209 12:54:52.817955 4970 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:55:04 crc kubenswrapper[4970]: I1209 12:55:04.120282 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vbkf7"] Dec 09 12:55:04 crc kubenswrapper[4970]: E1209 12:55:04.123055 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c" containerName="registry-server" Dec 09 12:55:04 crc kubenswrapper[4970]: I1209 12:55:04.123202 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c" containerName="registry-server" Dec 09 12:55:04 crc kubenswrapper[4970]: E1209 12:55:04.123372 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c" containerName="extract-utilities" Dec 09 12:55:04 crc kubenswrapper[4970]: I1209 12:55:04.123473 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c" containerName="extract-utilities" Dec 09 12:55:04 crc kubenswrapper[4970]: E1209 12:55:04.123568 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c" containerName="extract-content" Dec 09 12:55:04 crc kubenswrapper[4970]: I1209 12:55:04.123654 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c" containerName="extract-content" Dec 09 12:55:04 crc kubenswrapper[4970]: I1209 12:55:04.124032 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7b97c3b-bc0f-412c-91e2-b4e1c7fb175c" containerName="registry-server" Dec 09 12:55:04 crc kubenswrapper[4970]: I1209 12:55:04.128480 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vbkf7" Dec 09 12:55:04 crc kubenswrapper[4970]: I1209 12:55:04.149725 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vbkf7"] Dec 09 12:55:04 crc kubenswrapper[4970]: I1209 12:55:04.180633 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtnt7\" (UniqueName: \"kubernetes.io/projected/12d78999-b0bb-4a36-8d6f-369f6e604935-kube-api-access-wtnt7\") pod \"redhat-marketplace-vbkf7\" (UID: \"12d78999-b0bb-4a36-8d6f-369f6e604935\") " pod="openshift-marketplace/redhat-marketplace-vbkf7" Dec 09 12:55:04 crc kubenswrapper[4970]: I1209 12:55:04.180735 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12d78999-b0bb-4a36-8d6f-369f6e604935-catalog-content\") pod \"redhat-marketplace-vbkf7\" (UID: \"12d78999-b0bb-4a36-8d6f-369f6e604935\") " pod="openshift-marketplace/redhat-marketplace-vbkf7" Dec 09 12:55:04 crc kubenswrapper[4970]: I1209 12:55:04.180776 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12d78999-b0bb-4a36-8d6f-369f6e604935-utilities\") pod \"redhat-marketplace-vbkf7\" (UID: \"12d78999-b0bb-4a36-8d6f-369f6e604935\") " pod="openshift-marketplace/redhat-marketplace-vbkf7" Dec 09 12:55:04 crc kubenswrapper[4970]: I1209 12:55:04.282916 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtnt7\" (UniqueName: \"kubernetes.io/projected/12d78999-b0bb-4a36-8d6f-369f6e604935-kube-api-access-wtnt7\") pod \"redhat-marketplace-vbkf7\" (UID: \"12d78999-b0bb-4a36-8d6f-369f6e604935\") " pod="openshift-marketplace/redhat-marketplace-vbkf7" Dec 09 12:55:04 crc kubenswrapper[4970]: I1209 12:55:04.282989 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12d78999-b0bb-4a36-8d6f-369f6e604935-catalog-content\") pod \"redhat-marketplace-vbkf7\" (UID: \"12d78999-b0bb-4a36-8d6f-369f6e604935\") " pod="openshift-marketplace/redhat-marketplace-vbkf7" Dec 09 12:55:04 crc kubenswrapper[4970]: I1209 12:55:04.283021 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12d78999-b0bb-4a36-8d6f-369f6e604935-utilities\") pod \"redhat-marketplace-vbkf7\" (UID: \"12d78999-b0bb-4a36-8d6f-369f6e604935\") " pod="openshift-marketplace/redhat-marketplace-vbkf7" Dec 09 12:55:04 crc kubenswrapper[4970]: I1209 12:55:04.283521 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12d78999-b0bb-4a36-8d6f-369f6e604935-utilities\") pod \"redhat-marketplace-vbkf7\" (UID: \"12d78999-b0bb-4a36-8d6f-369f6e604935\") " pod="openshift-marketplace/redhat-marketplace-vbkf7" Dec 09 12:55:04 crc kubenswrapper[4970]: I1209 12:55:04.283663 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12d78999-b0bb-4a36-8d6f-369f6e604935-catalog-content\") pod \"redhat-marketplace-vbkf7\" (UID: \"12d78999-b0bb-4a36-8d6f-369f6e604935\") " pod="openshift-marketplace/redhat-marketplace-vbkf7" Dec 09 12:55:04 crc kubenswrapper[4970]: I1209 12:55:04.317530 4970 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-wtnt7\" (UniqueName: \"kubernetes.io/projected/12d78999-b0bb-4a36-8d6f-369f6e604935-kube-api-access-wtnt7\") pod \"redhat-marketplace-vbkf7\" (UID: \"12d78999-b0bb-4a36-8d6f-369f6e604935\") " pod="openshift-marketplace/redhat-marketplace-vbkf7" Dec 09 12:55:04 crc kubenswrapper[4970]: I1209 12:55:04.464199 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vbkf7" Dec 09 12:55:04 crc kubenswrapper[4970]: E1209 12:55:04.819841 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:55:04 crc kubenswrapper[4970]: I1209 12:55:04.893639 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vbkf7"] Dec 09 12:55:05 crc kubenswrapper[4970]: I1209 12:55:05.338419 4970 generic.go:334] "Generic (PLEG): container finished" podID="12d78999-b0bb-4a36-8d6f-369f6e604935" containerID="a3d2aba5bc7688bf82a7a1fe4a3b15a844cfc4903178c0e6ddd16b36053cf033" exitCode=0 Dec 09 12:55:05 crc kubenswrapper[4970]: I1209 12:55:05.338464 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vbkf7" event={"ID":"12d78999-b0bb-4a36-8d6f-369f6e604935","Type":"ContainerDied","Data":"a3d2aba5bc7688bf82a7a1fe4a3b15a844cfc4903178c0e6ddd16b36053cf033"} Dec 09 12:55:05 crc kubenswrapper[4970]: I1209 12:55:05.338702 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vbkf7" event={"ID":"12d78999-b0bb-4a36-8d6f-369f6e604935","Type":"ContainerStarted","Data":"a6714b3ef1e9f4bc815b814de9f9f712763f542def699673cc90120cfe05f777"} Dec 09 12:55:05 crc kubenswrapper[4970]: E1209 12:55:05.814933 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:55:07 crc kubenswrapper[4970]: I1209 12:55:07.367998 4970 generic.go:334] "Generic (PLEG): container finished" podID="12d78999-b0bb-4a36-8d6f-369f6e604935" containerID="7f1a54c57115581b6138421c2e671f63864912e68ddaa16c303bdf25efd53e31" exitCode=0 Dec 09 12:55:07 crc kubenswrapper[4970]: I1209 12:55:07.368509 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vbkf7" event={"ID":"12d78999-b0bb-4a36-8d6f-369f6e604935","Type":"ContainerDied","Data":"7f1a54c57115581b6138421c2e671f63864912e68ddaa16c303bdf25efd53e31"} Dec 09 12:55:08 crc kubenswrapper[4970]: I1209 12:55:08.382675 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vbkf7" event={"ID":"12d78999-b0bb-4a36-8d6f-369f6e604935","Type":"ContainerStarted","Data":"364291533f3b7baa0b2f9c7aa94bdb237dfacbe247768155096f73fe4c0d7740"} Dec 09 12:55:08 crc kubenswrapper[4970]: I1209 12:55:08.408121 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vbkf7" podStartSLOduration=1.893130725 podStartE2EDuration="4.408094846s" podCreationTimestamp="2025-12-09 12:55:04 +0000 
UTC" firstStartedPulling="2025-12-09 12:55:05.340740165 +0000 UTC m=+2917.901221226" lastFinishedPulling="2025-12-09 12:55:07.855704296 +0000 UTC m=+2920.416185347" observedRunningTime="2025-12-09 12:55:08.401378655 +0000 UTC m=+2920.961859706" watchObservedRunningTime="2025-12-09 12:55:08.408094846 +0000 UTC m=+2920.968575897" Dec 09 12:55:14 crc kubenswrapper[4970]: I1209 12:55:14.465229 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vbkf7" Dec 09 12:55:14 crc kubenswrapper[4970]: I1209 12:55:14.465959 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vbkf7" Dec 09 12:55:14 crc kubenswrapper[4970]: I1209 12:55:14.521219 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vbkf7" Dec 09 12:55:14 crc kubenswrapper[4970]: I1209 12:55:14.583612 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vbkf7" Dec 09 12:55:14 crc kubenswrapper[4970]: I1209 12:55:14.762492 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vbkf7"] Dec 09 12:55:16 crc kubenswrapper[4970]: I1209 12:55:16.011486 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 12:55:16 crc kubenswrapper[4970]: I1209 12:55:16.011957 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 12:55:16 crc kubenswrapper[4970]: I1209 12:55:16.012027 4970 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" Dec 09 12:55:16 crc kubenswrapper[4970]: I1209 12:55:16.013568 4970 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0d8c59938602bf49f66f344a5d3fa9b4774e3c875b0d7138e6411bc2c4716f74"} pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 12:55:16 crc kubenswrapper[4970]: I1209 12:55:16.013696 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" containerID="cri-o://0d8c59938602bf49f66f344a5d3fa9b4774e3c875b0d7138e6411bc2c4716f74" gracePeriod=600 Dec 09 12:55:16 crc kubenswrapper[4970]: E1209 12:55:16.142563 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:55:16 crc 
kubenswrapper[4970]: I1209 12:55:16.542030 4970 generic.go:334] "Generic (PLEG): container finished" podID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerID="0d8c59938602bf49f66f344a5d3fa9b4774e3c875b0d7138e6411bc2c4716f74" exitCode=0 Dec 09 12:55:16 crc kubenswrapper[4970]: I1209 12:55:16.542134 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerDied","Data":"0d8c59938602bf49f66f344a5d3fa9b4774e3c875b0d7138e6411bc2c4716f74"} Dec 09 12:55:16 crc kubenswrapper[4970]: I1209 12:55:16.542226 4970 scope.go:117] "RemoveContainer" containerID="cb7e8adc348c4ec07e615d344d935f9200b7a44c97bbc9039ef571da3563f276" Dec 09 12:55:16 crc kubenswrapper[4970]: I1209 12:55:16.542382 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vbkf7" podUID="12d78999-b0bb-4a36-8d6f-369f6e604935" containerName="registry-server" containerID="cri-o://364291533f3b7baa0b2f9c7aa94bdb237dfacbe247768155096f73fe4c0d7740" gracePeriod=2 Dec 09 12:55:16 crc kubenswrapper[4970]: I1209 12:55:16.543782 4970 scope.go:117] "RemoveContainer" containerID="0d8c59938602bf49f66f344a5d3fa9b4774e3c875b0d7138e6411bc2c4716f74" Dec 09 12:55:16 crc kubenswrapper[4970]: E1209 12:55:16.544427 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:55:17 crc kubenswrapper[4970]: I1209 12:55:17.059230 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vbkf7" Dec 09 12:55:17 crc kubenswrapper[4970]: I1209 12:55:17.176655 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12d78999-b0bb-4a36-8d6f-369f6e604935-utilities\") pod \"12d78999-b0bb-4a36-8d6f-369f6e604935\" (UID: \"12d78999-b0bb-4a36-8d6f-369f6e604935\") " Dec 09 12:55:17 crc kubenswrapper[4970]: I1209 12:55:17.176954 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12d78999-b0bb-4a36-8d6f-369f6e604935-catalog-content\") pod \"12d78999-b0bb-4a36-8d6f-369f6e604935\" (UID: \"12d78999-b0bb-4a36-8d6f-369f6e604935\") " Dec 09 12:55:17 crc kubenswrapper[4970]: I1209 12:55:17.177103 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtnt7\" (UniqueName: \"kubernetes.io/projected/12d78999-b0bb-4a36-8d6f-369f6e604935-kube-api-access-wtnt7\") pod \"12d78999-b0bb-4a36-8d6f-369f6e604935\" (UID: \"12d78999-b0bb-4a36-8d6f-369f6e604935\") " Dec 09 12:55:17 crc kubenswrapper[4970]: I1209 12:55:17.177687 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12d78999-b0bb-4a36-8d6f-369f6e604935-utilities" (OuterVolumeSpecName: "utilities") pod "12d78999-b0bb-4a36-8d6f-369f6e604935" (UID: "12d78999-b0bb-4a36-8d6f-369f6e604935"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:55:17 crc kubenswrapper[4970]: I1209 12:55:17.178384 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12d78999-b0bb-4a36-8d6f-369f6e604935-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 12:55:17 crc kubenswrapper[4970]: I1209 12:55:17.186742 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12d78999-b0bb-4a36-8d6f-369f6e604935-kube-api-access-wtnt7" (OuterVolumeSpecName: "kube-api-access-wtnt7") pod "12d78999-b0bb-4a36-8d6f-369f6e604935" (UID: "12d78999-b0bb-4a36-8d6f-369f6e604935"). InnerVolumeSpecName "kube-api-access-wtnt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 12:55:17 crc kubenswrapper[4970]: I1209 12:55:17.198152 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12d78999-b0bb-4a36-8d6f-369f6e604935-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "12d78999-b0bb-4a36-8d6f-369f6e604935" (UID: "12d78999-b0bb-4a36-8d6f-369f6e604935"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 12:55:17 crc kubenswrapper[4970]: I1209 12:55:17.279738 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12d78999-b0bb-4a36-8d6f-369f6e604935-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 12:55:17 crc kubenswrapper[4970]: I1209 12:55:17.279774 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtnt7\" (UniqueName: \"kubernetes.io/projected/12d78999-b0bb-4a36-8d6f-369f6e604935-kube-api-access-wtnt7\") on node \"crc\" DevicePath \"\"" Dec 09 12:55:17 crc kubenswrapper[4970]: I1209 12:55:17.556949 4970 generic.go:334] "Generic (PLEG): container finished" podID="12d78999-b0bb-4a36-8d6f-369f6e604935" containerID="364291533f3b7baa0b2f9c7aa94bdb237dfacbe247768155096f73fe4c0d7740" exitCode=0 Dec 09 12:55:17 crc kubenswrapper[4970]: I1209 12:55:17.557007 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vbkf7" event={"ID":"12d78999-b0bb-4a36-8d6f-369f6e604935","Type":"ContainerDied","Data":"364291533f3b7baa0b2f9c7aa94bdb237dfacbe247768155096f73fe4c0d7740"} Dec 09 12:55:17 crc kubenswrapper[4970]: I1209 12:55:17.557390 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vbkf7" event={"ID":"12d78999-b0bb-4a36-8d6f-369f6e604935","Type":"ContainerDied","Data":"a6714b3ef1e9f4bc815b814de9f9f712763f542def699673cc90120cfe05f777"} Dec 09 12:55:17 crc kubenswrapper[4970]: I1209 12:55:17.557417 4970 scope.go:117] "RemoveContainer" containerID="364291533f3b7baa0b2f9c7aa94bdb237dfacbe247768155096f73fe4c0d7740" Dec 09 12:55:17 crc kubenswrapper[4970]: I1209 12:55:17.557061 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vbkf7" Dec 09 12:55:17 crc kubenswrapper[4970]: I1209 12:55:17.581237 4970 scope.go:117] "RemoveContainer" containerID="7f1a54c57115581b6138421c2e671f63864912e68ddaa16c303bdf25efd53e31" Dec 09 12:55:17 crc kubenswrapper[4970]: I1209 12:55:17.594929 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vbkf7"] Dec 09 12:55:17 crc kubenswrapper[4970]: I1209 12:55:17.607664 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vbkf7"] Dec 09 12:55:17 crc kubenswrapper[4970]: I1209 12:55:17.625771 4970 scope.go:117] "RemoveContainer" containerID="a3d2aba5bc7688bf82a7a1fe4a3b15a844cfc4903178c0e6ddd16b36053cf033" Dec 09 12:55:17 crc kubenswrapper[4970]: I1209 12:55:17.674578 4970 scope.go:117] "RemoveContainer" containerID="364291533f3b7baa0b2f9c7aa94bdb237dfacbe247768155096f73fe4c0d7740" Dec 09 12:55:17 crc kubenswrapper[4970]: E1209 12:55:17.675063 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"364291533f3b7baa0b2f9c7aa94bdb237dfacbe247768155096f73fe4c0d7740\": container with ID starting with 364291533f3b7baa0b2f9c7aa94bdb237dfacbe247768155096f73fe4c0d7740 not found: ID does not exist" containerID="364291533f3b7baa0b2f9c7aa94bdb237dfacbe247768155096f73fe4c0d7740" Dec 09 12:55:17 crc kubenswrapper[4970]: I1209 12:55:17.675116 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"364291533f3b7baa0b2f9c7aa94bdb237dfacbe247768155096f73fe4c0d7740"} err="failed to get container status \"364291533f3b7baa0b2f9c7aa94bdb237dfacbe247768155096f73fe4c0d7740\": rpc error: code = NotFound desc = could not find container \"364291533f3b7baa0b2f9c7aa94bdb237dfacbe247768155096f73fe4c0d7740\": container with ID starting with 364291533f3b7baa0b2f9c7aa94bdb237dfacbe247768155096f73fe4c0d7740 not found: ID does not exist" Dec 09 12:55:17 crc kubenswrapper[4970]: I1209 12:55:17.675159 4970 scope.go:117] "RemoveContainer" containerID="7f1a54c57115581b6138421c2e671f63864912e68ddaa16c303bdf25efd53e31" Dec 09 12:55:17 crc kubenswrapper[4970]: E1209 12:55:17.675551 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f1a54c57115581b6138421c2e671f63864912e68ddaa16c303bdf25efd53e31\": container with ID starting with 7f1a54c57115581b6138421c2e671f63864912e68ddaa16c303bdf25efd53e31 not found: ID does not exist" containerID="7f1a54c57115581b6138421c2e671f63864912e68ddaa16c303bdf25efd53e31" Dec 09 12:55:17 crc kubenswrapper[4970]: I1209 12:55:17.675582 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f1a54c57115581b6138421c2e671f63864912e68ddaa16c303bdf25efd53e31"} err="failed to get container status \"7f1a54c57115581b6138421c2e671f63864912e68ddaa16c303bdf25efd53e31\": rpc error: code = NotFound desc = could not find container \"7f1a54c57115581b6138421c2e671f63864912e68ddaa16c303bdf25efd53e31\": container with ID starting with 7f1a54c57115581b6138421c2e671f63864912e68ddaa16c303bdf25efd53e31 not found: ID does not exist" Dec 09 12:55:17 crc kubenswrapper[4970]: I1209 12:55:17.675601 4970 scope.go:117] "RemoveContainer" containerID="a3d2aba5bc7688bf82a7a1fe4a3b15a844cfc4903178c0e6ddd16b36053cf033" Dec 09 12:55:17 crc kubenswrapper[4970]: E1209 12:55:17.675976 4970 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a3d2aba5bc7688bf82a7a1fe4a3b15a844cfc4903178c0e6ddd16b36053cf033\": container with ID starting with a3d2aba5bc7688bf82a7a1fe4a3b15a844cfc4903178c0e6ddd16b36053cf033 not found: ID does not exist" containerID="a3d2aba5bc7688bf82a7a1fe4a3b15a844cfc4903178c0e6ddd16b36053cf033" Dec 09 12:55:17 crc kubenswrapper[4970]: I1209 12:55:17.676005 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3d2aba5bc7688bf82a7a1fe4a3b15a844cfc4903178c0e6ddd16b36053cf033"} err="failed to get container status \"a3d2aba5bc7688bf82a7a1fe4a3b15a844cfc4903178c0e6ddd16b36053cf033\": rpc error: code = NotFound desc = could not find container \"a3d2aba5bc7688bf82a7a1fe4a3b15a844cfc4903178c0e6ddd16b36053cf033\": container with ID starting with a3d2aba5bc7688bf82a7a1fe4a3b15a844cfc4903178c0e6ddd16b36053cf033 not found: ID does not exist" Dec 09 12:55:17 crc kubenswrapper[4970]: I1209 12:55:17.834924 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12d78999-b0bb-4a36-8d6f-369f6e604935" path="/var/lib/kubelet/pods/12d78999-b0bb-4a36-8d6f-369f6e604935/volumes" Dec 09 12:55:18 crc kubenswrapper[4970]: E1209 12:55:18.815346 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:55:19 crc kubenswrapper[4970]: E1209 12:55:19.815400 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:55:29 crc kubenswrapper[4970]: I1209 12:55:29.813383 4970 scope.go:117] "RemoveContainer" containerID="0d8c59938602bf49f66f344a5d3fa9b4774e3c875b0d7138e6411bc2c4716f74" Dec 09 12:55:29 crc kubenswrapper[4970]: E1209 12:55:29.814260 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:55:33 crc kubenswrapper[4970]: E1209 12:55:33.814625 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:55:34 crc kubenswrapper[4970]: E1209 12:55:34.815746 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:55:42 crc kubenswrapper[4970]: I1209 12:55:42.812798 4970 scope.go:117] 
"RemoveContainer" containerID="0d8c59938602bf49f66f344a5d3fa9b4774e3c875b0d7138e6411bc2c4716f74" Dec 09 12:55:42 crc kubenswrapper[4970]: E1209 12:55:42.813654 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:55:45 crc kubenswrapper[4970]: E1209 12:55:45.816905 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:55:46 crc kubenswrapper[4970]: E1209 12:55:46.815593 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:55:56 crc kubenswrapper[4970]: I1209 12:55:56.813419 4970 scope.go:117] "RemoveContainer" containerID="0d8c59938602bf49f66f344a5d3fa9b4774e3c875b0d7138e6411bc2c4716f74" Dec 09 12:55:56 crc kubenswrapper[4970]: E1209 12:55:56.814128 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:55:58 crc kubenswrapper[4970]: E1209 12:55:58.815765 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:56:00 crc kubenswrapper[4970]: E1209 12:56:00.813976 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:56:09 crc kubenswrapper[4970]: I1209 12:56:09.813966 4970 scope.go:117] "RemoveContainer" containerID="0d8c59938602bf49f66f344a5d3fa9b4774e3c875b0d7138e6411bc2c4716f74" Dec 09 12:56:09 crc kubenswrapper[4970]: E1209 12:56:09.815048 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 
12:56:13 crc kubenswrapper[4970]: E1209 12:56:13.815253 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:56:15 crc kubenswrapper[4970]: E1209 12:56:15.816408 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:56:22 crc kubenswrapper[4970]: I1209 12:56:22.813091 4970 scope.go:117] "RemoveContainer" containerID="0d8c59938602bf49f66f344a5d3fa9b4774e3c875b0d7138e6411bc2c4716f74" Dec 09 12:56:22 crc kubenswrapper[4970]: E1209 12:56:22.814028 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:56:26 crc kubenswrapper[4970]: E1209 12:56:26.815764 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:56:28 crc kubenswrapper[4970]: E1209 12:56:28.814702 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:56:37 crc kubenswrapper[4970]: I1209 12:56:37.820443 4970 scope.go:117] "RemoveContainer" containerID="0d8c59938602bf49f66f344a5d3fa9b4774e3c875b0d7138e6411bc2c4716f74" Dec 09 12:56:37 crc kubenswrapper[4970]: E1209 12:56:37.821459 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:56:37 crc kubenswrapper[4970]: E1209 12:56:37.821953 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:56:41 crc kubenswrapper[4970]: E1209 12:56:41.815534 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:56:50 crc kubenswrapper[4970]: E1209 12:56:50.815968 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:56:52 crc kubenswrapper[4970]: I1209 12:56:52.813621 4970 scope.go:117] "RemoveContainer" containerID="0d8c59938602bf49f66f344a5d3fa9b4774e3c875b0d7138e6411bc2c4716f74" Dec 09 12:56:52 crc kubenswrapper[4970]: E1209 12:56:52.814816 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:56:53 crc kubenswrapper[4970]: E1209 12:56:53.814069 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:57:05 crc kubenswrapper[4970]: E1209 12:57:05.815372 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:57:05 crc kubenswrapper[4970]: E1209 12:57:05.815475 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:57:07 crc kubenswrapper[4970]: I1209 12:57:07.822067 4970 scope.go:117] "RemoveContainer" containerID="0d8c59938602bf49f66f344a5d3fa9b4774e3c875b0d7138e6411bc2c4716f74" Dec 09 12:57:07 crc kubenswrapper[4970]: E1209 12:57:07.822822 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:57:18 crc kubenswrapper[4970]: I1209 12:57:18.814645 4970 scope.go:117] "RemoveContainer" containerID="0d8c59938602bf49f66f344a5d3fa9b4774e3c875b0d7138e6411bc2c4716f74" Dec 09 12:57:18 crc kubenswrapper[4970]: E1209 12:57:18.815627 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
Dec 09 12:57:18 crc kubenswrapper[4970]: E1209 12:57:18.815627 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c"
Dec 09 12:57:18 crc kubenswrapper[4970]: E1209 12:57:18.818935 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8"
Dec 09 12:57:20 crc kubenswrapper[4970]: E1209 12:57:20.815159 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3"
Dec 09 12:57:33 crc kubenswrapper[4970]: I1209 12:57:33.814232 4970 scope.go:117] "RemoveContainer" containerID="0d8c59938602bf49f66f344a5d3fa9b4774e3c875b0d7138e6411bc2c4716f74"
Dec 09 12:57:33 crc kubenswrapper[4970]: E1209 12:57:33.815374 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c"
Dec 09 12:57:33 crc kubenswrapper[4970]: E1209 12:57:33.816180 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8"
Dec 09 12:57:34 crc kubenswrapper[4970]: E1209 12:57:34.815243 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3"
Dec 09 12:57:46 crc kubenswrapper[4970]: I1209 12:57:46.443064 4970 generic.go:334] "Generic (PLEG): container finished" podID="ab4e637c-e74e-4e8b-9d81-98eadd755fc3" containerID="a7a45005ba387247d6e88ec9a67ade769da19da35f3f6b45239334ccb2f1c523" exitCode=2
Dec 09 12:57:46 crc kubenswrapper[4970]: I1209 12:57:46.443235 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b72h6" event={"ID":"ab4e637c-e74e-4e8b-9d81-98eadd755fc3","Type":"ContainerDied","Data":"a7a45005ba387247d6e88ec9a67ade769da19da35f3f6b45239334ccb2f1c523"}
Dec 09 12:57:46 crc kubenswrapper[4970]: I1209 12:57:46.814688 4970 scope.go:117] "RemoveContainer" containerID="0d8c59938602bf49f66f344a5d3fa9b4774e3c875b0d7138e6411bc2c4716f74"
Dec 09 12:57:46 crc kubenswrapper[4970]: E1209 12:57:46.815692 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c"
Dec 09 12:57:47 crc kubenswrapper[4970]: E1209 12:57:47.825064 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8"
Dec 09 12:57:47 crc kubenswrapper[4970]: I1209 12:57:47.938028 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b72h6"
Dec 09 12:57:48 crc kubenswrapper[4970]: I1209 12:57:48.021706 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ab4e637c-e74e-4e8b-9d81-98eadd755fc3-ssh-key\") pod \"ab4e637c-e74e-4e8b-9d81-98eadd755fc3\" (UID: \"ab4e637c-e74e-4e8b-9d81-98eadd755fc3\") "
Dec 09 12:57:48 crc kubenswrapper[4970]: I1209 12:57:48.063208 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab4e637c-e74e-4e8b-9d81-98eadd755fc3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ab4e637c-e74e-4e8b-9d81-98eadd755fc3" (UID: "ab4e637c-e74e-4e8b-9d81-98eadd755fc3"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:57:48 crc kubenswrapper[4970]: I1209 12:57:48.123350 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab4e637c-e74e-4e8b-9d81-98eadd755fc3-inventory\") pod \"ab4e637c-e74e-4e8b-9d81-98eadd755fc3\" (UID: \"ab4e637c-e74e-4e8b-9d81-98eadd755fc3\") "
Dec 09 12:57:48 crc kubenswrapper[4970]: I1209 12:57:48.123776 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h276l\" (UniqueName: \"kubernetes.io/projected/ab4e637c-e74e-4e8b-9d81-98eadd755fc3-kube-api-access-h276l\") pod \"ab4e637c-e74e-4e8b-9d81-98eadd755fc3\" (UID: \"ab4e637c-e74e-4e8b-9d81-98eadd755fc3\") "
Dec 09 12:57:48 crc kubenswrapper[4970]: I1209 12:57:48.124654 4970 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ab4e637c-e74e-4e8b-9d81-98eadd755fc3-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 09 12:57:48 crc kubenswrapper[4970]: I1209 12:57:48.128374 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab4e637c-e74e-4e8b-9d81-98eadd755fc3-kube-api-access-h276l" (OuterVolumeSpecName: "kube-api-access-h276l") pod "ab4e637c-e74e-4e8b-9d81-98eadd755fc3" (UID: "ab4e637c-e74e-4e8b-9d81-98eadd755fc3"). InnerVolumeSpecName "kube-api-access-h276l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 12:57:48 crc kubenswrapper[4970]: I1209 12:57:48.162377 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab4e637c-e74e-4e8b-9d81-98eadd755fc3-inventory" (OuterVolumeSpecName: "inventory") pod "ab4e637c-e74e-4e8b-9d81-98eadd755fc3" (UID: "ab4e637c-e74e-4e8b-9d81-98eadd755fc3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 12:57:48 crc kubenswrapper[4970]: I1209 12:57:48.225548 4970 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab4e637c-e74e-4e8b-9d81-98eadd755fc3-inventory\") on node \"crc\" DevicePath \"\""
Dec 09 12:57:48 crc kubenswrapper[4970]: I1209 12:57:48.225588 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h276l\" (UniqueName: \"kubernetes.io/projected/ab4e637c-e74e-4e8b-9d81-98eadd755fc3-kube-api-access-h276l\") on node \"crc\" DevicePath \"\""
Dec 09 12:57:48 crc kubenswrapper[4970]: I1209 12:57:48.466890 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b72h6" event={"ID":"ab4e637c-e74e-4e8b-9d81-98eadd755fc3","Type":"ContainerDied","Data":"efb560909923111f93983076c6794effff58d758cf3eaddda699a8c2cfb46fd8"}
Dec 09 12:57:48 crc kubenswrapper[4970]: I1209 12:57:48.466937 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efb560909923111f93983076c6794effff58d758cf3eaddda699a8c2cfb46fd8"
Dec 09 12:57:48 crc kubenswrapper[4970]: I1209 12:57:48.467016 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-b72h6"
Dec 09 12:57:48 crc kubenswrapper[4970]: E1209 12:57:48.814690 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3"
Dec 09 12:57:57 crc kubenswrapper[4970]: I1209 12:57:57.827070 4970 scope.go:117] "RemoveContainer" containerID="0d8c59938602bf49f66f344a5d3fa9b4774e3c875b0d7138e6411bc2c4716f74"
Dec 09 12:57:57 crc kubenswrapper[4970]: E1209 12:57:57.828074 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c"
Dec 09 12:58:02 crc kubenswrapper[4970]: E1209 12:58:02.816238 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8"
Dec 09 12:58:02 crc kubenswrapper[4970]: E1209 12:58:02.816620 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3"
Dec 09 12:58:10 crc kubenswrapper[4970]: I1209 12:58:10.813960 4970 scope.go:117] "RemoveContainer" containerID="0d8c59938602bf49f66f344a5d3fa9b4774e3c875b0d7138e6411bc2c4716f74"
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:58:14 crc kubenswrapper[4970]: E1209 12:58:14.819262 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:58:15 crc kubenswrapper[4970]: E1209 12:58:15.816175 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:58:22 crc kubenswrapper[4970]: I1209 12:58:22.813685 4970 scope.go:117] "RemoveContainer" containerID="0d8c59938602bf49f66f344a5d3fa9b4774e3c875b0d7138e6411bc2c4716f74" Dec 09 12:58:22 crc kubenswrapper[4970]: E1209 12:58:22.815163 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:58:26 crc kubenswrapper[4970]: I1209 12:58:26.045573 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pxgd7"] Dec 09 12:58:26 crc kubenswrapper[4970]: E1209 12:58:26.046427 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12d78999-b0bb-4a36-8d6f-369f6e604935" containerName="extract-content" Dec 09 12:58:26 crc kubenswrapper[4970]: I1209 12:58:26.046444 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="12d78999-b0bb-4a36-8d6f-369f6e604935" containerName="extract-content" Dec 09 12:58:26 crc kubenswrapper[4970]: E1209 12:58:26.046467 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12d78999-b0bb-4a36-8d6f-369f6e604935" containerName="extract-utilities" Dec 09 12:58:26 crc kubenswrapper[4970]: I1209 12:58:26.046475 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="12d78999-b0bb-4a36-8d6f-369f6e604935" containerName="extract-utilities" Dec 09 12:58:26 crc kubenswrapper[4970]: E1209 12:58:26.046491 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab4e637c-e74e-4e8b-9d81-98eadd755fc3" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 12:58:26 crc kubenswrapper[4970]: I1209 12:58:26.046498 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab4e637c-e74e-4e8b-9d81-98eadd755fc3" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 12:58:26 crc kubenswrapper[4970]: E1209 12:58:26.046526 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12d78999-b0bb-4a36-8d6f-369f6e604935" containerName="registry-server" Dec 09 12:58:26 crc kubenswrapper[4970]: I1209 
12:58:26.046533 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="12d78999-b0bb-4a36-8d6f-369f6e604935" containerName="registry-server" Dec 09 12:58:26 crc kubenswrapper[4970]: I1209 12:58:26.046788 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="12d78999-b0bb-4a36-8d6f-369f6e604935" containerName="registry-server" Dec 09 12:58:26 crc kubenswrapper[4970]: I1209 12:58:26.046807 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab4e637c-e74e-4e8b-9d81-98eadd755fc3" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 12:58:26 crc kubenswrapper[4970]: I1209 12:58:26.047756 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pxgd7" Dec 09 12:58:26 crc kubenswrapper[4970]: I1209 12:58:26.052962 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 12:58:26 crc kubenswrapper[4970]: I1209 12:58:26.053211 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2x2z5" Dec 09 12:58:26 crc kubenswrapper[4970]: I1209 12:58:26.053383 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 12:58:26 crc kubenswrapper[4970]: I1209 12:58:26.053594 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 12:58:26 crc kubenswrapper[4970]: I1209 12:58:26.070026 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pxgd7"] Dec 09 12:58:26 crc kubenswrapper[4970]: I1209 12:58:26.094286 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjbzj\" (UniqueName: \"kubernetes.io/projected/dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c-kube-api-access-xjbzj\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pxgd7\" (UID: \"dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pxgd7" Dec 09 12:58:26 crc kubenswrapper[4970]: I1209 12:58:26.094481 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pxgd7\" (UID: \"dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pxgd7" Dec 09 12:58:26 crc kubenswrapper[4970]: I1209 12:58:26.094564 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pxgd7\" (UID: \"dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pxgd7" Dec 09 12:58:26 crc kubenswrapper[4970]: I1209 12:58:26.197921 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pxgd7\" (UID: \"dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pxgd7" Dec 09 12:58:26 crc 
kubenswrapper[4970]: I1209 12:58:26.198157 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjbzj\" (UniqueName: \"kubernetes.io/projected/dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c-kube-api-access-xjbzj\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pxgd7\" (UID: \"dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pxgd7" Dec 09 12:58:26 crc kubenswrapper[4970]: I1209 12:58:26.198426 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pxgd7\" (UID: \"dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pxgd7" Dec 09 12:58:26 crc kubenswrapper[4970]: I1209 12:58:26.205181 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pxgd7\" (UID: \"dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pxgd7" Dec 09 12:58:26 crc kubenswrapper[4970]: I1209 12:58:26.205380 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pxgd7\" (UID: \"dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pxgd7" Dec 09 12:58:26 crc kubenswrapper[4970]: I1209 12:58:26.228180 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjbzj\" (UniqueName: \"kubernetes.io/projected/dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c-kube-api-access-xjbzj\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-pxgd7\" (UID: \"dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pxgd7" Dec 09 12:58:26 crc kubenswrapper[4970]: I1209 12:58:26.377822 4970 util.go:30] "No sandbox for pod can be found. 
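
Pod creation runs the same volume reconciler in the opposite direction: for each declared volume the log shows VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded, and only once all three volumes (ssh-key, inventory, kube-api-access-xjbzj) are up does sandbox creation proceed. The reconciler_common.go messages come from a loop that continuously diffs a desired-state world against an actual-state world; a toy model of that diff follows (an illustration of the concept, not kubelet code):

    // reconcile.go - toy model of the desired-vs-actual volume reconcile
    // loop behind the reconciler_common.go entries above.
    package main

    import "fmt"

    func main() {
        desired := map[string]bool{"ssh-key": true, "inventory": true, "kube-api-access-xjbzj": true}
        actual := map[string]bool{} // nothing mounted yet for a freshly added pod

        for v := range desired {
            if !actual[v] {
                fmt.Println("MountVolume started for volume", v)
                actual[v] = true // stands in for MountVolume.SetUp succeeding
            }
        }
        for v := range actual {
            if !desired[v] {
                fmt.Println("UnmountVolume started for volume", v)
                delete(actual, v)
            }
        }
    }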
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pxgd7" Dec 09 12:58:27 crc kubenswrapper[4970]: I1209 12:58:27.018486 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pxgd7"] Dec 09 12:58:27 crc kubenswrapper[4970]: W1209 12:58:27.019564 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddcbd4a48_8e44_4b01_b6fb_efeb8e35c03c.slice/crio-783ea4ae17be458360a58e8350489c95c592ff28635ce1d3e4ba713323e834a0 WatchSource:0}: Error finding container 783ea4ae17be458360a58e8350489c95c592ff28635ce1d3e4ba713323e834a0: Status 404 returned error can't find the container with id 783ea4ae17be458360a58e8350489c95c592ff28635ce1d3e4ba713323e834a0 Dec 09 12:58:27 crc kubenswrapper[4970]: I1209 12:58:27.024610 4970 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 12:58:27 crc kubenswrapper[4970]: I1209 12:58:27.959097 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pxgd7" event={"ID":"dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c","Type":"ContainerStarted","Data":"5721d7399f1f75a5a6a6aa2415889833494565f7e344887e272916788929705e"} Dec 09 12:58:27 crc kubenswrapper[4970]: I1209 12:58:27.959443 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pxgd7" event={"ID":"dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c","Type":"ContainerStarted","Data":"783ea4ae17be458360a58e8350489c95c592ff28635ce1d3e4ba713323e834a0"} Dec 09 12:58:27 crc kubenswrapper[4970]: E1209 12:58:27.977603 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 12:58:27 crc kubenswrapper[4970]: E1209 12:58:27.977657 4970 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 12:58:27 crc kubenswrapper[4970]: E1209 12:58:27.977779 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w5gl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-snjvx_openstack(cda5b8c0-51ad-4a63-bf1d-e3546f098ad3): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 12:58:27 crc kubenswrapper[4970]: E1209 12:58:27.979169 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:58:27 crc kubenswrapper[4970]: I1209 12:58:27.986974 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pxgd7" podStartSLOduration=1.570097966 podStartE2EDuration="1.986953947s" podCreationTimestamp="2025-12-09 12:58:26 +0000 UTC" firstStartedPulling="2025-12-09 12:58:27.023546043 +0000 UTC m=+3119.584027134" lastFinishedPulling="2025-12-09 12:58:27.440402064 +0000 UTC m=+3120.000883115" observedRunningTime="2025-12-09 12:58:27.974624933 +0000 UTC m=+3120.535105984" watchObservedRunningTime="2025-12-09 12:58:27.986953947 +0000 UTC m=+3120.547434998" Dec 09 12:58:28 crc kubenswrapper[4970]: E1209 12:58:28.815174 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:58:36 crc kubenswrapper[4970]: I1209 12:58:36.813217 4970 scope.go:117] "RemoveContainer" containerID="0d8c59938602bf49f66f344a5d3fa9b4774e3c875b0d7138e6411bc2c4716f74" Dec 09 12:58:36 crc kubenswrapper[4970]: E1209 12:58:36.814270 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:58:40 crc kubenswrapper[4970]: E1209 12:58:40.817273 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:58:41 crc kubenswrapper[4970]: E1209 12:58:41.814661 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:58:51 crc kubenswrapper[4970]: I1209 12:58:51.814284 4970 scope.go:117] "RemoveContainer" containerID="0d8c59938602bf49f66f344a5d3fa9b4774e3c875b0d7138e6411bc2c4716f74" Dec 09 12:58:51 crc kubenswrapper[4970]: E1209 12:58:51.815150 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:58:51 crc kubenswrapper[4970]: E1209 12:58:51.815196 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
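
The 12:58:27 entries finally surface the root cause behind all the heat-db-sync back-offs: the registry itself reports the tag as gone ("Tag current-tested was deleted or has expired. To pull, revive via time machine" is Quay's wording for a tag that was removed or garbage-collected but may still be restorable from its tag history). No amount of kubelet retrying can fix this; the tag has to be restored on the registry side or the image reference updated. One can check a tag's fate directly against the registry's v2 endpoint, as in this sketch written against the OCI distribution spec (it assumes the repository allows anonymous pulls; a registry requiring token auth would answer 401 instead):

    // tagcheck.go - ask the registry whether a tag's manifest still exists
    // (HEAD /v2/<name>/manifests/<tag>, per the OCI distribution spec).
    package main

    import (
        "fmt"
        "net/http"
        "os"
    )

    func main() {
        url := "https://quay.rdoproject.org/v2/podified-master-centos10/openstack-heat-engine/manifests/current-tested"
        req, _ := http.NewRequest(http.MethodHead, url, nil)
        req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json, application/vnd.docker.distribution.manifest.v2+json")
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer resp.Body.Close()
        fmt.Println(resp.Status) // 200: tag exists; 404: deleted or expired
    }

The "Observed pod startup duration" entry in the same span is also worth a note: podStartSLOduration (1.57s) appears to be podStartE2EDuration (1.99s) minus the image pull window bounded by firstStartedPulling/lastFinishedPulling, i.e. the startup SLI deliberately excludes image pull time.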
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:58:55 crc kubenswrapper[4970]: E1209 12:58:55.948119 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 12:58:55 crc kubenswrapper[4970]: E1209 12:58:55.948616 4970 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 12:58:55 crc kubenswrapper[4970]: E1209 12:58:55.948742 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n679h5b7h5f8h695h656h57bh5c7h7bh5c5hc4h5hbch654h5h5b7h54dh5fh688h548h57bh6fh54fh5d4hf7h5ffh54ch566h565hdchb4h65dh5fbq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqqs9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ea52f6b9-599e-4ac5-94c6-79949c705be8): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 12:58:55 crc kubenswrapper[4970]: E1209 12:58:55.950299 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:59:04 crc kubenswrapper[4970]: E1209 12:59:04.815714 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:59:06 crc kubenswrapper[4970]: I1209 12:59:06.813718 4970 scope.go:117] "RemoveContainer" containerID="0d8c59938602bf49f66f344a5d3fa9b4774e3c875b0d7138e6411bc2c4716f74" Dec 09 12:59:06 crc kubenswrapper[4970]: E1209 12:59:06.814410 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:59:06 crc kubenswrapper[4970]: E1209 12:59:06.816577 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:59:16 crc kubenswrapper[4970]: E1209 12:59:16.814686 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:59:18 crc kubenswrapper[4970]: I1209 12:59:18.813064 4970 scope.go:117] "RemoveContainer" containerID="0d8c59938602bf49f66f344a5d3fa9b4774e3c875b0d7138e6411bc2c4716f74" Dec 09 12:59:18 crc kubenswrapper[4970]: E1209 12:59:18.813568 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:59:19 crc kubenswrapper[4970]: E1209 12:59:19.814713 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:59:27 crc kubenswrapper[4970]: E1209 12:59:27.822076 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:59:29 crc kubenswrapper[4970]: I1209 12:59:29.812843 4970 scope.go:117] "RemoveContainer" containerID="0d8c59938602bf49f66f344a5d3fa9b4774e3c875b0d7138e6411bc2c4716f74" Dec 09 12:59:29 crc kubenswrapper[4970]: E1209 12:59:29.813612 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:59:34 crc kubenswrapper[4970]: E1209 12:59:34.816786 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:59:41 crc kubenswrapper[4970]: I1209 12:59:41.813132 4970 scope.go:117] "RemoveContainer" containerID="0d8c59938602bf49f66f344a5d3fa9b4774e3c875b0d7138e6411bc2c4716f74" Dec 09 12:59:41 crc kubenswrapper[4970]: E1209 12:59:41.814111 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:59:42 crc kubenswrapper[4970]: E1209 12:59:42.815460 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 12:59:49 crc kubenswrapper[4970]: E1209 12:59:49.818653 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 12:59:52 crc kubenswrapper[4970]: I1209 12:59:52.813492 4970 scope.go:117] "RemoveContainer" containerID="0d8c59938602bf49f66f344a5d3fa9b4774e3c875b0d7138e6411bc2c4716f74" Dec 09 12:59:52 crc kubenswrapper[4970]: E1209 12:59:52.814506 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 12:59:56 crc kubenswrapper[4970]: E1209 12:59:56.814273 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:00:00 crc kubenswrapper[4970]: I1209 13:00:00.156572 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421420-jt5nm"] Dec 09 13:00:00 crc kubenswrapper[4970]: I1209 13:00:00.159023 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421420-jt5nm" Dec 09 13:00:00 crc kubenswrapper[4970]: I1209 13:00:00.161471 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 09 13:00:00 crc kubenswrapper[4970]: I1209 13:00:00.161471 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 09 13:00:00 crc kubenswrapper[4970]: I1209 13:00:00.171635 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421420-jt5nm"] Dec 09 13:00:00 crc kubenswrapper[4970]: I1209 13:00:00.189720 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0-config-volume\") pod \"collect-profiles-29421420-jt5nm\" (UID: \"58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421420-jt5nm" Dec 09 13:00:00 crc kubenswrapper[4970]: I1209 13:00:00.189801 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgs22\" (UniqueName: \"kubernetes.io/projected/58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0-kube-api-access-fgs22\") pod \"collect-profiles-29421420-jt5nm\" (UID: \"58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421420-jt5nm" Dec 09 13:00:00 crc kubenswrapper[4970]: I1209 13:00:00.190125 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0-secret-volume\") pod \"collect-profiles-29421420-jt5nm\" (UID: \"58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421420-jt5nm" Dec 09 13:00:00 crc kubenswrapper[4970]: I1209 13:00:00.292287 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0-config-volume\") pod \"collect-profiles-29421420-jt5nm\" (UID: \"58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421420-jt5nm" Dec 09 13:00:00 crc kubenswrapper[4970]: I1209 13:00:00.292349 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgs22\" (UniqueName: \"kubernetes.io/projected/58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0-kube-api-access-fgs22\") pod \"collect-profiles-29421420-jt5nm\" (UID: \"58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421420-jt5nm" Dec 09 13:00:00 crc kubenswrapper[4970]: I1209 13:00:00.292508 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0-secret-volume\") pod \"collect-profiles-29421420-jt5nm\" (UID: \"58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421420-jt5nm" Dec 09 13:00:00 crc kubenswrapper[4970]: I1209 13:00:00.294551 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0-config-volume\") pod 
\"collect-profiles-29421420-jt5nm\" (UID: \"58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421420-jt5nm" Dec 09 13:00:00 crc kubenswrapper[4970]: I1209 13:00:00.304964 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0-secret-volume\") pod \"collect-profiles-29421420-jt5nm\" (UID: \"58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421420-jt5nm" Dec 09 13:00:00 crc kubenswrapper[4970]: I1209 13:00:00.309344 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgs22\" (UniqueName: \"kubernetes.io/projected/58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0-kube-api-access-fgs22\") pod \"collect-profiles-29421420-jt5nm\" (UID: \"58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421420-jt5nm" Dec 09 13:00:00 crc kubenswrapper[4970]: I1209 13:00:00.489456 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421420-jt5nm" Dec 09 13:00:00 crc kubenswrapper[4970]: W1209 13:00:00.980405 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58d90aed_bbd6_4bf3_8fc0_34d12dcc0bf0.slice/crio-5888be7e838c889bdd84ed2fc71280d70ad17b3bfc006ef4e94174dcd6d29827 WatchSource:0}: Error finding container 5888be7e838c889bdd84ed2fc71280d70ad17b3bfc006ef4e94174dcd6d29827: Status 404 returned error can't find the container with id 5888be7e838c889bdd84ed2fc71280d70ad17b3bfc006ef4e94174dcd6d29827 Dec 09 13:00:00 crc kubenswrapper[4970]: I1209 13:00:00.983580 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421420-jt5nm"] Dec 09 13:00:01 crc kubenswrapper[4970]: I1209 13:00:01.537107 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421420-jt5nm" event={"ID":"58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0","Type":"ContainerStarted","Data":"eba6e121bdbba57194676c374f20defa188210fb6210b87fe878c4549a96bb24"} Dec 09 13:00:01 crc kubenswrapper[4970]: I1209 13:00:01.537523 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421420-jt5nm" event={"ID":"58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0","Type":"ContainerStarted","Data":"5888be7e838c889bdd84ed2fc71280d70ad17b3bfc006ef4e94174dcd6d29827"} Dec 09 13:00:01 crc kubenswrapper[4970]: I1209 13:00:01.570123 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29421420-jt5nm" podStartSLOduration=1.570098268 podStartE2EDuration="1.570098268s" podCreationTimestamp="2025-12-09 13:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 13:00:01.56832762 +0000 UTC m=+3214.128808711" watchObservedRunningTime="2025-12-09 13:00:01.570098268 +0000 UTC m=+3214.130579339" Dec 09 13:00:02 crc kubenswrapper[4970]: I1209 13:00:02.547657 4970 generic.go:334] "Generic (PLEG): container finished" podID="58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0" containerID="eba6e121bdbba57194676c374f20defa188210fb6210b87fe878c4549a96bb24" exitCode=0 Dec 09 13:00:02 crc kubenswrapper[4970]: I1209 13:00:02.548031 
4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421420-jt5nm" event={"ID":"58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0","Type":"ContainerDied","Data":"eba6e121bdbba57194676c374f20defa188210fb6210b87fe878c4549a96bb24"} Dec 09 13:00:03 crc kubenswrapper[4970]: E1209 13:00:03.814421 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:00:03 crc kubenswrapper[4970]: I1209 13:00:03.994872 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421420-jt5nm" Dec 09 13:00:04 crc kubenswrapper[4970]: I1209 13:00:04.089705 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgs22\" (UniqueName: \"kubernetes.io/projected/58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0-kube-api-access-fgs22\") pod \"58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0\" (UID: \"58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0\") " Dec 09 13:00:04 crc kubenswrapper[4970]: I1209 13:00:04.089805 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0-secret-volume\") pod \"58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0\" (UID: \"58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0\") " Dec 09 13:00:04 crc kubenswrapper[4970]: I1209 13:00:04.089847 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0-config-volume\") pod \"58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0\" (UID: \"58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0\") " Dec 09 13:00:04 crc kubenswrapper[4970]: I1209 13:00:04.090939 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0-config-volume" (OuterVolumeSpecName: "config-volume") pod "58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0" (UID: "58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 13:00:04 crc kubenswrapper[4970]: I1209 13:00:04.097916 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0" (UID: "58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 13:00:04 crc kubenswrapper[4970]: I1209 13:00:04.100446 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0-kube-api-access-fgs22" (OuterVolumeSpecName: "kube-api-access-fgs22") pod "58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0" (UID: "58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0"). InnerVolumeSpecName "kube-api-access-fgs22". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:00:04 crc kubenswrapper[4970]: I1209 13:00:04.194888 4970 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 13:00:04 crc kubenswrapper[4970]: I1209 13:00:04.194933 4970 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 13:00:04 crc kubenswrapper[4970]: I1209 13:00:04.194947 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fgs22\" (UniqueName: \"kubernetes.io/projected/58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0-kube-api-access-fgs22\") on node \"crc\" DevicePath \"\"" Dec 09 13:00:04 crc kubenswrapper[4970]: I1209 13:00:04.568916 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421420-jt5nm" event={"ID":"58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0","Type":"ContainerDied","Data":"5888be7e838c889bdd84ed2fc71280d70ad17b3bfc006ef4e94174dcd6d29827"} Dec 09 13:00:04 crc kubenswrapper[4970]: I1209 13:00:04.568981 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421420-jt5nm" Dec 09 13:00:04 crc kubenswrapper[4970]: I1209 13:00:04.568986 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5888be7e838c889bdd84ed2fc71280d70ad17b3bfc006ef4e94174dcd6d29827" Dec 09 13:00:05 crc kubenswrapper[4970]: I1209 13:00:05.079578 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421375-nmd58"] Dec 09 13:00:05 crc kubenswrapper[4970]: I1209 13:00:05.090117 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421375-nmd58"] Dec 09 13:00:05 crc kubenswrapper[4970]: I1209 13:00:05.819514 4970 scope.go:117] "RemoveContainer" containerID="0d8c59938602bf49f66f344a5d3fa9b4774e3c875b0d7138e6411bc2c4716f74" Dec 09 13:00:05 crc kubenswrapper[4970]: E1209 13:00:05.819965 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:00:05 crc kubenswrapper[4970]: I1209 13:00:05.832754 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29ab3e44-e399-4021-8cef-f80b2a17b3d8" path="/var/lib/kubelet/pods/29ab3e44-e399-4021-8cef-f80b2a17b3d8/volumes" Dec 09 13:00:11 crc kubenswrapper[4970]: E1209 13:00:11.818105 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:00:17 crc kubenswrapper[4970]: E1209 13:00:17.821351 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with 
Dec 09 13:00:20 crc kubenswrapper[4970]: I1209 13:00:20.812700 4970 scope.go:117] "RemoveContainer" containerID="0d8c59938602bf49f66f344a5d3fa9b4774e3c875b0d7138e6411bc2c4716f74"
Dec 09 13:00:21 crc kubenswrapper[4970]: I1209 13:00:21.742701 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerStarted","Data":"d708e8d693e60cdc3ddfc0e7ba17ffc03a57e486fe1510c7c0c539f931a60d30"}
Dec 09 13:00:26 crc kubenswrapper[4970]: E1209 13:00:26.814871 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3"
Dec 09 13:00:28 crc kubenswrapper[4970]: E1209 13:00:28.815893 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8"
Dec 09 13:00:38 crc kubenswrapper[4970]: E1209 13:00:38.814192 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3"
Dec 09 13:00:39 crc kubenswrapper[4970]: E1209 13:00:39.814202 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8"
Dec 09 13:00:41 crc kubenswrapper[4970]: I1209 13:00:41.720235 4970 scope.go:117] "RemoveContainer" containerID="a8a0d24c0d4264af47ab5a828a00ef11fd5760845ca5549ef66021d9727c6ad4"
Dec 09 13:00:52 crc kubenswrapper[4970]: E1209 13:00:52.814124 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3"
Dec 09 13:00:53 crc kubenswrapper[4970]: E1209 13:00:53.815184 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8"
Dec 09 13:01:00 crc kubenswrapper[4970]: I1209 13:01:00.168519 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29421421-fgrd7"]
Dec 09 13:01:00 crc kubenswrapper[4970]: E1209 13:01:00.169573 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0" containerName="collect-profiles"
Dec 09 13:01:00 crc kubenswrapper[4970]: I1209 13:01:00.169591 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0" containerName="collect-profiles"
Dec 09 13:01:00 crc kubenswrapper[4970]: I1209 13:01:00.169927 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0" containerName="collect-profiles"
Dec 09 13:01:00 crc kubenswrapper[4970]: I1209 13:01:00.170993 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29421421-fgrd7"
Dec 09 13:01:00 crc kubenswrapper[4970]: I1209 13:01:00.185853 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29421421-fgrd7"]
Dec 09 13:01:00 crc kubenswrapper[4970]: I1209 13:01:00.265423 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brl5l\" (UniqueName: \"kubernetes.io/projected/8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8-kube-api-access-brl5l\") pod \"keystone-cron-29421421-fgrd7\" (UID: \"8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8\") " pod="openstack/keystone-cron-29421421-fgrd7"
Dec 09 13:01:00 crc kubenswrapper[4970]: I1209 13:01:00.265498 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8-config-data\") pod \"keystone-cron-29421421-fgrd7\" (UID: \"8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8\") " pod="openstack/keystone-cron-29421421-fgrd7"
Dec 09 13:01:00 crc kubenswrapper[4970]: I1209 13:01:00.265687 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8-combined-ca-bundle\") pod \"keystone-cron-29421421-fgrd7\" (UID: \"8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8\") " pod="openstack/keystone-cron-29421421-fgrd7"
Dec 09 13:01:00 crc kubenswrapper[4970]: I1209 13:01:00.265769 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8-fernet-keys\") pod \"keystone-cron-29421421-fgrd7\" (UID: \"8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8\") " pod="openstack/keystone-cron-29421421-fgrd7"
Dec 09 13:01:00 crc kubenswrapper[4970]: I1209 13:01:00.368455 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brl5l\" (UniqueName: \"kubernetes.io/projected/8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8-kube-api-access-brl5l\") pod \"keystone-cron-29421421-fgrd7\" (UID: \"8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8\") " pod="openstack/keystone-cron-29421421-fgrd7"
Dec 09 13:01:00 crc kubenswrapper[4970]: I1209 13:01:00.368568 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8-config-data\") pod \"keystone-cron-29421421-fgrd7\" (UID: \"8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8\") " pod="openstack/keystone-cron-29421421-fgrd7"
Dec 09 13:01:00 crc kubenswrapper[4970]: I1209 13:01:00.370017 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8-combined-ca-bundle\") pod \"keystone-cron-29421421-fgrd7\" (UID: \"8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8\") " pod="openstack/keystone-cron-29421421-fgrd7"
Dec 09 13:01:00 crc kubenswrapper[4970]: I1209 13:01:00.370368 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8-fernet-keys\") pod \"keystone-cron-29421421-fgrd7\" (UID: \"8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8\") " pod="openstack/keystone-cron-29421421-fgrd7"
Dec 09 13:01:00 crc kubenswrapper[4970]: I1209 13:01:00.377541 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8-fernet-keys\") pod \"keystone-cron-29421421-fgrd7\" (UID: \"8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8\") " pod="openstack/keystone-cron-29421421-fgrd7"
Dec 09 13:01:00 crc kubenswrapper[4970]: I1209 13:01:00.377591 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8-combined-ca-bundle\") pod \"keystone-cron-29421421-fgrd7\" (UID: \"8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8\") " pod="openstack/keystone-cron-29421421-fgrd7"
Dec 09 13:01:00 crc kubenswrapper[4970]: I1209 13:01:00.384191 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8-config-data\") pod \"keystone-cron-29421421-fgrd7\" (UID: \"8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8\") " pod="openstack/keystone-cron-29421421-fgrd7"
Dec 09 13:01:00 crc kubenswrapper[4970]: I1209 13:01:00.388736 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brl5l\" (UniqueName: \"kubernetes.io/projected/8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8-kube-api-access-brl5l\") pod \"keystone-cron-29421421-fgrd7\" (UID: \"8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8\") " pod="openstack/keystone-cron-29421421-fgrd7"
Dec 09 13:01:00 crc kubenswrapper[4970]: I1209 13:01:00.509489 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29421421-fgrd7"
Dec 09 13:01:01 crc kubenswrapper[4970]: I1209 13:01:01.012307 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29421421-fgrd7"]
Dec 09 13:01:01 crc kubenswrapper[4970]: I1209 13:01:01.185416 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29421421-fgrd7" event={"ID":"8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8","Type":"ContainerStarted","Data":"a3ff1aa3d4285bdc711bb678f423d1a2c5ddcb5d6c7e005347995626ef2d2045"}
Dec 09 13:01:02 crc kubenswrapper[4970]: I1209 13:01:02.200790 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29421421-fgrd7" event={"ID":"8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8","Type":"ContainerStarted","Data":"aca1aa6db8b279e76492071e9446220791f06866a239fa2779fb0dd1672e82f9"}
Dec 09 13:01:02 crc kubenswrapper[4970]: I1209 13:01:02.245612 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29421421-fgrd7" podStartSLOduration=2.245591347 podStartE2EDuration="2.245591347s" podCreationTimestamp="2025-12-09 13:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 13:01:02.239498122 +0000 UTC m=+3274.799979203" watchObservedRunningTime="2025-12-09 13:01:02.245591347 +0000 UTC m=+3274.806072398"
Dec 09 13:01:04 crc kubenswrapper[4970]: I1209 13:01:04.221465 4970 generic.go:334] "Generic (PLEG): container finished" podID="8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8" containerID="aca1aa6db8b279e76492071e9446220791f06866a239fa2779fb0dd1672e82f9" exitCode=0
Dec 09 13:01:04 crc kubenswrapper[4970]: I1209 13:01:04.221905 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29421421-fgrd7" event={"ID":"8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8","Type":"ContainerDied","Data":"aca1aa6db8b279e76492071e9446220791f06866a239fa2779fb0dd1672e82f9"}
Dec 09 13:01:05 crc kubenswrapper[4970]: I1209 13:01:05.677865 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29421421-fgrd7"
Dec 09 13:01:05 crc kubenswrapper[4970]: I1209 13:01:05.832946 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brl5l\" (UniqueName: \"kubernetes.io/projected/8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8-kube-api-access-brl5l\") pod \"8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8\" (UID: \"8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8\") "
Dec 09 13:01:05 crc kubenswrapper[4970]: I1209 13:01:05.833055 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8-combined-ca-bundle\") pod \"8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8\" (UID: \"8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8\") "
Dec 09 13:01:05 crc kubenswrapper[4970]: I1209 13:01:05.833168 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8-fernet-keys\") pod \"8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8\" (UID: \"8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8\") "
Dec 09 13:01:05 crc kubenswrapper[4970]: I1209 13:01:05.833232 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8-config-data\") pod \"8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8\" (UID: \"8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8\") "
Dec 09 13:01:05 crc kubenswrapper[4970]: I1209 13:01:05.840411 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8" (UID: "8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 13:01:05 crc kubenswrapper[4970]: I1209 13:01:05.841835 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8-kube-api-access-brl5l" (OuterVolumeSpecName: "kube-api-access-brl5l") pod "8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8" (UID: "8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8"). InnerVolumeSpecName "kube-api-access-brl5l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 13:01:05 crc kubenswrapper[4970]: I1209 13:01:05.867298 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8" (UID: "8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 13:01:05 crc kubenswrapper[4970]: I1209 13:01:05.920694 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8-config-data" (OuterVolumeSpecName: "config-data") pod "8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8" (UID: "8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 13:01:05 crc kubenswrapper[4970]: I1209 13:01:05.936230 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8-config-data\") on node \"crc\" DevicePath \"\""
Dec 09 13:01:05 crc kubenswrapper[4970]: I1209 13:01:05.936280 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brl5l\" (UniqueName: \"kubernetes.io/projected/8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8-kube-api-access-brl5l\") on node \"crc\" DevicePath \"\""
Dec 09 13:01:05 crc kubenswrapper[4970]: I1209 13:01:05.936294 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 09 13:01:05 crc kubenswrapper[4970]: I1209 13:01:05.936303 4970 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8-fernet-keys\") on node \"crc\" DevicePath \"\""
Dec 09 13:01:06 crc kubenswrapper[4970]: I1209 13:01:06.246521 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29421421-fgrd7" event={"ID":"8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8","Type":"ContainerDied","Data":"a3ff1aa3d4285bdc711bb678f423d1a2c5ddcb5d6c7e005347995626ef2d2045"}
Dec 09 13:01:06 crc kubenswrapper[4970]: I1209 13:01:06.246550 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29421421-fgrd7"
Dec 09 13:01:06 crc kubenswrapper[4970]: I1209 13:01:06.246572 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3ff1aa3d4285bdc711bb678f423d1a2c5ddcb5d6c7e005347995626ef2d2045"
Dec 09 13:01:07 crc kubenswrapper[4970]: E1209 13:01:07.829077 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3"
Dec 09 13:01:08 crc kubenswrapper[4970]: E1209 13:01:08.815650 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8"
Dec 09 13:01:21 crc kubenswrapper[4970]: E1209 13:01:21.817316 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8"
Dec 09 13:01:22 crc kubenswrapper[4970]: E1209 13:01:22.816158 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3"
Dec 09 13:01:34 crc kubenswrapper[4970]: E1209 13:01:34.817402 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8"
Dec 09 13:01:34 crc kubenswrapper[4970]: E1209 13:01:34.818354 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3"
Dec 09 13:01:45 crc kubenswrapper[4970]: E1209 13:01:45.816409 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3"
Dec 09 13:01:46 crc kubenswrapper[4970]: E1209 13:01:46.815046 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8"
Dec 09 13:01:59 crc kubenswrapper[4970]: E1209 13:01:59.814743 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8"
Dec 09 13:01:59 crc kubenswrapper[4970]: E1209 13:01:59.815541 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3"
Dec 09 13:02:10 crc kubenswrapper[4970]: E1209 13:02:10.814944 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8"
Dec 09 13:02:11 crc kubenswrapper[4970]: E1209 13:02:11.818198 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3"
Dec 09 13:02:20 crc kubenswrapper[4970]: I1209 13:02:20.277171 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6ldgm"]
Dec 09 13:02:20 crc kubenswrapper[4970]: E1209 13:02:20.278795 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8" containerName="keystone-cron"
Dec 09 13:02:20 crc kubenswrapper[4970]: I1209 13:02:20.278821 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8" containerName="keystone-cron"
Dec 09 13:02:20 crc kubenswrapper[4970]: I1209 13:02:20.279296 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8" containerName="keystone-cron"
Dec 09 13:02:20 crc kubenswrapper[4970]: I1209 13:02:20.285630 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6ldgm"
Dec 09 13:02:20 crc kubenswrapper[4970]: I1209 13:02:20.301096 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6ldgm"]
Dec 09 13:02:20 crc kubenswrapper[4970]: I1209 13:02:20.423353 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bb92a54-3a68-41f0-9774-1e36686151c3-catalog-content\") pod \"redhat-operators-6ldgm\" (UID: \"8bb92a54-3a68-41f0-9774-1e36686151c3\") " pod="openshift-marketplace/redhat-operators-6ldgm"
Dec 09 13:02:20 crc kubenswrapper[4970]: I1209 13:02:20.423427 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqdcw\" (UniqueName: \"kubernetes.io/projected/8bb92a54-3a68-41f0-9774-1e36686151c3-kube-api-access-sqdcw\") pod \"redhat-operators-6ldgm\" (UID: \"8bb92a54-3a68-41f0-9774-1e36686151c3\") " pod="openshift-marketplace/redhat-operators-6ldgm"
Dec 09 13:02:20 crc kubenswrapper[4970]: I1209 13:02:20.423459 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bb92a54-3a68-41f0-9774-1e36686151c3-utilities\") pod \"redhat-operators-6ldgm\" (UID: \"8bb92a54-3a68-41f0-9774-1e36686151c3\") " pod="openshift-marketplace/redhat-operators-6ldgm"
Dec 09 13:02:20 crc kubenswrapper[4970]: I1209 13:02:20.525960 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bb92a54-3a68-41f0-9774-1e36686151c3-catalog-content\") pod \"redhat-operators-6ldgm\" (UID: \"8bb92a54-3a68-41f0-9774-1e36686151c3\") " pod="openshift-marketplace/redhat-operators-6ldgm"
Dec 09 13:02:20 crc kubenswrapper[4970]: I1209 13:02:20.526541 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bb92a54-3a68-41f0-9774-1e36686151c3-catalog-content\") pod \"redhat-operators-6ldgm\" (UID: \"8bb92a54-3a68-41f0-9774-1e36686151c3\") " pod="openshift-marketplace/redhat-operators-6ldgm"
Dec 09 13:02:20 crc kubenswrapper[4970]: I1209 13:02:20.526737 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqdcw\" (UniqueName: \"kubernetes.io/projected/8bb92a54-3a68-41f0-9774-1e36686151c3-kube-api-access-sqdcw\") pod \"redhat-operators-6ldgm\" (UID: \"8bb92a54-3a68-41f0-9774-1e36686151c3\") " pod="openshift-marketplace/redhat-operators-6ldgm"
Dec 09 13:02:20 crc kubenswrapper[4970]: I1209 13:02:20.527206 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bb92a54-3a68-41f0-9774-1e36686151c3-utilities\") pod \"redhat-operators-6ldgm\" (UID: \"8bb92a54-3a68-41f0-9774-1e36686151c3\") " pod="openshift-marketplace/redhat-operators-6ldgm"
Dec 09 13:02:20 crc kubenswrapper[4970]: I1209 13:02:20.527557 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bb92a54-3a68-41f0-9774-1e36686151c3-utilities\") pod \"redhat-operators-6ldgm\" (UID: \"8bb92a54-3a68-41f0-9774-1e36686151c3\") " pod="openshift-marketplace/redhat-operators-6ldgm"
Dec 09 13:02:20 crc kubenswrapper[4970]: I1209 13:02:20.567277 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqdcw\" (UniqueName: \"kubernetes.io/projected/8bb92a54-3a68-41f0-9774-1e36686151c3-kube-api-access-sqdcw\") pod \"redhat-operators-6ldgm\" (UID: \"8bb92a54-3a68-41f0-9774-1e36686151c3\") " pod="openshift-marketplace/redhat-operators-6ldgm"
Dec 09 13:02:20 crc kubenswrapper[4970]: I1209 13:02:20.626425 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6ldgm"
Dec 09 13:02:21 crc kubenswrapper[4970]: I1209 13:02:21.178103 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6ldgm"]
Dec 09 13:02:21 crc kubenswrapper[4970]: W1209 13:02:21.188144 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8bb92a54_3a68_41f0_9774_1e36686151c3.slice/crio-ec38dffbf6b99cff880c69affbdd297939391621c830297d6e79a24569e36d28 WatchSource:0}: Error finding container ec38dffbf6b99cff880c69affbdd297939391621c830297d6e79a24569e36d28: Status 404 returned error can't find the container with id ec38dffbf6b99cff880c69affbdd297939391621c830297d6e79a24569e36d28
Dec 09 13:02:21 crc kubenswrapper[4970]: I1209 13:02:21.217891 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ldgm" event={"ID":"8bb92a54-3a68-41f0-9774-1e36686151c3","Type":"ContainerStarted","Data":"ec38dffbf6b99cff880c69affbdd297939391621c830297d6e79a24569e36d28"}
Dec 09 13:02:22 crc kubenswrapper[4970]: I1209 13:02:22.231441 4970 generic.go:334] "Generic (PLEG): container finished" podID="8bb92a54-3a68-41f0-9774-1e36686151c3" containerID="a334c387086b8f5a8471baad11384ae371c36d5e802a39a7f72aee2ba63953a5" exitCode=0
Dec 09 13:02:22 crc kubenswrapper[4970]: I1209 13:02:22.231738 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ldgm" event={"ID":"8bb92a54-3a68-41f0-9774-1e36686151c3","Type":"ContainerDied","Data":"a334c387086b8f5a8471baad11384ae371c36d5e802a39a7f72aee2ba63953a5"}
Dec 09 13:02:24 crc kubenswrapper[4970]: I1209 13:02:24.271929 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ldgm" event={"ID":"8bb92a54-3a68-41f0-9774-1e36686151c3","Type":"ContainerStarted","Data":"40b332d673bab1a39639afc68d8f39b2b481635614186f2e85673d2d996640da"}
Dec 09 13:02:24 crc kubenswrapper[4970]: E1209 13:02:24.815341 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8"
Dec 09 13:02:24 crc kubenswrapper[4970]: E1209 13:02:24.815360 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3"
pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:02:30 crc kubenswrapper[4970]: I1209 13:02:30.363096 4970 generic.go:334] "Generic (PLEG): container finished" podID="8bb92a54-3a68-41f0-9774-1e36686151c3" containerID="40b332d673bab1a39639afc68d8f39b2b481635614186f2e85673d2d996640da" exitCode=0 Dec 09 13:02:30 crc kubenswrapper[4970]: I1209 13:02:30.363194 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ldgm" event={"ID":"8bb92a54-3a68-41f0-9774-1e36686151c3","Type":"ContainerDied","Data":"40b332d673bab1a39639afc68d8f39b2b481635614186f2e85673d2d996640da"} Dec 09 13:02:31 crc kubenswrapper[4970]: I1209 13:02:31.376766 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ldgm" event={"ID":"8bb92a54-3a68-41f0-9774-1e36686151c3","Type":"ContainerStarted","Data":"e1551214360f56374fdc61d80860f37091c0796ddb8309cd0d2c06e52c21fbee"} Dec 09 13:02:31 crc kubenswrapper[4970]: I1209 13:02:31.420736 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6ldgm" podStartSLOduration=2.901507007 podStartE2EDuration="11.420707692s" podCreationTimestamp="2025-12-09 13:02:20 +0000 UTC" firstStartedPulling="2025-12-09 13:02:22.235077847 +0000 UTC m=+3354.795558898" lastFinishedPulling="2025-12-09 13:02:30.754278462 +0000 UTC m=+3363.314759583" observedRunningTime="2025-12-09 13:02:31.411999846 +0000 UTC m=+3363.972480917" watchObservedRunningTime="2025-12-09 13:02:31.420707692 +0000 UTC m=+3363.981188753" Dec 09 13:02:36 crc kubenswrapper[4970]: E1209 13:02:36.815581 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:02:37 crc kubenswrapper[4970]: E1209 13:02:37.824932 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:02:40 crc kubenswrapper[4970]: I1209 13:02:40.627141 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6ldgm" Dec 09 13:02:40 crc kubenswrapper[4970]: I1209 13:02:40.629014 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6ldgm" Dec 09 13:02:40 crc kubenswrapper[4970]: I1209 13:02:40.700580 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6ldgm" Dec 09 13:02:41 crc kubenswrapper[4970]: I1209 13:02:41.554124 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6ldgm" Dec 09 13:02:41 crc kubenswrapper[4970]: I1209 13:02:41.606700 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6ldgm"] Dec 09 13:02:43 crc kubenswrapper[4970]: I1209 13:02:43.500052 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6ldgm" 
podUID="8bb92a54-3a68-41f0-9774-1e36686151c3" containerName="registry-server" containerID="cri-o://e1551214360f56374fdc61d80860f37091c0796ddb8309cd0d2c06e52c21fbee" gracePeriod=2 Dec 09 13:02:44 crc kubenswrapper[4970]: I1209 13:02:44.030567 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6ldgm" Dec 09 13:02:44 crc kubenswrapper[4970]: I1209 13:02:44.166772 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bb92a54-3a68-41f0-9774-1e36686151c3-catalog-content\") pod \"8bb92a54-3a68-41f0-9774-1e36686151c3\" (UID: \"8bb92a54-3a68-41f0-9774-1e36686151c3\") " Dec 09 13:02:44 crc kubenswrapper[4970]: I1209 13:02:44.167037 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bb92a54-3a68-41f0-9774-1e36686151c3-utilities\") pod \"8bb92a54-3a68-41f0-9774-1e36686151c3\" (UID: \"8bb92a54-3a68-41f0-9774-1e36686151c3\") " Dec 09 13:02:44 crc kubenswrapper[4970]: I1209 13:02:44.167058 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqdcw\" (UniqueName: \"kubernetes.io/projected/8bb92a54-3a68-41f0-9774-1e36686151c3-kube-api-access-sqdcw\") pod \"8bb92a54-3a68-41f0-9774-1e36686151c3\" (UID: \"8bb92a54-3a68-41f0-9774-1e36686151c3\") " Dec 09 13:02:44 crc kubenswrapper[4970]: I1209 13:02:44.168922 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bb92a54-3a68-41f0-9774-1e36686151c3-utilities" (OuterVolumeSpecName: "utilities") pod "8bb92a54-3a68-41f0-9774-1e36686151c3" (UID: "8bb92a54-3a68-41f0-9774-1e36686151c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:02:44 crc kubenswrapper[4970]: I1209 13:02:44.176569 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bb92a54-3a68-41f0-9774-1e36686151c3-kube-api-access-sqdcw" (OuterVolumeSpecName: "kube-api-access-sqdcw") pod "8bb92a54-3a68-41f0-9774-1e36686151c3" (UID: "8bb92a54-3a68-41f0-9774-1e36686151c3"). InnerVolumeSpecName "kube-api-access-sqdcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:02:44 crc kubenswrapper[4970]: I1209 13:02:44.177212 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bb92a54-3a68-41f0-9774-1e36686151c3-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 13:02:44 crc kubenswrapper[4970]: I1209 13:02:44.177235 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqdcw\" (UniqueName: \"kubernetes.io/projected/8bb92a54-3a68-41f0-9774-1e36686151c3-kube-api-access-sqdcw\") on node \"crc\" DevicePath \"\"" Dec 09 13:02:44 crc kubenswrapper[4970]: I1209 13:02:44.304661 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bb92a54-3a68-41f0-9774-1e36686151c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8bb92a54-3a68-41f0-9774-1e36686151c3" (UID: "8bb92a54-3a68-41f0-9774-1e36686151c3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:02:44 crc kubenswrapper[4970]: I1209 13:02:44.382934 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bb92a54-3a68-41f0-9774-1e36686151c3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 13:02:44 crc kubenswrapper[4970]: I1209 13:02:44.511198 4970 generic.go:334] "Generic (PLEG): container finished" podID="8bb92a54-3a68-41f0-9774-1e36686151c3" containerID="e1551214360f56374fdc61d80860f37091c0796ddb8309cd0d2c06e52c21fbee" exitCode=0 Dec 09 13:02:44 crc kubenswrapper[4970]: I1209 13:02:44.511237 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ldgm" event={"ID":"8bb92a54-3a68-41f0-9774-1e36686151c3","Type":"ContainerDied","Data":"e1551214360f56374fdc61d80860f37091c0796ddb8309cd0d2c06e52c21fbee"} Dec 09 13:02:44 crc kubenswrapper[4970]: I1209 13:02:44.511290 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6ldgm" event={"ID":"8bb92a54-3a68-41f0-9774-1e36686151c3","Type":"ContainerDied","Data":"ec38dffbf6b99cff880c69affbdd297939391621c830297d6e79a24569e36d28"} Dec 09 13:02:44 crc kubenswrapper[4970]: I1209 13:02:44.511301 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6ldgm" Dec 09 13:02:44 crc kubenswrapper[4970]: I1209 13:02:44.511306 4970 scope.go:117] "RemoveContainer" containerID="e1551214360f56374fdc61d80860f37091c0796ddb8309cd0d2c06e52c21fbee" Dec 09 13:02:44 crc kubenswrapper[4970]: I1209 13:02:44.540432 4970 scope.go:117] "RemoveContainer" containerID="40b332d673bab1a39639afc68d8f39b2b481635614186f2e85673d2d996640da" Dec 09 13:02:44 crc kubenswrapper[4970]: I1209 13:02:44.565184 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6ldgm"] Dec 09 13:02:44 crc kubenswrapper[4970]: I1209 13:02:44.579577 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6ldgm"] Dec 09 13:02:44 crc kubenswrapper[4970]: I1209 13:02:44.585174 4970 scope.go:117] "RemoveContainer" containerID="a334c387086b8f5a8471baad11384ae371c36d5e802a39a7f72aee2ba63953a5" Dec 09 13:02:44 crc kubenswrapper[4970]: I1209 13:02:44.660941 4970 scope.go:117] "RemoveContainer" containerID="e1551214360f56374fdc61d80860f37091c0796ddb8309cd0d2c06e52c21fbee" Dec 09 13:02:44 crc kubenswrapper[4970]: E1209 13:02:44.663783 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1551214360f56374fdc61d80860f37091c0796ddb8309cd0d2c06e52c21fbee\": container with ID starting with e1551214360f56374fdc61d80860f37091c0796ddb8309cd0d2c06e52c21fbee not found: ID does not exist" containerID="e1551214360f56374fdc61d80860f37091c0796ddb8309cd0d2c06e52c21fbee" Dec 09 13:02:44 crc kubenswrapper[4970]: I1209 13:02:44.663837 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1551214360f56374fdc61d80860f37091c0796ddb8309cd0d2c06e52c21fbee"} err="failed to get container status \"e1551214360f56374fdc61d80860f37091c0796ddb8309cd0d2c06e52c21fbee\": rpc error: code = NotFound desc = could not find container \"e1551214360f56374fdc61d80860f37091c0796ddb8309cd0d2c06e52c21fbee\": container with ID starting with e1551214360f56374fdc61d80860f37091c0796ddb8309cd0d2c06e52c21fbee not found: ID does not exist" Dec 09 13:02:44 crc 
Dec 09 13:02:44 crc kubenswrapper[4970]: E1209 13:02:44.664391 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40b332d673bab1a39639afc68d8f39b2b481635614186f2e85673d2d996640da\": container with ID starting with 40b332d673bab1a39639afc68d8f39b2b481635614186f2e85673d2d996640da not found: ID does not exist" containerID="40b332d673bab1a39639afc68d8f39b2b481635614186f2e85673d2d996640da"
Dec 09 13:02:44 crc kubenswrapper[4970]: I1209 13:02:44.664463 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40b332d673bab1a39639afc68d8f39b2b481635614186f2e85673d2d996640da"} err="failed to get container status \"40b332d673bab1a39639afc68d8f39b2b481635614186f2e85673d2d996640da\": rpc error: code = NotFound desc = could not find container \"40b332d673bab1a39639afc68d8f39b2b481635614186f2e85673d2d996640da\": container with ID starting with 40b332d673bab1a39639afc68d8f39b2b481635614186f2e85673d2d996640da not found: ID does not exist"
Dec 09 13:02:44 crc kubenswrapper[4970]: I1209 13:02:44.664541 4970 scope.go:117] "RemoveContainer" containerID="a334c387086b8f5a8471baad11384ae371c36d5e802a39a7f72aee2ba63953a5"
Dec 09 13:02:44 crc kubenswrapper[4970]: E1209 13:02:44.664919 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a334c387086b8f5a8471baad11384ae371c36d5e802a39a7f72aee2ba63953a5\": container with ID starting with a334c387086b8f5a8471baad11384ae371c36d5e802a39a7f72aee2ba63953a5 not found: ID does not exist" containerID="a334c387086b8f5a8471baad11384ae371c36d5e802a39a7f72aee2ba63953a5"
Dec 09 13:02:44 crc kubenswrapper[4970]: I1209 13:02:44.664966 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a334c387086b8f5a8471baad11384ae371c36d5e802a39a7f72aee2ba63953a5"} err="failed to get container status \"a334c387086b8f5a8471baad11384ae371c36d5e802a39a7f72aee2ba63953a5\": rpc error: code = NotFound desc = could not find container \"a334c387086b8f5a8471baad11384ae371c36d5e802a39a7f72aee2ba63953a5\": container with ID starting with a334c387086b8f5a8471baad11384ae371c36d5e802a39a7f72aee2ba63953a5 not found: ID does not exist"
Dec 09 13:02:45 crc kubenswrapper[4970]: I1209 13:02:45.832023 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bb92a54-3a68-41f0-9774-1e36686151c3" path="/var/lib/kubelet/pods/8bb92a54-3a68-41f0-9774-1e36686151c3/volumes"
Dec 09 13:02:46 crc kubenswrapper[4970]: I1209 13:02:46.010528 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 13:02:46 crc kubenswrapper[4970]: I1209 13:02:46.010600 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 13:02:47 crc kubenswrapper[4970]: E1209 13:02:47.831499 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8"
Dec 09 13:02:52 crc kubenswrapper[4970]: E1209 13:02:52.816637 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3"
Dec 09 13:03:02 crc kubenswrapper[4970]: E1209 13:03:02.815346 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8"
Dec 09 13:03:04 crc kubenswrapper[4970]: E1209 13:03:04.814308 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3"
Dec 09 13:03:15 crc kubenswrapper[4970]: E1209 13:03:15.816848 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3"
Dec 09 13:03:15 crc kubenswrapper[4970]: E1209 13:03:15.817588 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8"
Dec 09 13:03:16 crc kubenswrapper[4970]: I1209 13:03:16.011271 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 13:03:16 crc kubenswrapper[4970]: I1209 13:03:16.011333 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 13:03:26 crc kubenswrapper[4970]: E1209 13:03:26.815579 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8"
Dec 09 13:03:29 crc kubenswrapper[4970]: I1209 13:03:29.816418 4970 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 09 13:03:29 crc kubenswrapper[4970]: E1209 13:03:29.914777 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Dec 09 13:03:29 crc kubenswrapper[4970]: E1209 13:03:29.914863 4970 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Dec 09 13:03:29 crc kubenswrapper[4970]: E1209 13:03:29.915042 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w5gl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-snjvx_openstack(cda5b8c0-51ad-4a63-bf1d-e3546f098ad3): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Dec 09 13:03:29 crc kubenswrapper[4970]: E1209 13:03:29.916412 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3"
Dec 09 13:03:37 crc kubenswrapper[4970]: I1209 13:03:37.277899 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gzzjd"]
Dec 09 13:03:37 crc kubenswrapper[4970]: E1209 13:03:37.279103 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bb92a54-3a68-41f0-9774-1e36686151c3" containerName="registry-server"
Dec 09 13:03:37 crc kubenswrapper[4970]: I1209 13:03:37.279121 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bb92a54-3a68-41f0-9774-1e36686151c3" containerName="registry-server"
Dec 09 13:03:37 crc kubenswrapper[4970]: E1209 13:03:37.279146 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bb92a54-3a68-41f0-9774-1e36686151c3" containerName="extract-utilities"
Dec 09 13:03:37 crc kubenswrapper[4970]: I1209 13:03:37.279154 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bb92a54-3a68-41f0-9774-1e36686151c3" containerName="extract-utilities"
Dec 09 13:03:37 crc kubenswrapper[4970]: E1209 13:03:37.279206 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bb92a54-3a68-41f0-9774-1e36686151c3" containerName="extract-content"
Dec 09 13:03:37 crc kubenswrapper[4970]: I1209 13:03:37.279215 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bb92a54-3a68-41f0-9774-1e36686151c3" containerName="extract-content"
Dec 09 13:03:37 crc kubenswrapper[4970]: I1209 13:03:37.279507 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bb92a54-3a68-41f0-9774-1e36686151c3" containerName="registry-server"
Dec 09 13:03:37 crc kubenswrapper[4970]: I1209 13:03:37.281560 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gzzjd"
Dec 09 13:03:37 crc kubenswrapper[4970]: I1209 13:03:37.287794 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gzzjd"]
Dec 09 13:03:37 crc kubenswrapper[4970]: I1209 13:03:37.447621 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/585aad77-6aa5-456f-922a-4d8efc113784-catalog-content\") pod \"certified-operators-gzzjd\" (UID: \"585aad77-6aa5-456f-922a-4d8efc113784\") " pod="openshift-marketplace/certified-operators-gzzjd"
Dec 09 13:03:37 crc kubenswrapper[4970]: I1209 13:03:37.447764 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/585aad77-6aa5-456f-922a-4d8efc113784-utilities\") pod \"certified-operators-gzzjd\" (UID: \"585aad77-6aa5-456f-922a-4d8efc113784\") " pod="openshift-marketplace/certified-operators-gzzjd"
Dec 09 13:03:37 crc kubenswrapper[4970]: I1209 13:03:37.447910 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htdq9\" (UniqueName: \"kubernetes.io/projected/585aad77-6aa5-456f-922a-4d8efc113784-kube-api-access-htdq9\") pod \"certified-operators-gzzjd\" (UID: \"585aad77-6aa5-456f-922a-4d8efc113784\") " pod="openshift-marketplace/certified-operators-gzzjd"
Dec 09 13:03:37 crc kubenswrapper[4970]: I1209 13:03:37.549727 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htdq9\" (UniqueName: \"kubernetes.io/projected/585aad77-6aa5-456f-922a-4d8efc113784-kube-api-access-htdq9\") pod \"certified-operators-gzzjd\" (UID: \"585aad77-6aa5-456f-922a-4d8efc113784\") " pod="openshift-marketplace/certified-operators-gzzjd"
Dec 09 13:03:37 crc kubenswrapper[4970]: I1209 13:03:37.549874 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/585aad77-6aa5-456f-922a-4d8efc113784-catalog-content\") pod \"certified-operators-gzzjd\" (UID: \"585aad77-6aa5-456f-922a-4d8efc113784\") " pod="openshift-marketplace/certified-operators-gzzjd"
Dec 09 13:03:37 crc kubenswrapper[4970]: I1209 13:03:37.550301 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/585aad77-6aa5-456f-922a-4d8efc113784-catalog-content\") pod \"certified-operators-gzzjd\" (UID: \"585aad77-6aa5-456f-922a-4d8efc113784\") " pod="openshift-marketplace/certified-operators-gzzjd"
Dec 09 13:03:37 crc kubenswrapper[4970]: I1209 13:03:37.550368 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/585aad77-6aa5-456f-922a-4d8efc113784-utilities\") pod \"certified-operators-gzzjd\" (UID: \"585aad77-6aa5-456f-922a-4d8efc113784\") " pod="openshift-marketplace/certified-operators-gzzjd"
Dec 09 13:03:37 crc kubenswrapper[4970]: I1209 13:03:37.550590 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/585aad77-6aa5-456f-922a-4d8efc113784-utilities\") pod \"certified-operators-gzzjd\" (UID: \"585aad77-6aa5-456f-922a-4d8efc113784\") " pod="openshift-marketplace/certified-operators-gzzjd"
Dec 09 13:03:37 crc kubenswrapper[4970]: I1209 13:03:37.567724 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htdq9\" (UniqueName: \"kubernetes.io/projected/585aad77-6aa5-456f-922a-4d8efc113784-kube-api-access-htdq9\") pod \"certified-operators-gzzjd\" (UID: \"585aad77-6aa5-456f-922a-4d8efc113784\") " pod="openshift-marketplace/certified-operators-gzzjd"
Dec 09 13:03:37 crc kubenswrapper[4970]: I1209 13:03:37.601404 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gzzjd"
Dec 09 13:03:38 crc kubenswrapper[4970]: W1209 13:03:38.149597 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod585aad77_6aa5_456f_922a_4d8efc113784.slice/crio-b66e0a8548ec473d60cce9c4e8a0db3469385da5c5de70b6337c41a4308e3009 WatchSource:0}: Error finding container b66e0a8548ec473d60cce9c4e8a0db3469385da5c5de70b6337c41a4308e3009: Status 404 returned error can't find the container with id b66e0a8548ec473d60cce9c4e8a0db3469385da5c5de70b6337c41a4308e3009
Dec 09 13:03:38 crc kubenswrapper[4970]: I1209 13:03:38.162712 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gzzjd"]
Dec 09 13:03:39 crc kubenswrapper[4970]: I1209 13:03:39.168651 4970 generic.go:334] "Generic (PLEG): container finished" podID="585aad77-6aa5-456f-922a-4d8efc113784" containerID="1422c52f9cb9c22250fd4e28d12b998174a76a7040dfc5e29362b9157f5721ef" exitCode=0
Dec 09 13:03:39 crc kubenswrapper[4970]: I1209 13:03:39.168861 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzzjd" event={"ID":"585aad77-6aa5-456f-922a-4d8efc113784","Type":"ContainerDied","Data":"1422c52f9cb9c22250fd4e28d12b998174a76a7040dfc5e29362b9157f5721ef"}
Dec 09 13:03:39 crc kubenswrapper[4970]: I1209 13:03:39.168885 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzzjd" event={"ID":"585aad77-6aa5-456f-922a-4d8efc113784","Type":"ContainerStarted","Data":"b66e0a8548ec473d60cce9c4e8a0db3469385da5c5de70b6337c41a4308e3009"}
Dec 09 13:03:40 crc kubenswrapper[4970]: E1209 13:03:40.814663 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8"
Dec 09 13:03:41 crc kubenswrapper[4970]: I1209 13:03:41.198980 4970 generic.go:334] "Generic (PLEG): container finished" podID="585aad77-6aa5-456f-922a-4d8efc113784" containerID="97989d4992e3faddfb1885792eb368321c760c5d741101c7b21ad1a1211deaad" exitCode=0
Dec 09 13:03:41 crc kubenswrapper[4970]: I1209 13:03:41.199029 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzzjd" event={"ID":"585aad77-6aa5-456f-922a-4d8efc113784","Type":"ContainerDied","Data":"97989d4992e3faddfb1885792eb368321c760c5d741101c7b21ad1a1211deaad"}
Dec 09 13:03:42 crc kubenswrapper[4970]: I1209 13:03:42.210417 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzzjd" event={"ID":"585aad77-6aa5-456f-922a-4d8efc113784","Type":"ContainerStarted","Data":"1194d2c458eca6368ac375fbddb2835cba707530ee784d008a64a2559e16cd15"}
Dec 09 13:03:42 crc kubenswrapper[4970]: I1209 13:03:42.229856 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gzzjd" podStartSLOduration=2.797378529 podStartE2EDuration="5.22984021s" podCreationTimestamp="2025-12-09 13:03:37 +0000 UTC" firstStartedPulling="2025-12-09 13:03:39.170634334 +0000 UTC m=+3431.731115385" lastFinishedPulling="2025-12-09 13:03:41.603096005 +0000 UTC m=+3434.163577066" observedRunningTime="2025-12-09 13:03:42.227926708 +0000 UTC m=+3434.788407749" watchObservedRunningTime="2025-12-09 13:03:42.22984021 +0000 UTC m=+3434.790321261"
Dec 09 13:03:44 crc kubenswrapper[4970]: E1209 13:03:44.816790 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3"
Dec 09 13:03:46 crc kubenswrapper[4970]: I1209 13:03:46.010801 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 13:03:46 crc kubenswrapper[4970]: I1209 13:03:46.011190 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 13:03:46 crc kubenswrapper[4970]: I1209 13:03:46.011269 4970 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh"
Dec 09 13:03:46 crc kubenswrapper[4970]: I1209 13:03:46.012228 4970 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d708e8d693e60cdc3ddfc0e7ba17ffc03a57e486fe1510c7c0c539f931a60d30"} pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 09 13:03:46 crc kubenswrapper[4970]: I1209 13:03:46.012384 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" containerID="cri-o://d708e8d693e60cdc3ddfc0e7ba17ffc03a57e486fe1510c7c0c539f931a60d30" gracePeriod=600
Dec 09 13:03:47 crc kubenswrapper[4970]: I1209 13:03:47.271539 4970 generic.go:334] "Generic (PLEG): container finished" podID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerID="d708e8d693e60cdc3ddfc0e7ba17ffc03a57e486fe1510c7c0c539f931a60d30" exitCode=0
Dec 09 13:03:47 crc kubenswrapper[4970]: I1209 13:03:47.271623 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerDied","Data":"d708e8d693e60cdc3ddfc0e7ba17ffc03a57e486fe1510c7c0c539f931a60d30"}
Dec 09 13:03:47 crc kubenswrapper[4970]: I1209 13:03:47.271853 4970 scope.go:117] "RemoveContainer" containerID="0d8c59938602bf49f66f344a5d3fa9b4774e3c875b0d7138e6411bc2c4716f74"
Dec 09 13:03:47 crc kubenswrapper[4970]: I1209 13:03:47.602591 4970 kubelet.go:2542] "SyncLoop
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gzzjd" Dec 09 13:03:47 crc kubenswrapper[4970]: I1209 13:03:47.602637 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gzzjd" Dec 09 13:03:47 crc kubenswrapper[4970]: I1209 13:03:47.671655 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gzzjd" Dec 09 13:03:48 crc kubenswrapper[4970]: I1209 13:03:48.287742 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerStarted","Data":"b52590bfcf83aef355cce3d9df54a920b86a238618ae472d4c9d5510ccd01139"} Dec 09 13:03:48 crc kubenswrapper[4970]: I1209 13:03:48.349634 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gzzjd" Dec 09 13:03:51 crc kubenswrapper[4970]: I1209 13:03:51.241747 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gzzjd"] Dec 09 13:03:51 crc kubenswrapper[4970]: I1209 13:03:51.242528 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gzzjd" podUID="585aad77-6aa5-456f-922a-4d8efc113784" containerName="registry-server" containerID="cri-o://1194d2c458eca6368ac375fbddb2835cba707530ee784d008a64a2559e16cd15" gracePeriod=2 Dec 09 13:03:51 crc kubenswrapper[4970]: E1209 13:03:51.815182 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:03:52 crc kubenswrapper[4970]: I1209 13:03:52.295225 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gzzjd" Dec 09 13:03:52 crc kubenswrapper[4970]: I1209 13:03:52.343776 4970 generic.go:334] "Generic (PLEG): container finished" podID="585aad77-6aa5-456f-922a-4d8efc113784" containerID="1194d2c458eca6368ac375fbddb2835cba707530ee784d008a64a2559e16cd15" exitCode=0 Dec 09 13:03:52 crc kubenswrapper[4970]: I1209 13:03:52.343921 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gzzjd" Dec 09 13:03:52 crc kubenswrapper[4970]: I1209 13:03:52.343903 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzzjd" event={"ID":"585aad77-6aa5-456f-922a-4d8efc113784","Type":"ContainerDied","Data":"1194d2c458eca6368ac375fbddb2835cba707530ee784d008a64a2559e16cd15"} Dec 09 13:03:52 crc kubenswrapper[4970]: I1209 13:03:52.344604 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gzzjd" event={"ID":"585aad77-6aa5-456f-922a-4d8efc113784","Type":"ContainerDied","Data":"b66e0a8548ec473d60cce9c4e8a0db3469385da5c5de70b6337c41a4308e3009"} Dec 09 13:03:52 crc kubenswrapper[4970]: I1209 13:03:52.344650 4970 scope.go:117] "RemoveContainer" containerID="1194d2c458eca6368ac375fbddb2835cba707530ee784d008a64a2559e16cd15" Dec 09 13:03:52 crc kubenswrapper[4970]: I1209 13:03:52.368871 4970 scope.go:117] "RemoveContainer" containerID="97989d4992e3faddfb1885792eb368321c760c5d741101c7b21ad1a1211deaad" Dec 09 13:03:52 crc kubenswrapper[4970]: I1209 13:03:52.419814 4970 scope.go:117] "RemoveContainer" containerID="1422c52f9cb9c22250fd4e28d12b998174a76a7040dfc5e29362b9157f5721ef" Dec 09 13:03:52 crc kubenswrapper[4970]: I1209 13:03:52.454740 4970 scope.go:117] "RemoveContainer" containerID="1194d2c458eca6368ac375fbddb2835cba707530ee784d008a64a2559e16cd15" Dec 09 13:03:52 crc kubenswrapper[4970]: E1209 13:03:52.455105 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1194d2c458eca6368ac375fbddb2835cba707530ee784d008a64a2559e16cd15\": container with ID starting with 1194d2c458eca6368ac375fbddb2835cba707530ee784d008a64a2559e16cd15 not found: ID does not exist" containerID="1194d2c458eca6368ac375fbddb2835cba707530ee784d008a64a2559e16cd15" Dec 09 13:03:52 crc kubenswrapper[4970]: I1209 13:03:52.455142 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1194d2c458eca6368ac375fbddb2835cba707530ee784d008a64a2559e16cd15"} err="failed to get container status \"1194d2c458eca6368ac375fbddb2835cba707530ee784d008a64a2559e16cd15\": rpc error: code = NotFound desc = could not find container \"1194d2c458eca6368ac375fbddb2835cba707530ee784d008a64a2559e16cd15\": container with ID starting with 1194d2c458eca6368ac375fbddb2835cba707530ee784d008a64a2559e16cd15 not found: ID does not exist" Dec 09 13:03:52 crc kubenswrapper[4970]: I1209 13:03:52.455161 4970 scope.go:117] "RemoveContainer" containerID="97989d4992e3faddfb1885792eb368321c760c5d741101c7b21ad1a1211deaad" Dec 09 13:03:52 crc kubenswrapper[4970]: I1209 13:03:52.456041 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htdq9\" (UniqueName: \"kubernetes.io/projected/585aad77-6aa5-456f-922a-4d8efc113784-kube-api-access-htdq9\") pod \"585aad77-6aa5-456f-922a-4d8efc113784\" (UID: \"585aad77-6aa5-456f-922a-4d8efc113784\") " Dec 09 13:03:52 crc kubenswrapper[4970]: I1209 13:03:52.456239 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/585aad77-6aa5-456f-922a-4d8efc113784-utilities\") pod \"585aad77-6aa5-456f-922a-4d8efc113784\" (UID: \"585aad77-6aa5-456f-922a-4d8efc113784\") " Dec 09 13:03:52 crc kubenswrapper[4970]: I1209 13:03:52.456358 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/585aad77-6aa5-456f-922a-4d8efc113784-catalog-content\") pod \"585aad77-6aa5-456f-922a-4d8efc113784\" (UID: \"585aad77-6aa5-456f-922a-4d8efc113784\") " Dec 09 13:03:52 crc kubenswrapper[4970]: E1209 13:03:52.456908 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97989d4992e3faddfb1885792eb368321c760c5d741101c7b21ad1a1211deaad\": container with ID starting with 97989d4992e3faddfb1885792eb368321c760c5d741101c7b21ad1a1211deaad not found: ID does not exist" containerID="97989d4992e3faddfb1885792eb368321c760c5d741101c7b21ad1a1211deaad" Dec 09 13:03:52 crc kubenswrapper[4970]: I1209 13:03:52.456932 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97989d4992e3faddfb1885792eb368321c760c5d741101c7b21ad1a1211deaad"} err="failed to get container status \"97989d4992e3faddfb1885792eb368321c760c5d741101c7b21ad1a1211deaad\": rpc error: code = NotFound desc = could not find container \"97989d4992e3faddfb1885792eb368321c760c5d741101c7b21ad1a1211deaad\": container with ID starting with 97989d4992e3faddfb1885792eb368321c760c5d741101c7b21ad1a1211deaad not found: ID does not exist" Dec 09 13:03:52 crc kubenswrapper[4970]: I1209 13:03:52.456947 4970 scope.go:117] "RemoveContainer" containerID="1422c52f9cb9c22250fd4e28d12b998174a76a7040dfc5e29362b9157f5721ef" Dec 09 13:03:52 crc kubenswrapper[4970]: E1209 13:03:52.457205 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1422c52f9cb9c22250fd4e28d12b998174a76a7040dfc5e29362b9157f5721ef\": container with ID starting with 1422c52f9cb9c22250fd4e28d12b998174a76a7040dfc5e29362b9157f5721ef not found: ID does not exist" containerID="1422c52f9cb9c22250fd4e28d12b998174a76a7040dfc5e29362b9157f5721ef" Dec 09 13:03:52 crc kubenswrapper[4970]: I1209 13:03:52.457221 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1422c52f9cb9c22250fd4e28d12b998174a76a7040dfc5e29362b9157f5721ef"} err="failed to get container status \"1422c52f9cb9c22250fd4e28d12b998174a76a7040dfc5e29362b9157f5721ef\": rpc error: code = NotFound desc = could not find container \"1422c52f9cb9c22250fd4e28d12b998174a76a7040dfc5e29362b9157f5721ef\": container with ID starting with 1422c52f9cb9c22250fd4e28d12b998174a76a7040dfc5e29362b9157f5721ef not found: ID does not exist" Dec 09 13:03:52 crc kubenswrapper[4970]: I1209 13:03:52.457289 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/585aad77-6aa5-456f-922a-4d8efc113784-utilities" (OuterVolumeSpecName: "utilities") pod "585aad77-6aa5-456f-922a-4d8efc113784" (UID: "585aad77-6aa5-456f-922a-4d8efc113784"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:03:52 crc kubenswrapper[4970]: I1209 13:03:52.466982 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/585aad77-6aa5-456f-922a-4d8efc113784-kube-api-access-htdq9" (OuterVolumeSpecName: "kube-api-access-htdq9") pod "585aad77-6aa5-456f-922a-4d8efc113784" (UID: "585aad77-6aa5-456f-922a-4d8efc113784"). InnerVolumeSpecName "kube-api-access-htdq9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:03:52 crc kubenswrapper[4970]: I1209 13:03:52.504606 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/585aad77-6aa5-456f-922a-4d8efc113784-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "585aad77-6aa5-456f-922a-4d8efc113784" (UID: "585aad77-6aa5-456f-922a-4d8efc113784"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:03:52 crc kubenswrapper[4970]: I1209 13:03:52.558661 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/585aad77-6aa5-456f-922a-4d8efc113784-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 13:03:52 crc kubenswrapper[4970]: I1209 13:03:52.558713 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/585aad77-6aa5-456f-922a-4d8efc113784-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 13:03:52 crc kubenswrapper[4970]: I1209 13:03:52.558730 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htdq9\" (UniqueName: \"kubernetes.io/projected/585aad77-6aa5-456f-922a-4d8efc113784-kube-api-access-htdq9\") on node \"crc\" DevicePath \"\"" Dec 09 13:03:52 crc kubenswrapper[4970]: I1209 13:03:52.700709 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gzzjd"] Dec 09 13:03:52 crc kubenswrapper[4970]: I1209 13:03:52.718168 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gzzjd"] Dec 09 13:03:53 crc kubenswrapper[4970]: I1209 13:03:53.835186 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="585aad77-6aa5-456f-922a-4d8efc113784" path="/var/lib/kubelet/pods/585aad77-6aa5-456f-922a-4d8efc113784/volumes" Dec 09 13:03:56 crc kubenswrapper[4970]: E1209 13:03:56.815118 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:04:05 crc kubenswrapper[4970]: E1209 13:04:05.944949 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 13:04:05 crc kubenswrapper[4970]: E1209 13:04:05.945526 4970 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 13:04:05 crc kubenswrapper[4970]: E1209 13:04:05.945685 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n679h5b7h5f8h695h656h57bh5c7h7bh5c5hc4h5hbch654h5h5b7h54dh5fh688h548h57bh6fh54fh5d4hf7h5ffh54ch566h565hdchb4h65dh5fbq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqqs9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ea52f6b9-599e-4ac5-94c6-79949c705be8): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 13:04:05 crc kubenswrapper[4970]: E1209 13:04:05.946898 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:04:10 crc kubenswrapper[4970]: E1209 13:04:10.815148 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:04:19 crc kubenswrapper[4970]: E1209 13:04:19.815302 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:04:22 crc kubenswrapper[4970]: E1209 13:04:22.817390 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:04:34 crc kubenswrapper[4970]: E1209 13:04:34.816276 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:04:36 crc kubenswrapper[4970]: E1209 13:04:36.815046 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:04:46 crc kubenswrapper[4970]: E1209 13:04:46.816912 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:04:48 crc kubenswrapper[4970]: E1209 13:04:48.817295 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:04:54 crc kubenswrapper[4970]: I1209 13:04:54.156998 4970 generic.go:334] "Generic (PLEG): container finished" podID="dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c" containerID="5721d7399f1f75a5a6a6aa2415889833494565f7e344887e272916788929705e" exitCode=2 Dec 09 13:04:54 crc kubenswrapper[4970]: I1209 13:04:54.157110 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pxgd7" event={"ID":"dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c","Type":"ContainerDied","Data":"5721d7399f1f75a5a6a6aa2415889833494565f7e344887e272916788929705e"} Dec 09 13:04:55 crc 
kubenswrapper[4970]: I1209 13:04:55.601295 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pxgd7" Dec 09 13:04:55 crc kubenswrapper[4970]: I1209 13:04:55.681111 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c-inventory\") pod \"dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c\" (UID: \"dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c\") " Dec 09 13:04:55 crc kubenswrapper[4970]: I1209 13:04:55.681442 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c-ssh-key\") pod \"dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c\" (UID: \"dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c\") " Dec 09 13:04:55 crc kubenswrapper[4970]: I1209 13:04:55.681598 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjbzj\" (UniqueName: \"kubernetes.io/projected/dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c-kube-api-access-xjbzj\") pod \"dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c\" (UID: \"dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c\") " Dec 09 13:04:55 crc kubenswrapper[4970]: I1209 13:04:55.704907 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c-kube-api-access-xjbzj" (OuterVolumeSpecName: "kube-api-access-xjbzj") pod "dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c" (UID: "dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c"). InnerVolumeSpecName "kube-api-access-xjbzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:04:55 crc kubenswrapper[4970]: I1209 13:04:55.726460 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c-inventory" (OuterVolumeSpecName: "inventory") pod "dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c" (UID: "dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 13:04:55 crc kubenswrapper[4970]: I1209 13:04:55.739666 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c" (UID: "dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 13:04:55 crc kubenswrapper[4970]: I1209 13:04:55.784851 4970 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 13:04:55 crc kubenswrapper[4970]: I1209 13:04:55.784901 4970 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 13:04:55 crc kubenswrapper[4970]: I1209 13:04:55.784911 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjbzj\" (UniqueName: \"kubernetes.io/projected/dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c-kube-api-access-xjbzj\") on node \"crc\" DevicePath \"\"" Dec 09 13:04:56 crc kubenswrapper[4970]: I1209 13:04:56.176497 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pxgd7" event={"ID":"dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c","Type":"ContainerDied","Data":"783ea4ae17be458360a58e8350489c95c592ff28635ce1d3e4ba713323e834a0"} Dec 09 13:04:56 crc kubenswrapper[4970]: I1209 13:04:56.176540 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="783ea4ae17be458360a58e8350489c95c592ff28635ce1d3e4ba713323e834a0" Dec 09 13:04:56 crc kubenswrapper[4970]: I1209 13:04:56.176572 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-pxgd7" Dec 09 13:04:59 crc kubenswrapper[4970]: E1209 13:04:59.815266 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:05:01 crc kubenswrapper[4970]: E1209 13:05:01.816924 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:05:11 crc kubenswrapper[4970]: E1209 13:05:11.815334 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:05:14 crc kubenswrapper[4970]: E1209 13:05:14.816222 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:05:25 crc kubenswrapper[4970]: E1209 13:05:25.815262 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:05:26 crc kubenswrapper[4970]: E1209 13:05:26.815801 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:05:37 crc kubenswrapper[4970]: E1209 13:05:37.833466 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:05:37 crc kubenswrapper[4970]: E1209 13:05:37.833466 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:05:39 crc kubenswrapper[4970]: I1209 13:05:39.175103 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lrjrp"] Dec 09 13:05:39 crc kubenswrapper[4970]: E1209 13:05:39.176048 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="585aad77-6aa5-456f-922a-4d8efc113784" containerName="extract-content" Dec 09 13:05:39 crc kubenswrapper[4970]: I1209 13:05:39.176075 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="585aad77-6aa5-456f-922a-4d8efc113784" containerName="extract-content" Dec 09 13:05:39 crc kubenswrapper[4970]: E1209 13:05:39.176143 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="585aad77-6aa5-456f-922a-4d8efc113784" containerName="extract-utilities" Dec 09 13:05:39 crc kubenswrapper[4970]: I1209 13:05:39.176155 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="585aad77-6aa5-456f-922a-4d8efc113784" containerName="extract-utilities" Dec 09 13:05:39 crc kubenswrapper[4970]: E1209 13:05:39.176173 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="585aad77-6aa5-456f-922a-4d8efc113784" containerName="registry-server" Dec 09 13:05:39 crc kubenswrapper[4970]: I1209 13:05:39.176185 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="585aad77-6aa5-456f-922a-4d8efc113784" containerName="registry-server" Dec 09 13:05:39 crc kubenswrapper[4970]: E1209 13:05:39.176258 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 13:05:39 crc kubenswrapper[4970]: I1209 13:05:39.176272 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 13:05:39 crc kubenswrapper[4970]: I1209 13:05:39.176624 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="585aad77-6aa5-456f-922a-4d8efc113784" containerName="registry-server" Dec 09 13:05:39 crc kubenswrapper[4970]: I1209 13:05:39.176660 4970 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 13:05:39 crc kubenswrapper[4970]: I1209 13:05:39.178977 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lrjrp" Dec 09 13:05:39 crc kubenswrapper[4970]: I1209 13:05:39.188077 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lrjrp"] Dec 09 13:05:39 crc kubenswrapper[4970]: I1209 13:05:39.756751 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/628a3321-bbd7-4795-91cd-dab77abafc6f-utilities\") pod \"community-operators-lrjrp\" (UID: \"628a3321-bbd7-4795-91cd-dab77abafc6f\") " pod="openshift-marketplace/community-operators-lrjrp" Dec 09 13:05:39 crc kubenswrapper[4970]: I1209 13:05:39.757390 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/628a3321-bbd7-4795-91cd-dab77abafc6f-catalog-content\") pod \"community-operators-lrjrp\" (UID: \"628a3321-bbd7-4795-91cd-dab77abafc6f\") " pod="openshift-marketplace/community-operators-lrjrp" Dec 09 13:05:39 crc kubenswrapper[4970]: I1209 13:05:39.757528 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk9pc\" (UniqueName: \"kubernetes.io/projected/628a3321-bbd7-4795-91cd-dab77abafc6f-kube-api-access-hk9pc\") pod \"community-operators-lrjrp\" (UID: \"628a3321-bbd7-4795-91cd-dab77abafc6f\") " pod="openshift-marketplace/community-operators-lrjrp" Dec 09 13:05:39 crc kubenswrapper[4970]: I1209 13:05:39.859062 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/628a3321-bbd7-4795-91cd-dab77abafc6f-catalog-content\") pod \"community-operators-lrjrp\" (UID: \"628a3321-bbd7-4795-91cd-dab77abafc6f\") " pod="openshift-marketplace/community-operators-lrjrp" Dec 09 13:05:39 crc kubenswrapper[4970]: I1209 13:05:39.859178 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk9pc\" (UniqueName: \"kubernetes.io/projected/628a3321-bbd7-4795-91cd-dab77abafc6f-kube-api-access-hk9pc\") pod \"community-operators-lrjrp\" (UID: \"628a3321-bbd7-4795-91cd-dab77abafc6f\") " pod="openshift-marketplace/community-operators-lrjrp" Dec 09 13:05:39 crc kubenswrapper[4970]: I1209 13:05:39.859618 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/628a3321-bbd7-4795-91cd-dab77abafc6f-catalog-content\") pod \"community-operators-lrjrp\" (UID: \"628a3321-bbd7-4795-91cd-dab77abafc6f\") " pod="openshift-marketplace/community-operators-lrjrp" Dec 09 13:05:39 crc kubenswrapper[4970]: I1209 13:05:39.860136 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/628a3321-bbd7-4795-91cd-dab77abafc6f-utilities\") pod \"community-operators-lrjrp\" (UID: \"628a3321-bbd7-4795-91cd-dab77abafc6f\") " pod="openshift-marketplace/community-operators-lrjrp" Dec 09 13:05:39 crc kubenswrapper[4970]: I1209 13:05:39.860862 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/628a3321-bbd7-4795-91cd-dab77abafc6f-utilities\") pod 
\"community-operators-lrjrp\" (UID: \"628a3321-bbd7-4795-91cd-dab77abafc6f\") " pod="openshift-marketplace/community-operators-lrjrp" Dec 09 13:05:39 crc kubenswrapper[4970]: I1209 13:05:39.883044 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk9pc\" (UniqueName: \"kubernetes.io/projected/628a3321-bbd7-4795-91cd-dab77abafc6f-kube-api-access-hk9pc\") pod \"community-operators-lrjrp\" (UID: \"628a3321-bbd7-4795-91cd-dab77abafc6f\") " pod="openshift-marketplace/community-operators-lrjrp" Dec 09 13:05:40 crc kubenswrapper[4970]: I1209 13:05:40.103711 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lrjrp" Dec 09 13:05:40 crc kubenswrapper[4970]: I1209 13:05:40.692556 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lrjrp"] Dec 09 13:05:40 crc kubenswrapper[4970]: I1209 13:05:40.858958 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lrjrp" event={"ID":"628a3321-bbd7-4795-91cd-dab77abafc6f","Type":"ContainerStarted","Data":"8659a63747179282a95622a67a7f2ce896721962a2f996876f41b1b3b7985881"} Dec 09 13:05:41 crc kubenswrapper[4970]: I1209 13:05:41.872904 4970 generic.go:334] "Generic (PLEG): container finished" podID="628a3321-bbd7-4795-91cd-dab77abafc6f" containerID="802ef965f40c60cb72a098a4e8b99a793c5057ca2521f92726bb9377992513af" exitCode=0 Dec 09 13:05:41 crc kubenswrapper[4970]: I1209 13:05:41.872991 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lrjrp" event={"ID":"628a3321-bbd7-4795-91cd-dab77abafc6f","Type":"ContainerDied","Data":"802ef965f40c60cb72a098a4e8b99a793c5057ca2521f92726bb9377992513af"} Dec 09 13:05:44 crc kubenswrapper[4970]: I1209 13:05:44.912889 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lrjrp" event={"ID":"628a3321-bbd7-4795-91cd-dab77abafc6f","Type":"ContainerStarted","Data":"d540838dd11e19d37303a0ee2ce626990c4247bce2ada10649e0348236caac29"} Dec 09 13:05:45 crc kubenswrapper[4970]: I1209 13:05:45.935538 4970 generic.go:334] "Generic (PLEG): container finished" podID="628a3321-bbd7-4795-91cd-dab77abafc6f" containerID="d540838dd11e19d37303a0ee2ce626990c4247bce2ada10649e0348236caac29" exitCode=0 Dec 09 13:05:45 crc kubenswrapper[4970]: I1209 13:05:45.935688 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lrjrp" event={"ID":"628a3321-bbd7-4795-91cd-dab77abafc6f","Type":"ContainerDied","Data":"d540838dd11e19d37303a0ee2ce626990c4247bce2ada10649e0348236caac29"} Dec 09 13:05:46 crc kubenswrapper[4970]: I1209 13:05:46.953991 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lrjrp" event={"ID":"628a3321-bbd7-4795-91cd-dab77abafc6f","Type":"ContainerStarted","Data":"10e1d04c4b1aac117144b3eb501607698b23cde340256da59be11c0d0270f83b"} Dec 09 13:05:46 crc kubenswrapper[4970]: I1209 13:05:46.999185 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lrjrp" podStartSLOduration=3.521753195 podStartE2EDuration="7.99915481s" podCreationTimestamp="2025-12-09 13:05:39 +0000 UTC" firstStartedPulling="2025-12-09 13:05:41.874919891 +0000 UTC m=+3554.435400932" lastFinishedPulling="2025-12-09 13:05:46.352321486 +0000 UTC m=+3558.912802547" observedRunningTime="2025-12-09 13:05:46.980133746 +0000 
UTC m=+3559.540614837" watchObservedRunningTime="2025-12-09 13:05:46.99915481 +0000 UTC m=+3559.559635901" Dec 09 13:05:50 crc kubenswrapper[4970]: I1209 13:05:50.104708 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lrjrp" Dec 09 13:05:50 crc kubenswrapper[4970]: I1209 13:05:50.105204 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lrjrp" Dec 09 13:05:50 crc kubenswrapper[4970]: I1209 13:05:50.175238 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lrjrp" Dec 09 13:05:51 crc kubenswrapper[4970]: I1209 13:05:51.071328 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lrjrp" Dec 09 13:05:51 crc kubenswrapper[4970]: I1209 13:05:51.135804 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lrjrp"] Dec 09 13:05:51 crc kubenswrapper[4970]: E1209 13:05:51.818022 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:05:52 crc kubenswrapper[4970]: E1209 13:05:52.814998 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:05:53 crc kubenswrapper[4970]: I1209 13:05:53.030864 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lrjrp" podUID="628a3321-bbd7-4795-91cd-dab77abafc6f" containerName="registry-server" containerID="cri-o://10e1d04c4b1aac117144b3eb501607698b23cde340256da59be11c0d0270f83b" gracePeriod=2 Dec 09 13:05:54 crc kubenswrapper[4970]: I1209 13:05:54.046985 4970 generic.go:334] "Generic (PLEG): container finished" podID="628a3321-bbd7-4795-91cd-dab77abafc6f" containerID="10e1d04c4b1aac117144b3eb501607698b23cde340256da59be11c0d0270f83b" exitCode=0 Dec 09 13:05:54 crc kubenswrapper[4970]: I1209 13:05:54.047296 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lrjrp" event={"ID":"628a3321-bbd7-4795-91cd-dab77abafc6f","Type":"ContainerDied","Data":"10e1d04c4b1aac117144b3eb501607698b23cde340256da59be11c0d0270f83b"} Dec 09 13:05:54 crc kubenswrapper[4970]: I1209 13:05:54.047324 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lrjrp" event={"ID":"628a3321-bbd7-4795-91cd-dab77abafc6f","Type":"ContainerDied","Data":"8659a63747179282a95622a67a7f2ce896721962a2f996876f41b1b3b7985881"} Dec 09 13:05:54 crc kubenswrapper[4970]: I1209 13:05:54.047353 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8659a63747179282a95622a67a7f2ce896721962a2f996876f41b1b3b7985881" Dec 09 13:05:54 crc kubenswrapper[4970]: I1209 13:05:54.116578 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lrjrp" Dec 09 13:05:54 crc kubenswrapper[4970]: I1209 13:05:54.221795 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hk9pc\" (UniqueName: \"kubernetes.io/projected/628a3321-bbd7-4795-91cd-dab77abafc6f-kube-api-access-hk9pc\") pod \"628a3321-bbd7-4795-91cd-dab77abafc6f\" (UID: \"628a3321-bbd7-4795-91cd-dab77abafc6f\") " Dec 09 13:05:54 crc kubenswrapper[4970]: I1209 13:05:54.222312 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/628a3321-bbd7-4795-91cd-dab77abafc6f-catalog-content\") pod \"628a3321-bbd7-4795-91cd-dab77abafc6f\" (UID: \"628a3321-bbd7-4795-91cd-dab77abafc6f\") " Dec 09 13:05:54 crc kubenswrapper[4970]: I1209 13:05:54.222409 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/628a3321-bbd7-4795-91cd-dab77abafc6f-utilities\") pod \"628a3321-bbd7-4795-91cd-dab77abafc6f\" (UID: \"628a3321-bbd7-4795-91cd-dab77abafc6f\") " Dec 09 13:05:54 crc kubenswrapper[4970]: I1209 13:05:54.223061 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/628a3321-bbd7-4795-91cd-dab77abafc6f-utilities" (OuterVolumeSpecName: "utilities") pod "628a3321-bbd7-4795-91cd-dab77abafc6f" (UID: "628a3321-bbd7-4795-91cd-dab77abafc6f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:05:54 crc kubenswrapper[4970]: I1209 13:05:54.223183 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/628a3321-bbd7-4795-91cd-dab77abafc6f-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 13:05:54 crc kubenswrapper[4970]: I1209 13:05:54.227524 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/628a3321-bbd7-4795-91cd-dab77abafc6f-kube-api-access-hk9pc" (OuterVolumeSpecName: "kube-api-access-hk9pc") pod "628a3321-bbd7-4795-91cd-dab77abafc6f" (UID: "628a3321-bbd7-4795-91cd-dab77abafc6f"). InnerVolumeSpecName "kube-api-access-hk9pc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:05:54 crc kubenswrapper[4970]: I1209 13:05:54.286332 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/628a3321-bbd7-4795-91cd-dab77abafc6f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "628a3321-bbd7-4795-91cd-dab77abafc6f" (UID: "628a3321-bbd7-4795-91cd-dab77abafc6f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:05:54 crc kubenswrapper[4970]: I1209 13:05:54.325616 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hk9pc\" (UniqueName: \"kubernetes.io/projected/628a3321-bbd7-4795-91cd-dab77abafc6f-kube-api-access-hk9pc\") on node \"crc\" DevicePath \"\"" Dec 09 13:05:54 crc kubenswrapper[4970]: I1209 13:05:54.325647 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/628a3321-bbd7-4795-91cd-dab77abafc6f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 13:05:55 crc kubenswrapper[4970]: I1209 13:05:55.056798 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lrjrp" Dec 09 13:05:55 crc kubenswrapper[4970]: I1209 13:05:55.110935 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lrjrp"] Dec 09 13:05:55 crc kubenswrapper[4970]: I1209 13:05:55.124215 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lrjrp"] Dec 09 13:05:55 crc kubenswrapper[4970]: I1209 13:05:55.828185 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="628a3321-bbd7-4795-91cd-dab77abafc6f" path="/var/lib/kubelet/pods/628a3321-bbd7-4795-91cd-dab77abafc6f/volumes" Dec 09 13:06:05 crc kubenswrapper[4970]: E1209 13:06:05.816768 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:06:05 crc kubenswrapper[4970]: E1209 13:06:05.822723 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:06:13 crc kubenswrapper[4970]: I1209 13:06:13.047197 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nrdbk"] Dec 09 13:06:13 crc kubenswrapper[4970]: E1209 13:06:13.048368 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="628a3321-bbd7-4795-91cd-dab77abafc6f" containerName="registry-server" Dec 09 13:06:13 crc kubenswrapper[4970]: I1209 13:06:13.048383 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="628a3321-bbd7-4795-91cd-dab77abafc6f" containerName="registry-server" Dec 09 13:06:13 crc kubenswrapper[4970]: E1209 13:06:13.048415 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="628a3321-bbd7-4795-91cd-dab77abafc6f" containerName="extract-utilities" Dec 09 13:06:13 crc kubenswrapper[4970]: I1209 13:06:13.048421 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="628a3321-bbd7-4795-91cd-dab77abafc6f" containerName="extract-utilities" Dec 09 13:06:13 crc kubenswrapper[4970]: E1209 13:06:13.048428 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="628a3321-bbd7-4795-91cd-dab77abafc6f" containerName="extract-content" Dec 09 13:06:13 crc kubenswrapper[4970]: I1209 13:06:13.048433 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="628a3321-bbd7-4795-91cd-dab77abafc6f" containerName="extract-content" Dec 09 13:06:13 crc kubenswrapper[4970]: I1209 13:06:13.048658 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="628a3321-bbd7-4795-91cd-dab77abafc6f" containerName="registry-server" Dec 09 13:06:13 crc kubenswrapper[4970]: I1209 13:06:13.049588 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nrdbk" Dec 09 13:06:13 crc kubenswrapper[4970]: I1209 13:06:13.053362 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 13:06:13 crc kubenswrapper[4970]: I1209 13:06:13.053779 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 13:06:13 crc kubenswrapper[4970]: I1209 13:06:13.054021 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 13:06:13 crc kubenswrapper[4970]: I1209 13:06:13.054350 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2x2z5" Dec 09 13:06:13 crc kubenswrapper[4970]: I1209 13:06:13.066438 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nrdbk"] Dec 09 13:06:13 crc kubenswrapper[4970]: I1209 13:06:13.172182 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/711dabf9-95cd-423b-ad4b-2273f430d8f2-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nrdbk\" (UID: \"711dabf9-95cd-423b-ad4b-2273f430d8f2\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nrdbk" Dec 09 13:06:13 crc kubenswrapper[4970]: I1209 13:06:13.172266 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/711dabf9-95cd-423b-ad4b-2273f430d8f2-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nrdbk\" (UID: \"711dabf9-95cd-423b-ad4b-2273f430d8f2\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nrdbk" Dec 09 13:06:13 crc kubenswrapper[4970]: I1209 13:06:13.172963 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj789\" (UniqueName: \"kubernetes.io/projected/711dabf9-95cd-423b-ad4b-2273f430d8f2-kube-api-access-lj789\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nrdbk\" (UID: \"711dabf9-95cd-423b-ad4b-2273f430d8f2\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nrdbk" Dec 09 13:06:13 crc kubenswrapper[4970]: I1209 13:06:13.275535 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/711dabf9-95cd-423b-ad4b-2273f430d8f2-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nrdbk\" (UID: \"711dabf9-95cd-423b-ad4b-2273f430d8f2\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nrdbk" Dec 09 13:06:13 crc kubenswrapper[4970]: I1209 13:06:13.275607 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/711dabf9-95cd-423b-ad4b-2273f430d8f2-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nrdbk\" (UID: \"711dabf9-95cd-423b-ad4b-2273f430d8f2\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nrdbk" Dec 09 13:06:13 crc kubenswrapper[4970]: I1209 13:06:13.275647 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj789\" (UniqueName: \"kubernetes.io/projected/711dabf9-95cd-423b-ad4b-2273f430d8f2-kube-api-access-lj789\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-nrdbk\" (UID: \"711dabf9-95cd-423b-ad4b-2273f430d8f2\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nrdbk" Dec 09 13:06:13 crc kubenswrapper[4970]: I1209 13:06:13.282932 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/711dabf9-95cd-423b-ad4b-2273f430d8f2-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nrdbk\" (UID: \"711dabf9-95cd-423b-ad4b-2273f430d8f2\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nrdbk" Dec 09 13:06:13 crc kubenswrapper[4970]: I1209 13:06:13.283930 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/711dabf9-95cd-423b-ad4b-2273f430d8f2-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nrdbk\" (UID: \"711dabf9-95cd-423b-ad4b-2273f430d8f2\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nrdbk" Dec 09 13:06:13 crc kubenswrapper[4970]: I1209 13:06:13.309580 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj789\" (UniqueName: \"kubernetes.io/projected/711dabf9-95cd-423b-ad4b-2273f430d8f2-kube-api-access-lj789\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nrdbk\" (UID: \"711dabf9-95cd-423b-ad4b-2273f430d8f2\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nrdbk" Dec 09 13:06:13 crc kubenswrapper[4970]: I1209 13:06:13.377131 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nrdbk" Dec 09 13:06:14 crc kubenswrapper[4970]: I1209 13:06:14.017566 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nrdbk"] Dec 09 13:06:14 crc kubenswrapper[4970]: I1209 13:06:14.298874 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nrdbk" event={"ID":"711dabf9-95cd-423b-ad4b-2273f430d8f2","Type":"ContainerStarted","Data":"3607a72ff925d68d60f70b0b27e5c582996782af63d7a50cf27746b26bf930a8"} Dec 09 13:06:15 crc kubenswrapper[4970]: I1209 13:06:15.310998 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nrdbk" event={"ID":"711dabf9-95cd-423b-ad4b-2273f430d8f2","Type":"ContainerStarted","Data":"f3f8b83a3958b133264d0d38a9739482c822e451943db4366481ef10e40c5649"} Dec 09 13:06:15 crc kubenswrapper[4970]: I1209 13:06:15.339268 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nrdbk" podStartSLOduration=1.8810873510000001 podStartE2EDuration="2.339219022s" podCreationTimestamp="2025-12-09 13:06:13 +0000 UTC" firstStartedPulling="2025-12-09 13:06:14.021089822 +0000 UTC m=+3586.581570873" lastFinishedPulling="2025-12-09 13:06:14.479221483 +0000 UTC m=+3587.039702544" observedRunningTime="2025-12-09 13:06:15.333173998 +0000 UTC m=+3587.893655059" watchObservedRunningTime="2025-12-09 13:06:15.339219022 +0000 UTC m=+3587.899700103" Dec 09 13:06:16 crc kubenswrapper[4970]: I1209 13:06:16.010614 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Dec 09 13:06:16 crc kubenswrapper[4970]: I1209 13:06:16.010701 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 13:06:16 crc kubenswrapper[4970]: E1209 13:06:16.818739 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:06:18 crc kubenswrapper[4970]: E1209 13:06:18.815278 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:06:29 crc kubenswrapper[4970]: E1209 13:06:29.819077 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:06:31 crc kubenswrapper[4970]: E1209 13:06:31.814870 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:06:43 crc kubenswrapper[4970]: E1209 13:06:43.819226 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:06:43 crc kubenswrapper[4970]: E1209 13:06:43.819790 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:06:46 crc kubenswrapper[4970]: I1209 13:06:46.011300 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 13:06:46 crc kubenswrapper[4970]: I1209 13:06:46.011878 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Dec 09 13:06:56 crc kubenswrapper[4970]: E1209 13:06:56.815076 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:06:58 crc kubenswrapper[4970]: E1209 13:06:58.815697 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:07:10 crc kubenswrapper[4970]: E1209 13:07:10.815764 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:07:11 crc kubenswrapper[4970]: E1209 13:07:11.815372 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:07:16 crc kubenswrapper[4970]: I1209 13:07:16.010984 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 13:07:16 crc kubenswrapper[4970]: I1209 13:07:16.011558 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 13:07:16 crc kubenswrapper[4970]: I1209 13:07:16.011610 4970 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" Dec 09 13:07:16 crc kubenswrapper[4970]: I1209 13:07:16.012798 4970 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b52590bfcf83aef355cce3d9df54a920b86a238618ae472d4c9d5510ccd01139"} pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 13:07:16 crc kubenswrapper[4970]: I1209 13:07:16.012881 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" containerID="cri-o://b52590bfcf83aef355cce3d9df54a920b86a238618ae472d4c9d5510ccd01139" gracePeriod=600 Dec 09 13:07:16 crc kubenswrapper[4970]: E1209 13:07:16.137546 4970 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:07:16 crc kubenswrapper[4970]: I1209 13:07:16.174615 4970 generic.go:334] "Generic (PLEG): container finished" podID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerID="b52590bfcf83aef355cce3d9df54a920b86a238618ae472d4c9d5510ccd01139" exitCode=0 Dec 09 13:07:16 crc kubenswrapper[4970]: I1209 13:07:16.174665 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerDied","Data":"b52590bfcf83aef355cce3d9df54a920b86a238618ae472d4c9d5510ccd01139"} Dec 09 13:07:16 crc kubenswrapper[4970]: I1209 13:07:16.174703 4970 scope.go:117] "RemoveContainer" containerID="d708e8d693e60cdc3ddfc0e7ba17ffc03a57e486fe1510c7c0c539f931a60d30" Dec 09 13:07:16 crc kubenswrapper[4970]: I1209 13:07:16.175985 4970 scope.go:117] "RemoveContainer" containerID="b52590bfcf83aef355cce3d9df54a920b86a238618ae472d4c9d5510ccd01139" Dec 09 13:07:16 crc kubenswrapper[4970]: E1209 13:07:16.176714 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:07:22 crc kubenswrapper[4970]: E1209 13:07:22.817706 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:07:25 crc kubenswrapper[4970]: E1209 13:07:25.816200 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:07:27 crc kubenswrapper[4970]: I1209 13:07:27.813983 4970 scope.go:117] "RemoveContainer" containerID="b52590bfcf83aef355cce3d9df54a920b86a238618ae472d4c9d5510ccd01139" Dec 09 13:07:27 crc kubenswrapper[4970]: E1209 13:07:27.814997 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:07:36 crc kubenswrapper[4970]: E1209 13:07:36.817470 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:07:38 crc kubenswrapper[4970]: E1209 13:07:38.818979 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:07:40 crc kubenswrapper[4970]: I1209 13:07:40.813977 4970 scope.go:117] "RemoveContainer" containerID="b52590bfcf83aef355cce3d9df54a920b86a238618ae472d4c9d5510ccd01139" Dec 09 13:07:40 crc kubenswrapper[4970]: E1209 13:07:40.815615 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:07:48 crc kubenswrapper[4970]: E1209 13:07:48.816667 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:07:49 crc kubenswrapper[4970]: E1209 13:07:49.816224 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:07:55 crc kubenswrapper[4970]: I1209 13:07:55.815727 4970 scope.go:117] "RemoveContainer" containerID="b52590bfcf83aef355cce3d9df54a920b86a238618ae472d4c9d5510ccd01139" Dec 09 13:07:55 crc kubenswrapper[4970]: E1209 13:07:55.816889 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:07:59 crc kubenswrapper[4970]: E1209 13:07:59.815755 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:08:04 crc kubenswrapper[4970]: E1209 13:08:04.816374 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:08:10 
crc kubenswrapper[4970]: I1209 13:08:10.813820 4970 scope.go:117] "RemoveContainer" containerID="b52590bfcf83aef355cce3d9df54a920b86a238618ae472d4c9d5510ccd01139" Dec 09 13:08:10 crc kubenswrapper[4970]: E1209 13:08:10.815001 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:08:14 crc kubenswrapper[4970]: E1209 13:08:14.816516 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:08:18 crc kubenswrapper[4970]: E1209 13:08:18.814879 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:08:24 crc kubenswrapper[4970]: I1209 13:08:24.813206 4970 scope.go:117] "RemoveContainer" containerID="b52590bfcf83aef355cce3d9df54a920b86a238618ae472d4c9d5510ccd01139" Dec 09 13:08:24 crc kubenswrapper[4970]: E1209 13:08:24.814295 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:08:26 crc kubenswrapper[4970]: E1209 13:08:26.814323 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:08:32 crc kubenswrapper[4970]: E1209 13:08:32.815709 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:08:37 crc kubenswrapper[4970]: I1209 13:08:37.821837 4970 scope.go:117] "RemoveContainer" containerID="b52590bfcf83aef355cce3d9df54a920b86a238618ae472d4c9d5510ccd01139" Dec 09 13:08:37 crc kubenswrapper[4970]: E1209 13:08:37.822739 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:08:39 crc kubenswrapper[4970]: I1209 13:08:39.815895 4970 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 13:08:39 crc kubenswrapper[4970]: E1209 13:08:39.980901 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 13:08:39 crc kubenswrapper[4970]: E1209 13:08:39.981199 4970 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 13:08:39 crc kubenswrapper[4970]: E1209 13:08:39.981321 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w5gl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
heat-db-sync-snjvx_openstack(cda5b8c0-51ad-4a63-bf1d-e3546f098ad3): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 13:08:39 crc kubenswrapper[4970]: E1209 13:08:39.982391 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:08:47 crc kubenswrapper[4970]: E1209 13:08:47.829358 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:08:51 crc kubenswrapper[4970]: I1209 13:08:51.814219 4970 scope.go:117] "RemoveContainer" containerID="b52590bfcf83aef355cce3d9df54a920b86a238618ae472d4c9d5510ccd01139" Dec 09 13:08:51 crc kubenswrapper[4970]: E1209 13:08:51.815721 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:08:52 crc kubenswrapper[4970]: E1209 13:08:52.821008 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:08:58 crc kubenswrapper[4970]: E1209 13:08:58.820178 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:09:04 crc kubenswrapper[4970]: I1209 13:09:04.813293 4970 scope.go:117] "RemoveContainer" containerID="b52590bfcf83aef355cce3d9df54a920b86a238618ae472d4c9d5510ccd01139" Dec 09 13:09:04 crc kubenswrapper[4970]: E1209 13:09:04.814275 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" 
Dec 09 13:09:05 crc kubenswrapper[4970]: E1209 13:09:05.818586 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:09:11 crc kubenswrapper[4970]: E1209 13:09:11.941736 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 13:09:11 crc kubenswrapper[4970]: E1209 13:09:11.942145 4970 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 13:09:11 crc kubenswrapper[4970]: E1209 13:09:11.942304 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n679h5b7h5f8h695h656h57bh5c7h7bh5c5hc4h5hbch654h5h5b7h54dh5fh688h548h57bh6fh54fh5d4hf7h5ffh54ch566h565hdchb4h65dh5fbq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqqs9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ea52f6b9-599e-4ac5-94c6-79949c705be8): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 13:09:11 crc kubenswrapper[4970]: E1209 13:09:11.943831 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:09:13 crc kubenswrapper[4970]: I1209 13:09:13.464534 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wt2js"] Dec 09 13:09:13 crc kubenswrapper[4970]: I1209 13:09:13.469797 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wt2js" Dec 09 13:09:13 crc kubenswrapper[4970]: I1209 13:09:13.488773 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wt2js"] Dec 09 13:09:13 crc kubenswrapper[4970]: I1209 13:09:13.565183 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zm4z\" (UniqueName: \"kubernetes.io/projected/69343a58-2578-40b1-af0f-55a5b4cb4049-kube-api-access-2zm4z\") pod \"redhat-marketplace-wt2js\" (UID: \"69343a58-2578-40b1-af0f-55a5b4cb4049\") " pod="openshift-marketplace/redhat-marketplace-wt2js" Dec 09 13:09:13 crc kubenswrapper[4970]: I1209 13:09:13.565261 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69343a58-2578-40b1-af0f-55a5b4cb4049-catalog-content\") pod \"redhat-marketplace-wt2js\" (UID: \"69343a58-2578-40b1-af0f-55a5b4cb4049\") " pod="openshift-marketplace/redhat-marketplace-wt2js" Dec 09 13:09:13 crc kubenswrapper[4970]: I1209 13:09:13.565454 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69343a58-2578-40b1-af0f-55a5b4cb4049-utilities\") pod \"redhat-marketplace-wt2js\" (UID: \"69343a58-2578-40b1-af0f-55a5b4cb4049\") " pod="openshift-marketplace/redhat-marketplace-wt2js" Dec 09 13:09:13 crc kubenswrapper[4970]: I1209 13:09:13.667942 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zm4z\" (UniqueName: \"kubernetes.io/projected/69343a58-2578-40b1-af0f-55a5b4cb4049-kube-api-access-2zm4z\") pod \"redhat-marketplace-wt2js\" (UID: \"69343a58-2578-40b1-af0f-55a5b4cb4049\") " pod="openshift-marketplace/redhat-marketplace-wt2js" Dec 09 13:09:13 crc kubenswrapper[4970]: I1209 13:09:13.668046 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69343a58-2578-40b1-af0f-55a5b4cb4049-catalog-content\") pod \"redhat-marketplace-wt2js\" (UID: \"69343a58-2578-40b1-af0f-55a5b4cb4049\") " pod="openshift-marketplace/redhat-marketplace-wt2js" Dec 09 13:09:13 crc kubenswrapper[4970]: I1209 13:09:13.668115 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69343a58-2578-40b1-af0f-55a5b4cb4049-utilities\") pod \"redhat-marketplace-wt2js\" (UID: \"69343a58-2578-40b1-af0f-55a5b4cb4049\") " pod="openshift-marketplace/redhat-marketplace-wt2js" Dec 09 13:09:13 crc kubenswrapper[4970]: I1209 13:09:13.668614 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69343a58-2578-40b1-af0f-55a5b4cb4049-catalog-content\") pod \"redhat-marketplace-wt2js\" (UID: \"69343a58-2578-40b1-af0f-55a5b4cb4049\") " pod="openshift-marketplace/redhat-marketplace-wt2js" Dec 09 13:09:13 crc kubenswrapper[4970]: I1209 13:09:13.668747 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69343a58-2578-40b1-af0f-55a5b4cb4049-utilities\") pod \"redhat-marketplace-wt2js\" (UID: \"69343a58-2578-40b1-af0f-55a5b4cb4049\") " pod="openshift-marketplace/redhat-marketplace-wt2js" Dec 09 13:09:13 crc kubenswrapper[4970]: I1209 13:09:13.711145 4970 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-2zm4z\" (UniqueName: \"kubernetes.io/projected/69343a58-2578-40b1-af0f-55a5b4cb4049-kube-api-access-2zm4z\") pod \"redhat-marketplace-wt2js\" (UID: \"69343a58-2578-40b1-af0f-55a5b4cb4049\") " pod="openshift-marketplace/redhat-marketplace-wt2js" Dec 09 13:09:13 crc kubenswrapper[4970]: I1209 13:09:13.845610 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wt2js" Dec 09 13:09:14 crc kubenswrapper[4970]: I1209 13:09:14.416759 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wt2js"] Dec 09 13:09:14 crc kubenswrapper[4970]: I1209 13:09:14.661688 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wt2js" event={"ID":"69343a58-2578-40b1-af0f-55a5b4cb4049","Type":"ContainerStarted","Data":"1a8b97b50bf90ad7dbbbeac37988a96fa995c930ca0aedef8f5cbe11c37d2aaf"} Dec 09 13:09:14 crc kubenswrapper[4970]: I1209 13:09:14.661737 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wt2js" event={"ID":"69343a58-2578-40b1-af0f-55a5b4cb4049","Type":"ContainerStarted","Data":"1435ed9e8834f7fcd4b65da679a8b271a832ddfa8e8d8edc33950782b61118f8"} Dec 09 13:09:15 crc kubenswrapper[4970]: I1209 13:09:15.680850 4970 generic.go:334] "Generic (PLEG): container finished" podID="69343a58-2578-40b1-af0f-55a5b4cb4049" containerID="1a8b97b50bf90ad7dbbbeac37988a96fa995c930ca0aedef8f5cbe11c37d2aaf" exitCode=0 Dec 09 13:09:15 crc kubenswrapper[4970]: I1209 13:09:15.680918 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wt2js" event={"ID":"69343a58-2578-40b1-af0f-55a5b4cb4049","Type":"ContainerDied","Data":"1a8b97b50bf90ad7dbbbeac37988a96fa995c930ca0aedef8f5cbe11c37d2aaf"} Dec 09 13:09:17 crc kubenswrapper[4970]: I1209 13:09:17.712858 4970 generic.go:334] "Generic (PLEG): container finished" podID="69343a58-2578-40b1-af0f-55a5b4cb4049" containerID="ff858b73160c474f2173103722ee607362b3e4245bcafaffa3ade5216f121054" exitCode=0 Dec 09 13:09:17 crc kubenswrapper[4970]: I1209 13:09:17.712929 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wt2js" event={"ID":"69343a58-2578-40b1-af0f-55a5b4cb4049","Type":"ContainerDied","Data":"ff858b73160c474f2173103722ee607362b3e4245bcafaffa3ade5216f121054"} Dec 09 13:09:17 crc kubenswrapper[4970]: I1209 13:09:17.826771 4970 scope.go:117] "RemoveContainer" containerID="b52590bfcf83aef355cce3d9df54a920b86a238618ae472d4c9d5510ccd01139" Dec 09 13:09:17 crc kubenswrapper[4970]: E1209 13:09:17.827323 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:09:18 crc kubenswrapper[4970]: I1209 13:09:18.726806 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wt2js" event={"ID":"69343a58-2578-40b1-af0f-55a5b4cb4049","Type":"ContainerStarted","Data":"513c9c3d0f07271e225159fe7ff1c8dfc5a8d1abae811769d3ca4d1a620bc175"} Dec 09 13:09:18 crc kubenswrapper[4970]: I1209 13:09:18.743705 4970 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/redhat-marketplace-wt2js" podStartSLOduration=3.300814663 podStartE2EDuration="5.743678552s" podCreationTimestamp="2025-12-09 13:09:13 +0000 UTC" firstStartedPulling="2025-12-09 13:09:15.688238235 +0000 UTC m=+3768.248719316" lastFinishedPulling="2025-12-09 13:09:18.131102134 +0000 UTC m=+3770.691583205" observedRunningTime="2025-12-09 13:09:18.742724706 +0000 UTC m=+3771.303205767" watchObservedRunningTime="2025-12-09 13:09:18.743678552 +0000 UTC m=+3771.304159613" Dec 09 13:09:20 crc kubenswrapper[4970]: E1209 13:09:20.813844 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:09:23 crc kubenswrapper[4970]: I1209 13:09:23.847126 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wt2js" Dec 09 13:09:23 crc kubenswrapper[4970]: I1209 13:09:23.847751 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wt2js" Dec 09 13:09:23 crc kubenswrapper[4970]: I1209 13:09:23.948183 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wt2js" Dec 09 13:09:24 crc kubenswrapper[4970]: E1209 13:09:24.821434 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:09:24 crc kubenswrapper[4970]: I1209 13:09:24.882508 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wt2js" Dec 09 13:09:24 crc kubenswrapper[4970]: I1209 13:09:24.938184 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wt2js"] Dec 09 13:09:26 crc kubenswrapper[4970]: I1209 13:09:26.816783 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wt2js" podUID="69343a58-2578-40b1-af0f-55a5b4cb4049" containerName="registry-server" containerID="cri-o://513c9c3d0f07271e225159fe7ff1c8dfc5a8d1abae811769d3ca4d1a620bc175" gracePeriod=2 Dec 09 13:09:27 crc kubenswrapper[4970]: E1209 13:09:27.039755 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69343a58_2578_40b1_af0f_55a5b4cb4049.slice/crio-conmon-513c9c3d0f07271e225159fe7ff1c8dfc5a8d1abae811769d3ca4d1a620bc175.scope\": RecentStats: unable to find data in memory cache]" Dec 09 13:09:27 crc kubenswrapper[4970]: I1209 13:09:27.356848 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wt2js" Dec 09 13:09:27 crc kubenswrapper[4970]: I1209 13:09:27.495260 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zm4z\" (UniqueName: \"kubernetes.io/projected/69343a58-2578-40b1-af0f-55a5b4cb4049-kube-api-access-2zm4z\") pod \"69343a58-2578-40b1-af0f-55a5b4cb4049\" (UID: \"69343a58-2578-40b1-af0f-55a5b4cb4049\") " Dec 09 13:09:27 crc kubenswrapper[4970]: I1209 13:09:27.495358 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69343a58-2578-40b1-af0f-55a5b4cb4049-catalog-content\") pod \"69343a58-2578-40b1-af0f-55a5b4cb4049\" (UID: \"69343a58-2578-40b1-af0f-55a5b4cb4049\") " Dec 09 13:09:27 crc kubenswrapper[4970]: I1209 13:09:27.495822 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69343a58-2578-40b1-af0f-55a5b4cb4049-utilities\") pod \"69343a58-2578-40b1-af0f-55a5b4cb4049\" (UID: \"69343a58-2578-40b1-af0f-55a5b4cb4049\") " Dec 09 13:09:27 crc kubenswrapper[4970]: I1209 13:09:27.496824 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69343a58-2578-40b1-af0f-55a5b4cb4049-utilities" (OuterVolumeSpecName: "utilities") pod "69343a58-2578-40b1-af0f-55a5b4cb4049" (UID: "69343a58-2578-40b1-af0f-55a5b4cb4049"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:09:27 crc kubenswrapper[4970]: I1209 13:09:27.503475 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69343a58-2578-40b1-af0f-55a5b4cb4049-kube-api-access-2zm4z" (OuterVolumeSpecName: "kube-api-access-2zm4z") pod "69343a58-2578-40b1-af0f-55a5b4cb4049" (UID: "69343a58-2578-40b1-af0f-55a5b4cb4049"). InnerVolumeSpecName "kube-api-access-2zm4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:09:27 crc kubenswrapper[4970]: I1209 13:09:27.514904 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69343a58-2578-40b1-af0f-55a5b4cb4049-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69343a58-2578-40b1-af0f-55a5b4cb4049" (UID: "69343a58-2578-40b1-af0f-55a5b4cb4049"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:09:27 crc kubenswrapper[4970]: I1209 13:09:27.598905 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69343a58-2578-40b1-af0f-55a5b4cb4049-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 13:09:27 crc kubenswrapper[4970]: I1209 13:09:27.598937 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zm4z\" (UniqueName: \"kubernetes.io/projected/69343a58-2578-40b1-af0f-55a5b4cb4049-kube-api-access-2zm4z\") on node \"crc\" DevicePath \"\"" Dec 09 13:09:27 crc kubenswrapper[4970]: I1209 13:09:27.598946 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69343a58-2578-40b1-af0f-55a5b4cb4049-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 13:09:27 crc kubenswrapper[4970]: I1209 13:09:27.832712 4970 generic.go:334] "Generic (PLEG): container finished" podID="69343a58-2578-40b1-af0f-55a5b4cb4049" containerID="513c9c3d0f07271e225159fe7ff1c8dfc5a8d1abae811769d3ca4d1a620bc175" exitCode=0 Dec 09 13:09:27 crc kubenswrapper[4970]: I1209 13:09:27.832765 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wt2js" event={"ID":"69343a58-2578-40b1-af0f-55a5b4cb4049","Type":"ContainerDied","Data":"513c9c3d0f07271e225159fe7ff1c8dfc5a8d1abae811769d3ca4d1a620bc175"} Dec 09 13:09:27 crc kubenswrapper[4970]: I1209 13:09:27.833054 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wt2js" event={"ID":"69343a58-2578-40b1-af0f-55a5b4cb4049","Type":"ContainerDied","Data":"1435ed9e8834f7fcd4b65da679a8b271a832ddfa8e8d8edc33950782b61118f8"} Dec 09 13:09:27 crc kubenswrapper[4970]: I1209 13:09:27.833076 4970 scope.go:117] "RemoveContainer" containerID="513c9c3d0f07271e225159fe7ff1c8dfc5a8d1abae811769d3ca4d1a620bc175" Dec 09 13:09:27 crc kubenswrapper[4970]: I1209 13:09:27.832778 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wt2js" Dec 09 13:09:27 crc kubenswrapper[4970]: I1209 13:09:27.865468 4970 scope.go:117] "RemoveContainer" containerID="ff858b73160c474f2173103722ee607362b3e4245bcafaffa3ade5216f121054" Dec 09 13:09:27 crc kubenswrapper[4970]: I1209 13:09:27.883689 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wt2js"] Dec 09 13:09:27 crc kubenswrapper[4970]: I1209 13:09:27.901241 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wt2js"] Dec 09 13:09:27 crc kubenswrapper[4970]: I1209 13:09:27.905471 4970 scope.go:117] "RemoveContainer" containerID="1a8b97b50bf90ad7dbbbeac37988a96fa995c930ca0aedef8f5cbe11c37d2aaf" Dec 09 13:09:27 crc kubenswrapper[4970]: I1209 13:09:27.963712 4970 scope.go:117] "RemoveContainer" containerID="513c9c3d0f07271e225159fe7ff1c8dfc5a8d1abae811769d3ca4d1a620bc175" Dec 09 13:09:27 crc kubenswrapper[4970]: E1209 13:09:27.964087 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"513c9c3d0f07271e225159fe7ff1c8dfc5a8d1abae811769d3ca4d1a620bc175\": container with ID starting with 513c9c3d0f07271e225159fe7ff1c8dfc5a8d1abae811769d3ca4d1a620bc175 not found: ID does not exist" containerID="513c9c3d0f07271e225159fe7ff1c8dfc5a8d1abae811769d3ca4d1a620bc175" Dec 09 13:09:27 crc kubenswrapper[4970]: I1209 13:09:27.964134 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"513c9c3d0f07271e225159fe7ff1c8dfc5a8d1abae811769d3ca4d1a620bc175"} err="failed to get container status \"513c9c3d0f07271e225159fe7ff1c8dfc5a8d1abae811769d3ca4d1a620bc175\": rpc error: code = NotFound desc = could not find container \"513c9c3d0f07271e225159fe7ff1c8dfc5a8d1abae811769d3ca4d1a620bc175\": container with ID starting with 513c9c3d0f07271e225159fe7ff1c8dfc5a8d1abae811769d3ca4d1a620bc175 not found: ID does not exist" Dec 09 13:09:27 crc kubenswrapper[4970]: I1209 13:09:27.964157 4970 scope.go:117] "RemoveContainer" containerID="ff858b73160c474f2173103722ee607362b3e4245bcafaffa3ade5216f121054" Dec 09 13:09:27 crc kubenswrapper[4970]: E1209 13:09:27.964768 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff858b73160c474f2173103722ee607362b3e4245bcafaffa3ade5216f121054\": container with ID starting with ff858b73160c474f2173103722ee607362b3e4245bcafaffa3ade5216f121054 not found: ID does not exist" containerID="ff858b73160c474f2173103722ee607362b3e4245bcafaffa3ade5216f121054" Dec 09 13:09:27 crc kubenswrapper[4970]: I1209 13:09:27.964803 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff858b73160c474f2173103722ee607362b3e4245bcafaffa3ade5216f121054"} err="failed to get container status \"ff858b73160c474f2173103722ee607362b3e4245bcafaffa3ade5216f121054\": rpc error: code = NotFound desc = could not find container \"ff858b73160c474f2173103722ee607362b3e4245bcafaffa3ade5216f121054\": container with ID starting with ff858b73160c474f2173103722ee607362b3e4245bcafaffa3ade5216f121054 not found: ID does not exist" Dec 09 13:09:27 crc kubenswrapper[4970]: I1209 13:09:27.964823 4970 scope.go:117] "RemoveContainer" containerID="1a8b97b50bf90ad7dbbbeac37988a96fa995c930ca0aedef8f5cbe11c37d2aaf" Dec 09 13:09:27 crc kubenswrapper[4970]: E1209 13:09:27.965066 4970 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"1a8b97b50bf90ad7dbbbeac37988a96fa995c930ca0aedef8f5cbe11c37d2aaf\": container with ID starting with 1a8b97b50bf90ad7dbbbeac37988a96fa995c930ca0aedef8f5cbe11c37d2aaf not found: ID does not exist" containerID="1a8b97b50bf90ad7dbbbeac37988a96fa995c930ca0aedef8f5cbe11c37d2aaf" Dec 09 13:09:27 crc kubenswrapper[4970]: I1209 13:09:27.965092 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a8b97b50bf90ad7dbbbeac37988a96fa995c930ca0aedef8f5cbe11c37d2aaf"} err="failed to get container status \"1a8b97b50bf90ad7dbbbeac37988a96fa995c930ca0aedef8f5cbe11c37d2aaf\": rpc error: code = NotFound desc = could not find container \"1a8b97b50bf90ad7dbbbeac37988a96fa995c930ca0aedef8f5cbe11c37d2aaf\": container with ID starting with 1a8b97b50bf90ad7dbbbeac37988a96fa995c930ca0aedef8f5cbe11c37d2aaf not found: ID does not exist" Dec 09 13:09:29 crc kubenswrapper[4970]: I1209 13:09:29.813092 4970 scope.go:117] "RemoveContainer" containerID="b52590bfcf83aef355cce3d9df54a920b86a238618ae472d4c9d5510ccd01139" Dec 09 13:09:29 crc kubenswrapper[4970]: E1209 13:09:29.815077 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:09:29 crc kubenswrapper[4970]: I1209 13:09:29.825874 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69343a58-2578-40b1-af0f-55a5b4cb4049" path="/var/lib/kubelet/pods/69343a58-2578-40b1-af0f-55a5b4cb4049/volumes" Dec 09 13:09:30 crc kubenswrapper[4970]: E1209 13:09:30.887577 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69343a58_2578_40b1_af0f_55a5b4cb4049.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69343a58_2578_40b1_af0f_55a5b4cb4049.slice/crio-1435ed9e8834f7fcd4b65da679a8b271a832ddfa8e8d8edc33950782b61118f8\": RecentStats: unable to find data in memory cache]" Dec 09 13:09:33 crc kubenswrapper[4970]: E1209 13:09:33.814122 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:09:39 crc kubenswrapper[4970]: E1209 13:09:39.815584 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:09:40 crc kubenswrapper[4970]: I1209 13:09:40.814216 4970 scope.go:117] "RemoveContainer" containerID="b52590bfcf83aef355cce3d9df54a920b86a238618ae472d4c9d5510ccd01139" Dec 09 13:09:40 crc kubenswrapper[4970]: E1209 13:09:40.815734 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:09:41 crc kubenswrapper[4970]: E1209 13:09:41.204411 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69343a58_2578_40b1_af0f_55a5b4cb4049.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69343a58_2578_40b1_af0f_55a5b4cb4049.slice/crio-1435ed9e8834f7fcd4b65da679a8b271a832ddfa8e8d8edc33950782b61118f8\": RecentStats: unable to find data in memory cache]" Dec 09 13:09:41 crc kubenswrapper[4970]: E1209 13:09:41.758747 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69343a58_2578_40b1_af0f_55a5b4cb4049.slice/crio-1435ed9e8834f7fcd4b65da679a8b271a832ddfa8e8d8edc33950782b61118f8\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69343a58_2578_40b1_af0f_55a5b4cb4049.slice\": RecentStats: unable to find data in memory cache]" Dec 09 13:09:47 crc kubenswrapper[4970]: E1209 13:09:47.827453 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:09:48 crc kubenswrapper[4970]: E1209 13:09:48.249231 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69343a58_2578_40b1_af0f_55a5b4cb4049.slice/crio-1435ed9e8834f7fcd4b65da679a8b271a832ddfa8e8d8edc33950782b61118f8\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69343a58_2578_40b1_af0f_55a5b4cb4049.slice\": RecentStats: unable to find data in memory cache]" Dec 09 13:09:48 crc kubenswrapper[4970]: E1209 13:09:48.249810 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69343a58_2578_40b1_af0f_55a5b4cb4049.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69343a58_2578_40b1_af0f_55a5b4cb4049.slice/crio-1435ed9e8834f7fcd4b65da679a8b271a832ddfa8e8d8edc33950782b61118f8\": RecentStats: unable to find data in memory cache]" Dec 09 13:09:51 crc kubenswrapper[4970]: E1209 13:09:51.250951 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69343a58_2578_40b1_af0f_55a5b4cb4049.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69343a58_2578_40b1_af0f_55a5b4cb4049.slice/crio-1435ed9e8834f7fcd4b65da679a8b271a832ddfa8e8d8edc33950782b61118f8\": 
RecentStats: unable to find data in memory cache]" Dec 09 13:09:52 crc kubenswrapper[4970]: I1209 13:09:52.814387 4970 scope.go:117] "RemoveContainer" containerID="b52590bfcf83aef355cce3d9df54a920b86a238618ae472d4c9d5510ccd01139" Dec 09 13:09:52 crc kubenswrapper[4970]: E1209 13:09:52.815488 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:09:54 crc kubenswrapper[4970]: E1209 13:09:54.816087 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:09:57 crc kubenswrapper[4970]: E1209 13:09:57.030010 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69343a58_2578_40b1_af0f_55a5b4cb4049.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69343a58_2578_40b1_af0f_55a5b4cb4049.slice/crio-1435ed9e8834f7fcd4b65da679a8b271a832ddfa8e8d8edc33950782b61118f8\": RecentStats: unable to find data in memory cache]" Dec 09 13:09:59 crc kubenswrapper[4970]: E1209 13:09:59.815744 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:10:01 crc kubenswrapper[4970]: E1209 13:10:01.304006 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69343a58_2578_40b1_af0f_55a5b4cb4049.slice/crio-1435ed9e8834f7fcd4b65da679a8b271a832ddfa8e8d8edc33950782b61118f8\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69343a58_2578_40b1_af0f_55a5b4cb4049.slice\": RecentStats: unable to find data in memory cache]" Dec 09 13:10:05 crc kubenswrapper[4970]: I1209 13:10:05.814051 4970 scope.go:117] "RemoveContainer" containerID="b52590bfcf83aef355cce3d9df54a920b86a238618ae472d4c9d5510ccd01139" Dec 09 13:10:05 crc kubenswrapper[4970]: E1209 13:10:05.815464 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:10:07 crc kubenswrapper[4970]: E1209 13:10:07.834661 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:10:11 crc kubenswrapper[4970]: E1209 13:10:11.690934 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69343a58_2578_40b1_af0f_55a5b4cb4049.slice/crio-1435ed9e8834f7fcd4b65da679a8b271a832ddfa8e8d8edc33950782b61118f8\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69343a58_2578_40b1_af0f_55a5b4cb4049.slice\": RecentStats: unable to find data in memory cache]" Dec 09 13:10:11 crc kubenswrapper[4970]: E1209 13:10:11.751505 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69343a58_2578_40b1_af0f_55a5b4cb4049.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69343a58_2578_40b1_af0f_55a5b4cb4049.slice/crio-1435ed9e8834f7fcd4b65da679a8b271a832ddfa8e8d8edc33950782b61118f8\": RecentStats: unable to find data in memory cache]" Dec 09 13:10:13 crc kubenswrapper[4970]: E1209 13:10:13.815930 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:10:19 crc kubenswrapper[4970]: I1209 13:10:19.813931 4970 scope.go:117] "RemoveContainer" containerID="b52590bfcf83aef355cce3d9df54a920b86a238618ae472d4c9d5510ccd01139" Dec 09 13:10:19 crc kubenswrapper[4970]: E1209 13:10:19.815450 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:10:21 crc kubenswrapper[4970]: E1209 13:10:21.818510 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:10:21 crc kubenswrapper[4970]: E1209 13:10:21.995290 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69343a58_2578_40b1_af0f_55a5b4cb4049.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69343a58_2578_40b1_af0f_55a5b4cb4049.slice/crio-1435ed9e8834f7fcd4b65da679a8b271a832ddfa8e8d8edc33950782b61118f8\": RecentStats: unable to find data in memory cache]" Dec 09 13:10:26 crc kubenswrapper[4970]: E1209 13:10:26.802489 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial 
failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69343a58_2578_40b1_af0f_55a5b4cb4049.slice/crio-1435ed9e8834f7fcd4b65da679a8b271a832ddfa8e8d8edc33950782b61118f8\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69343a58_2578_40b1_af0f_55a5b4cb4049.slice\": RecentStats: unable to find data in memory cache]" Dec 09 13:10:27 crc kubenswrapper[4970]: E1209 13:10:27.831719 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:10:32 crc kubenswrapper[4970]: I1209 13:10:32.813558 4970 scope.go:117] "RemoveContainer" containerID="b52590bfcf83aef355cce3d9df54a920b86a238618ae472d4c9d5510ccd01139" Dec 09 13:10:32 crc kubenswrapper[4970]: E1209 13:10:32.815427 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:10:34 crc kubenswrapper[4970]: E1209 13:10:34.815293 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:10:39 crc kubenswrapper[4970]: E1209 13:10:39.819653 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:10:44 crc kubenswrapper[4970]: I1209 13:10:44.813585 4970 scope.go:117] "RemoveContainer" containerID="b52590bfcf83aef355cce3d9df54a920b86a238618ae472d4c9d5510ccd01139" Dec 09 13:10:44 crc kubenswrapper[4970]: E1209 13:10:44.814226 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:10:46 crc kubenswrapper[4970]: E1209 13:10:46.817053 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:10:54 crc kubenswrapper[4970]: E1209 13:10:54.816003 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:10:58 crc kubenswrapper[4970]: I1209 13:10:58.814408 4970 scope.go:117] "RemoveContainer" containerID="b52590bfcf83aef355cce3d9df54a920b86a238618ae472d4c9d5510ccd01139" Dec 09 13:10:58 crc kubenswrapper[4970]: E1209 13:10:58.815667 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:10:58 crc kubenswrapper[4970]: E1209 13:10:58.815713 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:11:05 crc kubenswrapper[4970]: E1209 13:11:05.816958 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:11:09 crc kubenswrapper[4970]: E1209 13:11:09.815715 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:11:11 crc kubenswrapper[4970]: I1209 13:11:11.813593 4970 scope.go:117] "RemoveContainer" containerID="b52590bfcf83aef355cce3d9df54a920b86a238618ae472d4c9d5510ccd01139" Dec 09 13:11:11 crc kubenswrapper[4970]: E1209 13:11:11.814859 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:11:18 crc kubenswrapper[4970]: E1209 13:11:18.819051 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:11:23 crc kubenswrapper[4970]: E1209 13:11:23.816481 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:11:25 crc kubenswrapper[4970]: I1209 13:11:25.812477 4970 scope.go:117] "RemoveContainer" containerID="b52590bfcf83aef355cce3d9df54a920b86a238618ae472d4c9d5510ccd01139" Dec 09 13:11:25 crc kubenswrapper[4970]: E1209 13:11:25.813139 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:11:33 crc kubenswrapper[4970]: E1209 13:11:33.814907 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:11:37 crc kubenswrapper[4970]: E1209 13:11:37.826390 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:11:40 crc kubenswrapper[4970]: I1209 13:11:40.813681 4970 scope.go:117] "RemoveContainer" containerID="b52590bfcf83aef355cce3d9df54a920b86a238618ae472d4c9d5510ccd01139" Dec 09 13:11:40 crc kubenswrapper[4970]: E1209 13:11:40.814842 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:11:42 crc kubenswrapper[4970]: I1209 13:11:42.129869 4970 scope.go:117] "RemoveContainer" containerID="802ef965f40c60cb72a098a4e8b99a793c5057ca2521f92726bb9377992513af" Dec 09 13:11:44 crc kubenswrapper[4970]: E1209 13:11:44.818307 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:11:52 crc kubenswrapper[4970]: E1209 13:11:52.815598 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:11:55 crc kubenswrapper[4970]: I1209 13:11:55.813291 4970 scope.go:117] "RemoveContainer" containerID="b52590bfcf83aef355cce3d9df54a920b86a238618ae472d4c9d5510ccd01139" Dec 09 13:11:55 crc kubenswrapper[4970]: E1209 13:11:55.813795 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:11:56 crc kubenswrapper[4970]: E1209 13:11:56.816694 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:12:04 crc kubenswrapper[4970]: E1209 13:12:04.815793 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:12:09 crc kubenswrapper[4970]: I1209 13:12:09.813672 4970 scope.go:117] "RemoveContainer" containerID="b52590bfcf83aef355cce3d9df54a920b86a238618ae472d4c9d5510ccd01139" Dec 09 13:12:09 crc kubenswrapper[4970]: E1209 13:12:09.814816 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:12:11 crc kubenswrapper[4970]: E1209 13:12:11.816931 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:12:18 crc kubenswrapper[4970]: E1209 13:12:18.818290 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:12:22 crc kubenswrapper[4970]: I1209 13:12:22.813604 4970 scope.go:117] "RemoveContainer" containerID="b52590bfcf83aef355cce3d9df54a920b86a238618ae472d4c9d5510ccd01139" Dec 09 13:12:23 crc kubenswrapper[4970]: I1209 13:12:23.191899 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerStarted","Data":"337e0c068266381b4f247b6f01ee6c0c8d836da5e7c6980083c2bc38ba22673b"} Dec 09 13:12:26 crc kubenswrapper[4970]: E1209 13:12:26.815423 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:12:29 crc kubenswrapper[4970]: E1209 
13:12:29.815256 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:12:37 crc kubenswrapper[4970]: I1209 13:12:37.383830 4970 generic.go:334] "Generic (PLEG): container finished" podID="711dabf9-95cd-423b-ad4b-2273f430d8f2" containerID="f3f8b83a3958b133264d0d38a9739482c822e451943db4366481ef10e40c5649" exitCode=2 Dec 09 13:12:37 crc kubenswrapper[4970]: I1209 13:12:37.383913 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nrdbk" event={"ID":"711dabf9-95cd-423b-ad4b-2273f430d8f2","Type":"ContainerDied","Data":"f3f8b83a3958b133264d0d38a9739482c822e451943db4366481ef10e40c5649"} Dec 09 13:12:38 crc kubenswrapper[4970]: E1209 13:12:38.814173 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:12:39 crc kubenswrapper[4970]: I1209 13:12:39.089717 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nrdbk" Dec 09 13:12:39 crc kubenswrapper[4970]: I1209 13:12:39.257727 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lj789\" (UniqueName: \"kubernetes.io/projected/711dabf9-95cd-423b-ad4b-2273f430d8f2-kube-api-access-lj789\") pod \"711dabf9-95cd-423b-ad4b-2273f430d8f2\" (UID: \"711dabf9-95cd-423b-ad4b-2273f430d8f2\") " Dec 09 13:12:39 crc kubenswrapper[4970]: I1209 13:12:39.257848 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/711dabf9-95cd-423b-ad4b-2273f430d8f2-ssh-key\") pod \"711dabf9-95cd-423b-ad4b-2273f430d8f2\" (UID: \"711dabf9-95cd-423b-ad4b-2273f430d8f2\") " Dec 09 13:12:39 crc kubenswrapper[4970]: I1209 13:12:39.258096 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/711dabf9-95cd-423b-ad4b-2273f430d8f2-inventory\") pod \"711dabf9-95cd-423b-ad4b-2273f430d8f2\" (UID: \"711dabf9-95cd-423b-ad4b-2273f430d8f2\") " Dec 09 13:12:39 crc kubenswrapper[4970]: I1209 13:12:39.265192 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/711dabf9-95cd-423b-ad4b-2273f430d8f2-kube-api-access-lj789" (OuterVolumeSpecName: "kube-api-access-lj789") pod "711dabf9-95cd-423b-ad4b-2273f430d8f2" (UID: "711dabf9-95cd-423b-ad4b-2273f430d8f2"). InnerVolumeSpecName "kube-api-access-lj789". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:12:39 crc kubenswrapper[4970]: I1209 13:12:39.295459 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/711dabf9-95cd-423b-ad4b-2273f430d8f2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "711dabf9-95cd-423b-ad4b-2273f430d8f2" (UID: "711dabf9-95cd-423b-ad4b-2273f430d8f2"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 13:12:39 crc kubenswrapper[4970]: I1209 13:12:39.324391 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/711dabf9-95cd-423b-ad4b-2273f430d8f2-inventory" (OuterVolumeSpecName: "inventory") pod "711dabf9-95cd-423b-ad4b-2273f430d8f2" (UID: "711dabf9-95cd-423b-ad4b-2273f430d8f2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 13:12:39 crc kubenswrapper[4970]: I1209 13:12:39.361205 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lj789\" (UniqueName: \"kubernetes.io/projected/711dabf9-95cd-423b-ad4b-2273f430d8f2-kube-api-access-lj789\") on node \"crc\" DevicePath \"\"" Dec 09 13:12:39 crc kubenswrapper[4970]: I1209 13:12:39.361270 4970 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/711dabf9-95cd-423b-ad4b-2273f430d8f2-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 13:12:39 crc kubenswrapper[4970]: I1209 13:12:39.361287 4970 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/711dabf9-95cd-423b-ad4b-2273f430d8f2-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 13:12:39 crc kubenswrapper[4970]: I1209 13:12:39.410800 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nrdbk" event={"ID":"711dabf9-95cd-423b-ad4b-2273f430d8f2","Type":"ContainerDied","Data":"3607a72ff925d68d60f70b0b27e5c582996782af63d7a50cf27746b26bf930a8"} Dec 09 13:12:39 crc kubenswrapper[4970]: I1209 13:12:39.410838 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3607a72ff925d68d60f70b0b27e5c582996782af63d7a50cf27746b26bf930a8" Dec 09 13:12:39 crc kubenswrapper[4970]: I1209 13:12:39.410889 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nrdbk" Dec 09 13:12:42 crc kubenswrapper[4970]: I1209 13:12:42.189458 4970 scope.go:117] "RemoveContainer" containerID="10e1d04c4b1aac117144b3eb501607698b23cde340256da59be11c0d0270f83b" Dec 09 13:12:42 crc kubenswrapper[4970]: I1209 13:12:42.217588 4970 scope.go:117] "RemoveContainer" containerID="d540838dd11e19d37303a0ee2ce626990c4247bce2ada10649e0348236caac29" Dec 09 13:12:42 crc kubenswrapper[4970]: E1209 13:12:42.815496 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:12:50 crc kubenswrapper[4970]: E1209 13:12:50.815443 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:12:55 crc kubenswrapper[4970]: E1209 13:12:55.815208 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:13:02 crc kubenswrapper[4970]: E1209 13:13:02.815724 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:13:06 crc kubenswrapper[4970]: I1209 13:13:06.756102 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="d67f963f-36c2-4056-8b35-5a08e547ba33" containerName="galera" probeResult="failure" output="command timed out" Dec 09 13:13:06 crc kubenswrapper[4970]: I1209 13:13:06.756268 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="d67f963f-36c2-4056-8b35-5a08e547ba33" containerName="galera" probeResult="failure" output="command timed out" Dec 09 13:13:06 crc kubenswrapper[4970]: E1209 13:13:06.815846 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:13:15 crc kubenswrapper[4970]: E1209 13:13:15.816669 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:13:19 crc kubenswrapper[4970]: E1209 13:13:19.815989 4970 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:13:20 crc kubenswrapper[4970]: I1209 13:13:20.645893 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2c9kz"] Dec 09 13:13:20 crc kubenswrapper[4970]: E1209 13:13:20.646915 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69343a58-2578-40b1-af0f-55a5b4cb4049" containerName="registry-server" Dec 09 13:13:20 crc kubenswrapper[4970]: I1209 13:13:20.646944 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="69343a58-2578-40b1-af0f-55a5b4cb4049" containerName="registry-server" Dec 09 13:13:20 crc kubenswrapper[4970]: E1209 13:13:20.646972 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="711dabf9-95cd-423b-ad4b-2273f430d8f2" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 13:13:20 crc kubenswrapper[4970]: I1209 13:13:20.646986 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="711dabf9-95cd-423b-ad4b-2273f430d8f2" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 13:13:20 crc kubenswrapper[4970]: E1209 13:13:20.647054 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69343a58-2578-40b1-af0f-55a5b4cb4049" containerName="extract-content" Dec 09 13:13:20 crc kubenswrapper[4970]: I1209 13:13:20.647067 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="69343a58-2578-40b1-af0f-55a5b4cb4049" containerName="extract-content" Dec 09 13:13:20 crc kubenswrapper[4970]: E1209 13:13:20.647100 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69343a58-2578-40b1-af0f-55a5b4cb4049" containerName="extract-utilities" Dec 09 13:13:20 crc kubenswrapper[4970]: I1209 13:13:20.647113 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="69343a58-2578-40b1-af0f-55a5b4cb4049" containerName="extract-utilities" Dec 09 13:13:20 crc kubenswrapper[4970]: I1209 13:13:20.647565 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="69343a58-2578-40b1-af0f-55a5b4cb4049" containerName="registry-server" Dec 09 13:13:20 crc kubenswrapper[4970]: I1209 13:13:20.647619 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="711dabf9-95cd-423b-ad4b-2273f430d8f2" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 13:13:20 crc kubenswrapper[4970]: I1209 13:13:20.650479 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2c9kz" Dec 09 13:13:20 crc kubenswrapper[4970]: I1209 13:13:20.665444 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2c9kz"] Dec 09 13:13:20 crc kubenswrapper[4970]: I1209 13:13:20.798643 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj9n9\" (UniqueName: \"kubernetes.io/projected/c2039513-08a6-42b6-b097-818d93b54586-kube-api-access-vj9n9\") pod \"redhat-operators-2c9kz\" (UID: \"c2039513-08a6-42b6-b097-818d93b54586\") " pod="openshift-marketplace/redhat-operators-2c9kz" Dec 09 13:13:20 crc kubenswrapper[4970]: I1209 13:13:20.799310 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2039513-08a6-42b6-b097-818d93b54586-catalog-content\") pod \"redhat-operators-2c9kz\" (UID: \"c2039513-08a6-42b6-b097-818d93b54586\") " pod="openshift-marketplace/redhat-operators-2c9kz" Dec 09 13:13:20 crc kubenswrapper[4970]: I1209 13:13:20.799398 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2039513-08a6-42b6-b097-818d93b54586-utilities\") pod \"redhat-operators-2c9kz\" (UID: \"c2039513-08a6-42b6-b097-818d93b54586\") " pod="openshift-marketplace/redhat-operators-2c9kz" Dec 09 13:13:20 crc kubenswrapper[4970]: I1209 13:13:20.901752 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2039513-08a6-42b6-b097-818d93b54586-catalog-content\") pod \"redhat-operators-2c9kz\" (UID: \"c2039513-08a6-42b6-b097-818d93b54586\") " pod="openshift-marketplace/redhat-operators-2c9kz" Dec 09 13:13:20 crc kubenswrapper[4970]: I1209 13:13:20.901802 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2039513-08a6-42b6-b097-818d93b54586-utilities\") pod \"redhat-operators-2c9kz\" (UID: \"c2039513-08a6-42b6-b097-818d93b54586\") " pod="openshift-marketplace/redhat-operators-2c9kz" Dec 09 13:13:20 crc kubenswrapper[4970]: I1209 13:13:20.901883 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj9n9\" (UniqueName: \"kubernetes.io/projected/c2039513-08a6-42b6-b097-818d93b54586-kube-api-access-vj9n9\") pod \"redhat-operators-2c9kz\" (UID: \"c2039513-08a6-42b6-b097-818d93b54586\") " pod="openshift-marketplace/redhat-operators-2c9kz" Dec 09 13:13:20 crc kubenswrapper[4970]: I1209 13:13:20.902364 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2039513-08a6-42b6-b097-818d93b54586-catalog-content\") pod \"redhat-operators-2c9kz\" (UID: \"c2039513-08a6-42b6-b097-818d93b54586\") " pod="openshift-marketplace/redhat-operators-2c9kz" Dec 09 13:13:20 crc kubenswrapper[4970]: I1209 13:13:20.902757 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2039513-08a6-42b6-b097-818d93b54586-utilities\") pod \"redhat-operators-2c9kz\" (UID: \"c2039513-08a6-42b6-b097-818d93b54586\") " pod="openshift-marketplace/redhat-operators-2c9kz" Dec 09 13:13:20 crc kubenswrapper[4970]: I1209 13:13:20.923775 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-vj9n9\" (UniqueName: \"kubernetes.io/projected/c2039513-08a6-42b6-b097-818d93b54586-kube-api-access-vj9n9\") pod \"redhat-operators-2c9kz\" (UID: \"c2039513-08a6-42b6-b097-818d93b54586\") " pod="openshift-marketplace/redhat-operators-2c9kz" Dec 09 13:13:20 crc kubenswrapper[4970]: I1209 13:13:20.975158 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2c9kz" Dec 09 13:13:21 crc kubenswrapper[4970]: I1209 13:13:21.455542 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2c9kz"] Dec 09 13:13:21 crc kubenswrapper[4970]: I1209 13:13:21.909110 4970 generic.go:334] "Generic (PLEG): container finished" podID="c2039513-08a6-42b6-b097-818d93b54586" containerID="24b8be79425fa6b6bf6723cbbf1faa56e327cb7a63eb49cd6fa59c2a2d7c68c3" exitCode=0 Dec 09 13:13:21 crc kubenswrapper[4970]: I1209 13:13:21.909408 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2c9kz" event={"ID":"c2039513-08a6-42b6-b097-818d93b54586","Type":"ContainerDied","Data":"24b8be79425fa6b6bf6723cbbf1faa56e327cb7a63eb49cd6fa59c2a2d7c68c3"} Dec 09 13:13:21 crc kubenswrapper[4970]: I1209 13:13:21.909482 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2c9kz" event={"ID":"c2039513-08a6-42b6-b097-818d93b54586","Type":"ContainerStarted","Data":"b7250d75ad3971f7be5c08c510bdf07fb73fe732cdaeb71708ce959ef372f32c"} Dec 09 13:13:22 crc kubenswrapper[4970]: I1209 13:13:22.922180 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2c9kz" event={"ID":"c2039513-08a6-42b6-b097-818d93b54586","Type":"ContainerStarted","Data":"ca618d9b7d0ea39ebf4cf91e87be844a5ac5c5e6e85982a1849a3b2e238648c6"} Dec 09 13:13:26 crc kubenswrapper[4970]: E1209 13:13:26.817827 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:13:27 crc kubenswrapper[4970]: I1209 13:13:27.978673 4970 generic.go:334] "Generic (PLEG): container finished" podID="c2039513-08a6-42b6-b097-818d93b54586" containerID="ca618d9b7d0ea39ebf4cf91e87be844a5ac5c5e6e85982a1849a3b2e238648c6" exitCode=0 Dec 09 13:13:27 crc kubenswrapper[4970]: I1209 13:13:27.978728 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2c9kz" event={"ID":"c2039513-08a6-42b6-b097-818d93b54586","Type":"ContainerDied","Data":"ca618d9b7d0ea39ebf4cf91e87be844a5ac5c5e6e85982a1849a3b2e238648c6"} Dec 09 13:13:28 crc kubenswrapper[4970]: I1209 13:13:28.990965 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2c9kz" event={"ID":"c2039513-08a6-42b6-b097-818d93b54586","Type":"ContainerStarted","Data":"376791c06fb8d3bb20351fb3d999920d0e8df873893b3457035bd397f0ca009e"} Dec 09 13:13:29 crc kubenswrapper[4970]: I1209 13:13:29.014905 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2c9kz" podStartSLOduration=2.530266036 podStartE2EDuration="9.014888558s" podCreationTimestamp="2025-12-09 13:13:20 +0000 UTC" firstStartedPulling="2025-12-09 13:13:21.911430829 +0000 UTC m=+4014.471911880" lastFinishedPulling="2025-12-09 
13:13:28.396053351 +0000 UTC m=+4020.956534402" observedRunningTime="2025-12-09 13:13:29.005427152 +0000 UTC m=+4021.565908193" watchObservedRunningTime="2025-12-09 13:13:29.014888558 +0000 UTC m=+4021.575369609" Dec 09 13:13:30 crc kubenswrapper[4970]: I1209 13:13:30.975950 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2c9kz" Dec 09 13:13:30 crc kubenswrapper[4970]: I1209 13:13:30.976463 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2c9kz" Dec 09 13:13:32 crc kubenswrapper[4970]: I1209 13:13:32.511993 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2c9kz" podUID="c2039513-08a6-42b6-b097-818d93b54586" containerName="registry-server" probeResult="failure" output=< Dec 09 13:13:32 crc kubenswrapper[4970]: timeout: failed to connect service ":50051" within 1s Dec 09 13:13:32 crc kubenswrapper[4970]: > Dec 09 13:13:32 crc kubenswrapper[4970]: E1209 13:13:32.814955 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:13:40 crc kubenswrapper[4970]: I1209 13:13:40.815215 4970 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 13:13:40 crc kubenswrapper[4970]: E1209 13:13:40.952398 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 13:13:40 crc kubenswrapper[4970]: E1209 13:13:40.952472 4970 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 13:13:40 crc kubenswrapper[4970]: E1209 13:13:40.952628 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w5gl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-snjvx_openstack(cda5b8c0-51ad-4a63-bf1d-e3546f098ad3): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 13:13:40 crc kubenswrapper[4970]: E1209 13:13:40.954191 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:13:42 crc kubenswrapper[4970]: I1209 13:13:42.060906 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2c9kz" podUID="c2039513-08a6-42b6-b097-818d93b54586" containerName="registry-server" probeResult="failure" output=< Dec 09 13:13:42 crc kubenswrapper[4970]: timeout: failed to connect service ":50051" within 1s Dec 09 13:13:42 crc kubenswrapper[4970]: > Dec 09 13:13:44 crc kubenswrapper[4970]: E1209 13:13:44.815457 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:13:46 crc kubenswrapper[4970]: I1209 13:13:46.756443 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="d67f963f-36c2-4056-8b35-5a08e547ba33" containerName="galera" probeResult="failure" output="command timed out" Dec 09 13:13:51 crc kubenswrapper[4970]: I1209 13:13:51.055755 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2c9kz" Dec 09 13:13:51 crc kubenswrapper[4970]: I1209 13:13:51.112100 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2c9kz" Dec 09 13:13:51 crc kubenswrapper[4970]: E1209 13:13:51.815690 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:13:51 crc kubenswrapper[4970]: I1209 13:13:51.841758 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2c9kz"] Dec 09 13:13:52 crc kubenswrapper[4970]: I1209 13:13:52.289838 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2c9kz" podUID="c2039513-08a6-42b6-b097-818d93b54586" containerName="registry-server" containerID="cri-o://376791c06fb8d3bb20351fb3d999920d0e8df873893b3457035bd397f0ca009e" gracePeriod=2 Dec 09 13:13:52 crc kubenswrapper[4970]: I1209 13:13:52.983821 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2c9kz" Dec 09 13:13:53 crc kubenswrapper[4970]: I1209 13:13:53.109339 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vj9n9\" (UniqueName: \"kubernetes.io/projected/c2039513-08a6-42b6-b097-818d93b54586-kube-api-access-vj9n9\") pod \"c2039513-08a6-42b6-b097-818d93b54586\" (UID: \"c2039513-08a6-42b6-b097-818d93b54586\") " Dec 09 13:13:53 crc kubenswrapper[4970]: I1209 13:13:53.109499 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2039513-08a6-42b6-b097-818d93b54586-catalog-content\") pod \"c2039513-08a6-42b6-b097-818d93b54586\" (UID: \"c2039513-08a6-42b6-b097-818d93b54586\") " Dec 09 13:13:53 crc kubenswrapper[4970]: I1209 13:13:53.109805 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2039513-08a6-42b6-b097-818d93b54586-utilities\") pod \"c2039513-08a6-42b6-b097-818d93b54586\" (UID: \"c2039513-08a6-42b6-b097-818d93b54586\") " Dec 09 13:13:53 crc kubenswrapper[4970]: I1209 13:13:53.110726 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2039513-08a6-42b6-b097-818d93b54586-utilities" (OuterVolumeSpecName: "utilities") pod "c2039513-08a6-42b6-b097-818d93b54586" (UID: "c2039513-08a6-42b6-b097-818d93b54586"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:13:53 crc kubenswrapper[4970]: I1209 13:13:53.111648 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2039513-08a6-42b6-b097-818d93b54586-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 13:13:53 crc kubenswrapper[4970]: I1209 13:13:53.117790 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2039513-08a6-42b6-b097-818d93b54586-kube-api-access-vj9n9" (OuterVolumeSpecName: "kube-api-access-vj9n9") pod "c2039513-08a6-42b6-b097-818d93b54586" (UID: "c2039513-08a6-42b6-b097-818d93b54586"). InnerVolumeSpecName "kube-api-access-vj9n9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:13:53 crc kubenswrapper[4970]: I1209 13:13:53.214755 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vj9n9\" (UniqueName: \"kubernetes.io/projected/c2039513-08a6-42b6-b097-818d93b54586-kube-api-access-vj9n9\") on node \"crc\" DevicePath \"\"" Dec 09 13:13:53 crc kubenswrapper[4970]: I1209 13:13:53.276827 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2039513-08a6-42b6-b097-818d93b54586-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c2039513-08a6-42b6-b097-818d93b54586" (UID: "c2039513-08a6-42b6-b097-818d93b54586"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:13:53 crc kubenswrapper[4970]: I1209 13:13:53.315818 4970 generic.go:334] "Generic (PLEG): container finished" podID="c2039513-08a6-42b6-b097-818d93b54586" containerID="376791c06fb8d3bb20351fb3d999920d0e8df873893b3457035bd397f0ca009e" exitCode=0 Dec 09 13:13:53 crc kubenswrapper[4970]: I1209 13:13:53.315977 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2c9kz" event={"ID":"c2039513-08a6-42b6-b097-818d93b54586","Type":"ContainerDied","Data":"376791c06fb8d3bb20351fb3d999920d0e8df873893b3457035bd397f0ca009e"} Dec 09 13:13:53 crc kubenswrapper[4970]: I1209 13:13:53.317034 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2c9kz" event={"ID":"c2039513-08a6-42b6-b097-818d93b54586","Type":"ContainerDied","Data":"b7250d75ad3971f7be5c08c510bdf07fb73fe732cdaeb71708ce959ef372f32c"} Dec 09 13:13:53 crc kubenswrapper[4970]: I1209 13:13:53.317111 4970 scope.go:117] "RemoveContainer" containerID="376791c06fb8d3bb20351fb3d999920d0e8df873893b3457035bd397f0ca009e" Dec 09 13:13:53 crc kubenswrapper[4970]: I1209 13:13:53.316037 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2c9kz" Dec 09 13:13:53 crc kubenswrapper[4970]: I1209 13:13:53.316646 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2039513-08a6-42b6-b097-818d93b54586-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 13:13:53 crc kubenswrapper[4970]: I1209 13:13:53.359438 4970 scope.go:117] "RemoveContainer" containerID="ca618d9b7d0ea39ebf4cf91e87be844a5ac5c5e6e85982a1849a3b2e238648c6" Dec 09 13:13:53 crc kubenswrapper[4970]: I1209 13:13:53.374097 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2c9kz"] Dec 09 13:13:53 crc kubenswrapper[4970]: I1209 13:13:53.388707 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2c9kz"] Dec 09 13:13:53 crc kubenswrapper[4970]: I1209 13:13:53.390816 4970 scope.go:117] "RemoveContainer" containerID="24b8be79425fa6b6bf6723cbbf1faa56e327cb7a63eb49cd6fa59c2a2d7c68c3" Dec 09 13:13:53 crc kubenswrapper[4970]: I1209 13:13:53.446717 4970 scope.go:117] "RemoveContainer" containerID="376791c06fb8d3bb20351fb3d999920d0e8df873893b3457035bd397f0ca009e" Dec 09 13:13:53 crc kubenswrapper[4970]: E1209 13:13:53.447564 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"376791c06fb8d3bb20351fb3d999920d0e8df873893b3457035bd397f0ca009e\": container with ID starting with 376791c06fb8d3bb20351fb3d999920d0e8df873893b3457035bd397f0ca009e not found: ID does not exist" containerID="376791c06fb8d3bb20351fb3d999920d0e8df873893b3457035bd397f0ca009e" Dec 09 13:13:53 crc kubenswrapper[4970]: I1209 13:13:53.447606 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"376791c06fb8d3bb20351fb3d999920d0e8df873893b3457035bd397f0ca009e"} err="failed to get container status \"376791c06fb8d3bb20351fb3d999920d0e8df873893b3457035bd397f0ca009e\": rpc error: code = NotFound desc = could not find container \"376791c06fb8d3bb20351fb3d999920d0e8df873893b3457035bd397f0ca009e\": container with ID starting with 376791c06fb8d3bb20351fb3d999920d0e8df873893b3457035bd397f0ca009e not found: ID does not exist" Dec 09 13:13:53 crc 
kubenswrapper[4970]: I1209 13:13:53.447632 4970 scope.go:117] "RemoveContainer" containerID="ca618d9b7d0ea39ebf4cf91e87be844a5ac5c5e6e85982a1849a3b2e238648c6" Dec 09 13:13:53 crc kubenswrapper[4970]: E1209 13:13:53.448106 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca618d9b7d0ea39ebf4cf91e87be844a5ac5c5e6e85982a1849a3b2e238648c6\": container with ID starting with ca618d9b7d0ea39ebf4cf91e87be844a5ac5c5e6e85982a1849a3b2e238648c6 not found: ID does not exist" containerID="ca618d9b7d0ea39ebf4cf91e87be844a5ac5c5e6e85982a1849a3b2e238648c6" Dec 09 13:13:53 crc kubenswrapper[4970]: I1209 13:13:53.448181 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca618d9b7d0ea39ebf4cf91e87be844a5ac5c5e6e85982a1849a3b2e238648c6"} err="failed to get container status \"ca618d9b7d0ea39ebf4cf91e87be844a5ac5c5e6e85982a1849a3b2e238648c6\": rpc error: code = NotFound desc = could not find container \"ca618d9b7d0ea39ebf4cf91e87be844a5ac5c5e6e85982a1849a3b2e238648c6\": container with ID starting with ca618d9b7d0ea39ebf4cf91e87be844a5ac5c5e6e85982a1849a3b2e238648c6 not found: ID does not exist" Dec 09 13:13:53 crc kubenswrapper[4970]: I1209 13:13:53.448237 4970 scope.go:117] "RemoveContainer" containerID="24b8be79425fa6b6bf6723cbbf1faa56e327cb7a63eb49cd6fa59c2a2d7c68c3" Dec 09 13:13:53 crc kubenswrapper[4970]: E1209 13:13:53.448763 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24b8be79425fa6b6bf6723cbbf1faa56e327cb7a63eb49cd6fa59c2a2d7c68c3\": container with ID starting with 24b8be79425fa6b6bf6723cbbf1faa56e327cb7a63eb49cd6fa59c2a2d7c68c3 not found: ID does not exist" containerID="24b8be79425fa6b6bf6723cbbf1faa56e327cb7a63eb49cd6fa59c2a2d7c68c3" Dec 09 13:13:53 crc kubenswrapper[4970]: I1209 13:13:53.448796 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24b8be79425fa6b6bf6723cbbf1faa56e327cb7a63eb49cd6fa59c2a2d7c68c3"} err="failed to get container status \"24b8be79425fa6b6bf6723cbbf1faa56e327cb7a63eb49cd6fa59c2a2d7c68c3\": rpc error: code = NotFound desc = could not find container \"24b8be79425fa6b6bf6723cbbf1faa56e327cb7a63eb49cd6fa59c2a2d7c68c3\": container with ID starting with 24b8be79425fa6b6bf6723cbbf1faa56e327cb7a63eb49cd6fa59c2a2d7c68c3 not found: ID does not exist" Dec 09 13:13:53 crc kubenswrapper[4970]: I1209 13:13:53.824560 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2039513-08a6-42b6-b097-818d93b54586" path="/var/lib/kubelet/pods/c2039513-08a6-42b6-b097-818d93b54586/volumes" Dec 09 13:13:57 crc kubenswrapper[4970]: E1209 13:13:57.824104 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:14:02 crc kubenswrapper[4970]: E1209 13:14:02.816845 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:14:08 crc kubenswrapper[4970]: 
E1209 13:14:08.820595 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:14:13 crc kubenswrapper[4970]: E1209 13:14:13.816837 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:14:19 crc kubenswrapper[4970]: E1209 13:14:19.972512 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 13:14:19 crc kubenswrapper[4970]: E1209 13:14:19.972972 4970 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 13:14:19 crc kubenswrapper[4970]: E1209 13:14:19.973104 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n679h5b7h5f8h695h656h57bh5c7h7bh5c5hc4h5hbch654h5h5b7h54dh5fh688h548h57bh6fh54fh5d4hf7h5ffh54ch566h565hdchb4h65dh5fbq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqqs9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ea52f6b9-599e-4ac5-94c6-79949c705be8): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 13:14:19 crc kubenswrapper[4970]: E1209 13:14:19.974372 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:14:25 crc kubenswrapper[4970]: E1209 13:14:25.817175 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:14:30 crc kubenswrapper[4970]: E1209 13:14:30.818938 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:14:39 crc kubenswrapper[4970]: E1209 13:14:39.815473 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:14:44 crc kubenswrapper[4970]: E1209 13:14:44.815524 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:14:46 crc kubenswrapper[4970]: I1209 13:14:46.010811 4970 patch_prober.go:28] 
interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 13:14:46 crc kubenswrapper[4970]: I1209 13:14:46.011412 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 13:14:50 crc kubenswrapper[4970]: I1209 13:14:50.166088 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4qpmr"] Dec 09 13:14:50 crc kubenswrapper[4970]: E1209 13:14:50.167101 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2039513-08a6-42b6-b097-818d93b54586" containerName="extract-utilities" Dec 09 13:14:50 crc kubenswrapper[4970]: I1209 13:14:50.167120 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2039513-08a6-42b6-b097-818d93b54586" containerName="extract-utilities" Dec 09 13:14:50 crc kubenswrapper[4970]: E1209 13:14:50.167175 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2039513-08a6-42b6-b097-818d93b54586" containerName="extract-content" Dec 09 13:14:50 crc kubenswrapper[4970]: I1209 13:14:50.167182 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2039513-08a6-42b6-b097-818d93b54586" containerName="extract-content" Dec 09 13:14:50 crc kubenswrapper[4970]: E1209 13:14:50.167195 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2039513-08a6-42b6-b097-818d93b54586" containerName="registry-server" Dec 09 13:14:50 crc kubenswrapper[4970]: I1209 13:14:50.167203 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2039513-08a6-42b6-b097-818d93b54586" containerName="registry-server" Dec 09 13:14:50 crc kubenswrapper[4970]: I1209 13:14:50.167477 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2039513-08a6-42b6-b097-818d93b54586" containerName="registry-server" Dec 09 13:14:50 crc kubenswrapper[4970]: I1209 13:14:50.169239 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4qpmr" Dec 09 13:14:50 crc kubenswrapper[4970]: I1209 13:14:50.217293 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4qpmr"] Dec 09 13:14:50 crc kubenswrapper[4970]: I1209 13:14:50.298373 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/137bd751-88fa-4dfa-a6dc-917e7109b82e-utilities\") pod \"certified-operators-4qpmr\" (UID: \"137bd751-88fa-4dfa-a6dc-917e7109b82e\") " pod="openshift-marketplace/certified-operators-4qpmr" Dec 09 13:14:50 crc kubenswrapper[4970]: I1209 13:14:50.298731 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/137bd751-88fa-4dfa-a6dc-917e7109b82e-catalog-content\") pod \"certified-operators-4qpmr\" (UID: \"137bd751-88fa-4dfa-a6dc-917e7109b82e\") " pod="openshift-marketplace/certified-operators-4qpmr" Dec 09 13:14:50 crc kubenswrapper[4970]: I1209 13:14:50.298915 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p28j9\" (UniqueName: \"kubernetes.io/projected/137bd751-88fa-4dfa-a6dc-917e7109b82e-kube-api-access-p28j9\") pod \"certified-operators-4qpmr\" (UID: \"137bd751-88fa-4dfa-a6dc-917e7109b82e\") " pod="openshift-marketplace/certified-operators-4qpmr" Dec 09 13:14:50 crc kubenswrapper[4970]: I1209 13:14:50.401375 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/137bd751-88fa-4dfa-a6dc-917e7109b82e-utilities\") pod \"certified-operators-4qpmr\" (UID: \"137bd751-88fa-4dfa-a6dc-917e7109b82e\") " pod="openshift-marketplace/certified-operators-4qpmr" Dec 09 13:14:50 crc kubenswrapper[4970]: I1209 13:14:50.401472 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/137bd751-88fa-4dfa-a6dc-917e7109b82e-catalog-content\") pod \"certified-operators-4qpmr\" (UID: \"137bd751-88fa-4dfa-a6dc-917e7109b82e\") " pod="openshift-marketplace/certified-operators-4qpmr" Dec 09 13:14:50 crc kubenswrapper[4970]: I1209 13:14:50.401530 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p28j9\" (UniqueName: \"kubernetes.io/projected/137bd751-88fa-4dfa-a6dc-917e7109b82e-kube-api-access-p28j9\") pod \"certified-operators-4qpmr\" (UID: \"137bd751-88fa-4dfa-a6dc-917e7109b82e\") " pod="openshift-marketplace/certified-operators-4qpmr" Dec 09 13:14:50 crc kubenswrapper[4970]: I1209 13:14:50.402368 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/137bd751-88fa-4dfa-a6dc-917e7109b82e-utilities\") pod \"certified-operators-4qpmr\" (UID: \"137bd751-88fa-4dfa-a6dc-917e7109b82e\") " pod="openshift-marketplace/certified-operators-4qpmr" Dec 09 13:14:50 crc kubenswrapper[4970]: I1209 13:14:50.402411 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/137bd751-88fa-4dfa-a6dc-917e7109b82e-catalog-content\") pod \"certified-operators-4qpmr\" (UID: \"137bd751-88fa-4dfa-a6dc-917e7109b82e\") " pod="openshift-marketplace/certified-operators-4qpmr" Dec 09 13:14:50 crc kubenswrapper[4970]: I1209 13:14:50.437423 4970 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-p28j9\" (UniqueName: \"kubernetes.io/projected/137bd751-88fa-4dfa-a6dc-917e7109b82e-kube-api-access-p28j9\") pod \"certified-operators-4qpmr\" (UID: \"137bd751-88fa-4dfa-a6dc-917e7109b82e\") " pod="openshift-marketplace/certified-operators-4qpmr" Dec 09 13:14:50 crc kubenswrapper[4970]: I1209 13:14:50.506723 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4qpmr" Dec 09 13:14:51 crc kubenswrapper[4970]: I1209 13:14:51.109125 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4qpmr"] Dec 09 13:14:52 crc kubenswrapper[4970]: I1209 13:14:52.100834 4970 generic.go:334] "Generic (PLEG): container finished" podID="137bd751-88fa-4dfa-a6dc-917e7109b82e" containerID="ceb22ef134fd7a30d6f2e15f81c4f1139db3e0244ac56d3f886bf5906b5d43b5" exitCode=0 Dec 09 13:14:52 crc kubenswrapper[4970]: I1209 13:14:52.100889 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4qpmr" event={"ID":"137bd751-88fa-4dfa-a6dc-917e7109b82e","Type":"ContainerDied","Data":"ceb22ef134fd7a30d6f2e15f81c4f1139db3e0244ac56d3f886bf5906b5d43b5"} Dec 09 13:14:52 crc kubenswrapper[4970]: I1209 13:14:52.101110 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4qpmr" event={"ID":"137bd751-88fa-4dfa-a6dc-917e7109b82e","Type":"ContainerStarted","Data":"7c0711d379e0b107693c7fe3c3ea6dec17a45f7633bbf26bc03489aab4f78498"} Dec 09 13:14:53 crc kubenswrapper[4970]: I1209 13:14:53.121844 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4qpmr" event={"ID":"137bd751-88fa-4dfa-a6dc-917e7109b82e","Type":"ContainerStarted","Data":"0f504fc3255698e30a701cbc9dad753468f0f1fb9cc7d981b1af579b1fbb8c95"} Dec 09 13:14:53 crc kubenswrapper[4970]: E1209 13:14:53.815356 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:14:54 crc kubenswrapper[4970]: I1209 13:14:54.141678 4970 generic.go:334] "Generic (PLEG): container finished" podID="137bd751-88fa-4dfa-a6dc-917e7109b82e" containerID="0f504fc3255698e30a701cbc9dad753468f0f1fb9cc7d981b1af579b1fbb8c95" exitCode=0 Dec 09 13:14:54 crc kubenswrapper[4970]: I1209 13:14:54.141726 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4qpmr" event={"ID":"137bd751-88fa-4dfa-a6dc-917e7109b82e","Type":"ContainerDied","Data":"0f504fc3255698e30a701cbc9dad753468f0f1fb9cc7d981b1af579b1fbb8c95"} Dec 09 13:14:55 crc kubenswrapper[4970]: I1209 13:14:55.171032 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4qpmr" event={"ID":"137bd751-88fa-4dfa-a6dc-917e7109b82e","Type":"ContainerStarted","Data":"5a7ac8cba795fb90f835f9571dd7335a4e871819ad5d44ba10e735f30ca59770"} Dec 09 13:14:55 crc kubenswrapper[4970]: I1209 13:14:55.203693 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4qpmr" podStartSLOduration=2.7864407460000002 podStartE2EDuration="5.203671767s" podCreationTimestamp="2025-12-09 13:14:50 +0000 UTC" firstStartedPulling="2025-12-09 
13:14:52.103430015 +0000 UTC m=+4104.663911066" lastFinishedPulling="2025-12-09 13:14:54.520660986 +0000 UTC m=+4107.081142087" observedRunningTime="2025-12-09 13:14:55.194908001 +0000 UTC m=+4107.755389052" watchObservedRunningTime="2025-12-09 13:14:55.203671767 +0000 UTC m=+4107.764152828" Dec 09 13:14:59 crc kubenswrapper[4970]: E1209 13:14:59.815389 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:15:00 crc kubenswrapper[4970]: I1209 13:15:00.199052 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421435-96fqx"] Dec 09 13:15:00 crc kubenswrapper[4970]: I1209 13:15:00.202130 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421435-96fqx" Dec 09 13:15:00 crc kubenswrapper[4970]: I1209 13:15:00.205129 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 09 13:15:00 crc kubenswrapper[4970]: I1209 13:15:00.206337 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 09 13:15:00 crc kubenswrapper[4970]: I1209 13:15:00.210864 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421435-96fqx"] Dec 09 13:15:00 crc kubenswrapper[4970]: I1209 13:15:00.307745 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q52s9\" (UniqueName: \"kubernetes.io/projected/61c328a1-9f96-43db-99d7-3db50f63896a-kube-api-access-q52s9\") pod \"collect-profiles-29421435-96fqx\" (UID: \"61c328a1-9f96-43db-99d7-3db50f63896a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421435-96fqx" Dec 09 13:15:00 crc kubenswrapper[4970]: I1209 13:15:00.307809 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/61c328a1-9f96-43db-99d7-3db50f63896a-secret-volume\") pod \"collect-profiles-29421435-96fqx\" (UID: \"61c328a1-9f96-43db-99d7-3db50f63896a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421435-96fqx" Dec 09 13:15:00 crc kubenswrapper[4970]: I1209 13:15:00.307852 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61c328a1-9f96-43db-99d7-3db50f63896a-config-volume\") pod \"collect-profiles-29421435-96fqx\" (UID: \"61c328a1-9f96-43db-99d7-3db50f63896a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421435-96fqx" Dec 09 13:15:00 crc kubenswrapper[4970]: I1209 13:15:00.409968 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q52s9\" (UniqueName: \"kubernetes.io/projected/61c328a1-9f96-43db-99d7-3db50f63896a-kube-api-access-q52s9\") pod \"collect-profiles-29421435-96fqx\" (UID: \"61c328a1-9f96-43db-99d7-3db50f63896a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421435-96fqx" Dec 09 13:15:00 crc kubenswrapper[4970]: I1209 13:15:00.410044 4970 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/61c328a1-9f96-43db-99d7-3db50f63896a-secret-volume\") pod \"collect-profiles-29421435-96fqx\" (UID: \"61c328a1-9f96-43db-99d7-3db50f63896a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421435-96fqx" Dec 09 13:15:00 crc kubenswrapper[4970]: I1209 13:15:00.410102 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61c328a1-9f96-43db-99d7-3db50f63896a-config-volume\") pod \"collect-profiles-29421435-96fqx\" (UID: \"61c328a1-9f96-43db-99d7-3db50f63896a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421435-96fqx" Dec 09 13:15:00 crc kubenswrapper[4970]: I1209 13:15:00.411288 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61c328a1-9f96-43db-99d7-3db50f63896a-config-volume\") pod \"collect-profiles-29421435-96fqx\" (UID: \"61c328a1-9f96-43db-99d7-3db50f63896a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421435-96fqx" Dec 09 13:15:00 crc kubenswrapper[4970]: I1209 13:15:00.458912 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q52s9\" (UniqueName: \"kubernetes.io/projected/61c328a1-9f96-43db-99d7-3db50f63896a-kube-api-access-q52s9\") pod \"collect-profiles-29421435-96fqx\" (UID: \"61c328a1-9f96-43db-99d7-3db50f63896a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421435-96fqx" Dec 09 13:15:00 crc kubenswrapper[4970]: I1209 13:15:00.472850 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/61c328a1-9f96-43db-99d7-3db50f63896a-secret-volume\") pod \"collect-profiles-29421435-96fqx\" (UID: \"61c328a1-9f96-43db-99d7-3db50f63896a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421435-96fqx" Dec 09 13:15:00 crc kubenswrapper[4970]: I1209 13:15:00.506964 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4qpmr" Dec 09 13:15:00 crc kubenswrapper[4970]: I1209 13:15:00.507027 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4qpmr" Dec 09 13:15:00 crc kubenswrapper[4970]: I1209 13:15:00.536422 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421435-96fqx" Dec 09 13:15:00 crc kubenswrapper[4970]: I1209 13:15:00.567625 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4qpmr" Dec 09 13:15:01 crc kubenswrapper[4970]: I1209 13:15:01.102747 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421435-96fqx"] Dec 09 13:15:01 crc kubenswrapper[4970]: W1209 13:15:01.110186 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61c328a1_9f96_43db_99d7_3db50f63896a.slice/crio-28016989818d60033aa9825ce3031e59690a71e93204e5b22e0e8b3721edb4c0 WatchSource:0}: Error finding container 28016989818d60033aa9825ce3031e59690a71e93204e5b22e0e8b3721edb4c0: Status 404 returned error can't find the container with id 28016989818d60033aa9825ce3031e59690a71e93204e5b22e0e8b3721edb4c0 Dec 09 13:15:01 crc kubenswrapper[4970]: I1209 13:15:01.254262 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421435-96fqx" event={"ID":"61c328a1-9f96-43db-99d7-3db50f63896a","Type":"ContainerStarted","Data":"28016989818d60033aa9825ce3031e59690a71e93204e5b22e0e8b3721edb4c0"} Dec 09 13:15:01 crc kubenswrapper[4970]: I1209 13:15:01.322876 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4qpmr" Dec 09 13:15:01 crc kubenswrapper[4970]: I1209 13:15:01.381357 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4qpmr"] Dec 09 13:15:02 crc kubenswrapper[4970]: I1209 13:15:02.270985 4970 generic.go:334] "Generic (PLEG): container finished" podID="61c328a1-9f96-43db-99d7-3db50f63896a" containerID="ba5cec652b109afa96ad24541bc37acc791fdbc79ca0d3c533bd9b88bc08ac9a" exitCode=0 Dec 09 13:15:02 crc kubenswrapper[4970]: I1209 13:15:02.271638 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421435-96fqx" event={"ID":"61c328a1-9f96-43db-99d7-3db50f63896a","Type":"ContainerDied","Data":"ba5cec652b109afa96ad24541bc37acc791fdbc79ca0d3c533bd9b88bc08ac9a"} Dec 09 13:15:03 crc kubenswrapper[4970]: I1209 13:15:03.283916 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4qpmr" podUID="137bd751-88fa-4dfa-a6dc-917e7109b82e" containerName="registry-server" containerID="cri-o://5a7ac8cba795fb90f835f9571dd7335a4e871819ad5d44ba10e735f30ca59770" gracePeriod=2 Dec 09 13:15:03 crc kubenswrapper[4970]: I1209 13:15:03.721580 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421435-96fqx" Dec 09 13:15:03 crc kubenswrapper[4970]: I1209 13:15:03.806457 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q52s9\" (UniqueName: \"kubernetes.io/projected/61c328a1-9f96-43db-99d7-3db50f63896a-kube-api-access-q52s9\") pod \"61c328a1-9f96-43db-99d7-3db50f63896a\" (UID: \"61c328a1-9f96-43db-99d7-3db50f63896a\") " Dec 09 13:15:03 crc kubenswrapper[4970]: I1209 13:15:03.806654 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61c328a1-9f96-43db-99d7-3db50f63896a-config-volume\") pod \"61c328a1-9f96-43db-99d7-3db50f63896a\" (UID: \"61c328a1-9f96-43db-99d7-3db50f63896a\") " Dec 09 13:15:03 crc kubenswrapper[4970]: I1209 13:15:03.806723 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/61c328a1-9f96-43db-99d7-3db50f63896a-secret-volume\") pod \"61c328a1-9f96-43db-99d7-3db50f63896a\" (UID: \"61c328a1-9f96-43db-99d7-3db50f63896a\") " Dec 09 13:15:03 crc kubenswrapper[4970]: I1209 13:15:03.807936 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61c328a1-9f96-43db-99d7-3db50f63896a-config-volume" (OuterVolumeSpecName: "config-volume") pod "61c328a1-9f96-43db-99d7-3db50f63896a" (UID: "61c328a1-9f96-43db-99d7-3db50f63896a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 13:15:03 crc kubenswrapper[4970]: I1209 13:15:03.812724 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61c328a1-9f96-43db-99d7-3db50f63896a-kube-api-access-q52s9" (OuterVolumeSpecName: "kube-api-access-q52s9") pod "61c328a1-9f96-43db-99d7-3db50f63896a" (UID: "61c328a1-9f96-43db-99d7-3db50f63896a"). InnerVolumeSpecName "kube-api-access-q52s9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:15:03 crc kubenswrapper[4970]: I1209 13:15:03.813157 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61c328a1-9f96-43db-99d7-3db50f63896a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "61c328a1-9f96-43db-99d7-3db50f63896a" (UID: "61c328a1-9f96-43db-99d7-3db50f63896a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 13:15:03 crc kubenswrapper[4970]: I1209 13:15:03.910467 4970 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/61c328a1-9f96-43db-99d7-3db50f63896a-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 13:15:03 crc kubenswrapper[4970]: I1209 13:15:03.910590 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q52s9\" (UniqueName: \"kubernetes.io/projected/61c328a1-9f96-43db-99d7-3db50f63896a-kube-api-access-q52s9\") on node \"crc\" DevicePath \"\"" Dec 09 13:15:03 crc kubenswrapper[4970]: I1209 13:15:03.910650 4970 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61c328a1-9f96-43db-99d7-3db50f63896a-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 13:15:04 crc kubenswrapper[4970]: I1209 13:15:04.298109 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421435-96fqx" event={"ID":"61c328a1-9f96-43db-99d7-3db50f63896a","Type":"ContainerDied","Data":"28016989818d60033aa9825ce3031e59690a71e93204e5b22e0e8b3721edb4c0"} Dec 09 13:15:04 crc kubenswrapper[4970]: I1209 13:15:04.298150 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28016989818d60033aa9825ce3031e59690a71e93204e5b22e0e8b3721edb4c0" Dec 09 13:15:04 crc kubenswrapper[4970]: I1209 13:15:04.298198 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421435-96fqx" Dec 09 13:15:04 crc kubenswrapper[4970]: I1209 13:15:04.300991 4970 generic.go:334] "Generic (PLEG): container finished" podID="137bd751-88fa-4dfa-a6dc-917e7109b82e" containerID="5a7ac8cba795fb90f835f9571dd7335a4e871819ad5d44ba10e735f30ca59770" exitCode=0 Dec 09 13:15:04 crc kubenswrapper[4970]: I1209 13:15:04.301045 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4qpmr" event={"ID":"137bd751-88fa-4dfa-a6dc-917e7109b82e","Type":"ContainerDied","Data":"5a7ac8cba795fb90f835f9571dd7335a4e871819ad5d44ba10e735f30ca59770"} Dec 09 13:15:04 crc kubenswrapper[4970]: I1209 13:15:04.808810 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421390-2f5q2"] Dec 09 13:15:04 crc kubenswrapper[4970]: I1209 13:15:04.821160 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421390-2f5q2"] Dec 09 13:15:04 crc kubenswrapper[4970]: I1209 13:15:04.955938 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4qpmr" Dec 09 13:15:05 crc kubenswrapper[4970]: I1209 13:15:05.052347 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/137bd751-88fa-4dfa-a6dc-917e7109b82e-utilities\") pod \"137bd751-88fa-4dfa-a6dc-917e7109b82e\" (UID: \"137bd751-88fa-4dfa-a6dc-917e7109b82e\") " Dec 09 13:15:05 crc kubenswrapper[4970]: I1209 13:15:05.052641 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p28j9\" (UniqueName: \"kubernetes.io/projected/137bd751-88fa-4dfa-a6dc-917e7109b82e-kube-api-access-p28j9\") pod \"137bd751-88fa-4dfa-a6dc-917e7109b82e\" (UID: \"137bd751-88fa-4dfa-a6dc-917e7109b82e\") " Dec 09 13:15:05 crc kubenswrapper[4970]: I1209 13:15:05.052953 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/137bd751-88fa-4dfa-a6dc-917e7109b82e-catalog-content\") pod \"137bd751-88fa-4dfa-a6dc-917e7109b82e\" (UID: \"137bd751-88fa-4dfa-a6dc-917e7109b82e\") " Dec 09 13:15:05 crc kubenswrapper[4970]: I1209 13:15:05.053227 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/137bd751-88fa-4dfa-a6dc-917e7109b82e-utilities" (OuterVolumeSpecName: "utilities") pod "137bd751-88fa-4dfa-a6dc-917e7109b82e" (UID: "137bd751-88fa-4dfa-a6dc-917e7109b82e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:15:05 crc kubenswrapper[4970]: I1209 13:15:05.054025 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/137bd751-88fa-4dfa-a6dc-917e7109b82e-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 13:15:05 crc kubenswrapper[4970]: I1209 13:15:05.072573 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/137bd751-88fa-4dfa-a6dc-917e7109b82e-kube-api-access-p28j9" (OuterVolumeSpecName: "kube-api-access-p28j9") pod "137bd751-88fa-4dfa-a6dc-917e7109b82e" (UID: "137bd751-88fa-4dfa-a6dc-917e7109b82e"). InnerVolumeSpecName "kube-api-access-p28j9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:15:05 crc kubenswrapper[4970]: I1209 13:15:05.118098 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/137bd751-88fa-4dfa-a6dc-917e7109b82e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "137bd751-88fa-4dfa-a6dc-917e7109b82e" (UID: "137bd751-88fa-4dfa-a6dc-917e7109b82e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:15:05 crc kubenswrapper[4970]: I1209 13:15:05.157559 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p28j9\" (UniqueName: \"kubernetes.io/projected/137bd751-88fa-4dfa-a6dc-917e7109b82e-kube-api-access-p28j9\") on node \"crc\" DevicePath \"\"" Dec 09 13:15:05 crc kubenswrapper[4970]: I1209 13:15:05.157784 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/137bd751-88fa-4dfa-a6dc-917e7109b82e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 13:15:05 crc kubenswrapper[4970]: I1209 13:15:05.317589 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4qpmr" event={"ID":"137bd751-88fa-4dfa-a6dc-917e7109b82e","Type":"ContainerDied","Data":"7c0711d379e0b107693c7fe3c3ea6dec17a45f7633bbf26bc03489aab4f78498"} Dec 09 13:15:05 crc kubenswrapper[4970]: I1209 13:15:05.317661 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4qpmr" Dec 09 13:15:05 crc kubenswrapper[4970]: I1209 13:15:05.318815 4970 scope.go:117] "RemoveContainer" containerID="5a7ac8cba795fb90f835f9571dd7335a4e871819ad5d44ba10e735f30ca59770" Dec 09 13:15:05 crc kubenswrapper[4970]: I1209 13:15:05.365842 4970 scope.go:117] "RemoveContainer" containerID="0f504fc3255698e30a701cbc9dad753468f0f1fb9cc7d981b1af579b1fbb8c95" Dec 09 13:15:05 crc kubenswrapper[4970]: I1209 13:15:05.380295 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4qpmr"] Dec 09 13:15:05 crc kubenswrapper[4970]: I1209 13:15:05.399939 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4qpmr"] Dec 09 13:15:05 crc kubenswrapper[4970]: I1209 13:15:05.413826 4970 scope.go:117] "RemoveContainer" containerID="ceb22ef134fd7a30d6f2e15f81c4f1139db3e0244ac56d3f886bf5906b5d43b5" Dec 09 13:15:05 crc kubenswrapper[4970]: I1209 13:15:05.837517 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="137bd751-88fa-4dfa-a6dc-917e7109b82e" path="/var/lib/kubelet/pods/137bd751-88fa-4dfa-a6dc-917e7109b82e/volumes" Dec 09 13:15:05 crc kubenswrapper[4970]: I1209 13:15:05.839693 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59978ade-1525-4f29-908f-026970955862" path="/var/lib/kubelet/pods/59978ade-1525-4f29-908f-026970955862/volumes" Dec 09 13:15:07 crc kubenswrapper[4970]: E1209 13:15:07.836105 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:15:12 crc kubenswrapper[4970]: E1209 13:15:12.816640 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:15:16 crc kubenswrapper[4970]: I1209 13:15:16.010629 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 13:15:16 crc kubenswrapper[4970]: I1209 13:15:16.011416 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 13:15:16 crc kubenswrapper[4970]: I1209 13:15:16.051136 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84h7q"] Dec 09 13:15:16 crc kubenswrapper[4970]: E1209 13:15:16.051876 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="137bd751-88fa-4dfa-a6dc-917e7109b82e" containerName="extract-content" Dec 09 13:15:16 crc kubenswrapper[4970]: I1209 13:15:16.051910 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="137bd751-88fa-4dfa-a6dc-917e7109b82e" containerName="extract-content" Dec 09 13:15:16 crc kubenswrapper[4970]: E1209 13:15:16.051934 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61c328a1-9f96-43db-99d7-3db50f63896a" containerName="collect-profiles" Dec 09 13:15:16 crc kubenswrapper[4970]: I1209 13:15:16.051948 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="61c328a1-9f96-43db-99d7-3db50f63896a" containerName="collect-profiles" Dec 09 13:15:16 crc kubenswrapper[4970]: E1209 13:15:16.051970 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="137bd751-88fa-4dfa-a6dc-917e7109b82e" containerName="registry-server" Dec 09 13:15:16 crc kubenswrapper[4970]: I1209 13:15:16.051981 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="137bd751-88fa-4dfa-a6dc-917e7109b82e" containerName="registry-server" Dec 09 13:15:16 crc kubenswrapper[4970]: E1209 13:15:16.052032 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="137bd751-88fa-4dfa-a6dc-917e7109b82e" containerName="extract-utilities" Dec 09 13:15:16 crc kubenswrapper[4970]: I1209 13:15:16.052043 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="137bd751-88fa-4dfa-a6dc-917e7109b82e" containerName="extract-utilities" Dec 09 13:15:16 crc kubenswrapper[4970]: I1209 13:15:16.052484 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="137bd751-88fa-4dfa-a6dc-917e7109b82e" containerName="registry-server" Dec 09 13:15:16 crc kubenswrapper[4970]: I1209 13:15:16.052530 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="61c328a1-9f96-43db-99d7-3db50f63896a" containerName="collect-profiles" Dec 09 13:15:16 crc kubenswrapper[4970]: I1209 13:15:16.054013 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84h7q" Dec 09 13:15:16 crc kubenswrapper[4970]: I1209 13:15:16.057418 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 13:15:16 crc kubenswrapper[4970]: I1209 13:15:16.057890 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 13:15:16 crc kubenswrapper[4970]: I1209 13:15:16.058038 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2x2z5" Dec 09 13:15:16 crc kubenswrapper[4970]: I1209 13:15:16.058939 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 13:15:16 crc kubenswrapper[4970]: I1209 13:15:16.072828 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84h7q"] Dec 09 13:15:16 crc kubenswrapper[4970]: I1209 13:15:16.186505 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e86845e6-6bd5-4fb3-9a63-7c6f4d730644-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-84h7q\" (UID: \"e86845e6-6bd5-4fb3-9a63-7c6f4d730644\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84h7q" Dec 09 13:15:16 crc kubenswrapper[4970]: I1209 13:15:16.186565 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prxzh\" (UniqueName: \"kubernetes.io/projected/e86845e6-6bd5-4fb3-9a63-7c6f4d730644-kube-api-access-prxzh\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-84h7q\" (UID: \"e86845e6-6bd5-4fb3-9a63-7c6f4d730644\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84h7q" Dec 09 13:15:16 crc kubenswrapper[4970]: I1209 13:15:16.186591 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e86845e6-6bd5-4fb3-9a63-7c6f4d730644-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-84h7q\" (UID: \"e86845e6-6bd5-4fb3-9a63-7c6f4d730644\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84h7q" Dec 09 13:15:16 crc kubenswrapper[4970]: I1209 13:15:16.288649 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e86845e6-6bd5-4fb3-9a63-7c6f4d730644-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-84h7q\" (UID: \"e86845e6-6bd5-4fb3-9a63-7c6f4d730644\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84h7q" Dec 09 13:15:16 crc kubenswrapper[4970]: I1209 13:15:16.288959 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e86845e6-6bd5-4fb3-9a63-7c6f4d730644-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-84h7q\" (UID: \"e86845e6-6bd5-4fb3-9a63-7c6f4d730644\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84h7q" Dec 09 13:15:16 crc kubenswrapper[4970]: I1209 13:15:16.289012 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prxzh\" (UniqueName: \"kubernetes.io/projected/e86845e6-6bd5-4fb3-9a63-7c6f4d730644-kube-api-access-prxzh\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-84h7q\" (UID: \"e86845e6-6bd5-4fb3-9a63-7c6f4d730644\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84h7q" Dec 09 13:15:16 crc kubenswrapper[4970]: I1209 13:15:16.305152 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e86845e6-6bd5-4fb3-9a63-7c6f4d730644-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-84h7q\" (UID: \"e86845e6-6bd5-4fb3-9a63-7c6f4d730644\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84h7q" Dec 09 13:15:16 crc kubenswrapper[4970]: I1209 13:15:16.305153 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e86845e6-6bd5-4fb3-9a63-7c6f4d730644-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-84h7q\" (UID: \"e86845e6-6bd5-4fb3-9a63-7c6f4d730644\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84h7q" Dec 09 13:15:16 crc kubenswrapper[4970]: I1209 13:15:16.314815 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prxzh\" (UniqueName: \"kubernetes.io/projected/e86845e6-6bd5-4fb3-9a63-7c6f4d730644-kube-api-access-prxzh\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-84h7q\" (UID: \"e86845e6-6bd5-4fb3-9a63-7c6f4d730644\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84h7q" Dec 09 13:15:16 crc kubenswrapper[4970]: I1209 13:15:16.402158 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84h7q" Dec 09 13:15:17 crc kubenswrapper[4970]: I1209 13:15:17.053882 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84h7q"] Dec 09 13:15:17 crc kubenswrapper[4970]: W1209 13:15:17.060462 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode86845e6_6bd5_4fb3_9a63_7c6f4d730644.slice/crio-bd14d9904650a85e5cebc0938dfe778ef65d278c35796f135ccaac15439b3c0a WatchSource:0}: Error finding container bd14d9904650a85e5cebc0938dfe778ef65d278c35796f135ccaac15439b3c0a: Status 404 returned error can't find the container with id bd14d9904650a85e5cebc0938dfe778ef65d278c35796f135ccaac15439b3c0a Dec 09 13:15:17 crc kubenswrapper[4970]: I1209 13:15:17.507710 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84h7q" event={"ID":"e86845e6-6bd5-4fb3-9a63-7c6f4d730644","Type":"ContainerStarted","Data":"bd14d9904650a85e5cebc0938dfe778ef65d278c35796f135ccaac15439b3c0a"} Dec 09 13:15:18 crc kubenswrapper[4970]: I1209 13:15:18.522684 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84h7q" event={"ID":"e86845e6-6bd5-4fb3-9a63-7c6f4d730644","Type":"ContainerStarted","Data":"3ff300e347c5cdd4657ead74ca2982c626bdbbd1420b49a53af3c0fad1d9d760"} Dec 09 13:15:18 crc kubenswrapper[4970]: I1209 13:15:18.568001 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84h7q" podStartSLOduration=2.13667221 podStartE2EDuration="2.567808582s" podCreationTimestamp="2025-12-09 13:15:16 +0000 UTC" firstStartedPulling="2025-12-09 13:15:17.062950984 +0000 UTC m=+4129.623432035" lastFinishedPulling="2025-12-09 
13:15:17.494087336 +0000 UTC m=+4130.054568407" observedRunningTime="2025-12-09 13:15:18.551074842 +0000 UTC m=+4131.111555923" watchObservedRunningTime="2025-12-09 13:15:18.567808582 +0000 UTC m=+4131.128289643" Dec 09 13:15:20 crc kubenswrapper[4970]: E1209 13:15:20.814762 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:15:24 crc kubenswrapper[4970]: E1209 13:15:24.818392 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:15:33 crc kubenswrapper[4970]: E1209 13:15:33.815332 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:15:35 crc kubenswrapper[4970]: E1209 13:15:35.814716 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:15:42 crc kubenswrapper[4970]: I1209 13:15:42.422322 4970 scope.go:117] "RemoveContainer" containerID="1a3152fd12a89b1ff62feb46e9c0713323d18ae205387f4679e6d5a4dc1e77cb" Dec 09 13:15:44 crc kubenswrapper[4970]: E1209 13:15:44.819613 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:15:46 crc kubenswrapper[4970]: I1209 13:15:46.010634 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 13:15:46 crc kubenswrapper[4970]: I1209 13:15:46.010957 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 13:15:46 crc kubenswrapper[4970]: I1209 13:15:46.011000 4970 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" Dec 09 13:15:46 crc kubenswrapper[4970]: I1209 13:15:46.011976 4970 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"337e0c068266381b4f247b6f01ee6c0c8d836da5e7c6980083c2bc38ba22673b"} pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 13:15:46 crc kubenswrapper[4970]: I1209 13:15:46.012039 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" containerID="cri-o://337e0c068266381b4f247b6f01ee6c0c8d836da5e7c6980083c2bc38ba22673b" gracePeriod=600 Dec 09 13:15:46 crc kubenswrapper[4970]: I1209 13:15:46.926157 4970 generic.go:334] "Generic (PLEG): container finished" podID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerID="337e0c068266381b4f247b6f01ee6c0c8d836da5e7c6980083c2bc38ba22673b" exitCode=0 Dec 09 13:15:46 crc kubenswrapper[4970]: I1209 13:15:46.926656 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerDied","Data":"337e0c068266381b4f247b6f01ee6c0c8d836da5e7c6980083c2bc38ba22673b"} Dec 09 13:15:46 crc kubenswrapper[4970]: I1209 13:15:46.926687 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerStarted","Data":"336872a9cff893b34e07ba7a020e9f30d9d1d6d2a855427e26f26eb12b833ad2"} Dec 09 13:15:46 crc kubenswrapper[4970]: I1209 13:15:46.926712 4970 scope.go:117] "RemoveContainer" containerID="b52590bfcf83aef355cce3d9df54a920b86a238618ae472d4c9d5510ccd01139" Dec 09 13:15:49 crc kubenswrapper[4970]: E1209 13:15:49.815967 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:15:58 crc kubenswrapper[4970]: E1209 13:15:58.816157 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:16:00 crc kubenswrapper[4970]: E1209 13:16:00.816276 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:16:08 crc kubenswrapper[4970]: I1209 13:16:08.592005 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hrvtw"] Dec 09 13:16:08 crc kubenswrapper[4970]: I1209 13:16:08.595718 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hrvtw" Dec 09 13:16:08 crc kubenswrapper[4970]: I1209 13:16:08.604116 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hrvtw"] Dec 09 13:16:08 crc kubenswrapper[4970]: I1209 13:16:08.628644 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83a7d87f-c893-46cf-a7f4-ff33c2917d16-utilities\") pod \"community-operators-hrvtw\" (UID: \"83a7d87f-c893-46cf-a7f4-ff33c2917d16\") " pod="openshift-marketplace/community-operators-hrvtw" Dec 09 13:16:08 crc kubenswrapper[4970]: I1209 13:16:08.628732 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb5xg\" (UniqueName: \"kubernetes.io/projected/83a7d87f-c893-46cf-a7f4-ff33c2917d16-kube-api-access-jb5xg\") pod \"community-operators-hrvtw\" (UID: \"83a7d87f-c893-46cf-a7f4-ff33c2917d16\") " pod="openshift-marketplace/community-operators-hrvtw" Dec 09 13:16:08 crc kubenswrapper[4970]: I1209 13:16:08.628850 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83a7d87f-c893-46cf-a7f4-ff33c2917d16-catalog-content\") pod \"community-operators-hrvtw\" (UID: \"83a7d87f-c893-46cf-a7f4-ff33c2917d16\") " pod="openshift-marketplace/community-operators-hrvtw" Dec 09 13:16:08 crc kubenswrapper[4970]: I1209 13:16:08.731566 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83a7d87f-c893-46cf-a7f4-ff33c2917d16-utilities\") pod \"community-operators-hrvtw\" (UID: \"83a7d87f-c893-46cf-a7f4-ff33c2917d16\") " pod="openshift-marketplace/community-operators-hrvtw" Dec 09 13:16:08 crc kubenswrapper[4970]: I1209 13:16:08.731831 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb5xg\" (UniqueName: \"kubernetes.io/projected/83a7d87f-c893-46cf-a7f4-ff33c2917d16-kube-api-access-jb5xg\") pod \"community-operators-hrvtw\" (UID: \"83a7d87f-c893-46cf-a7f4-ff33c2917d16\") " pod="openshift-marketplace/community-operators-hrvtw" Dec 09 13:16:08 crc kubenswrapper[4970]: I1209 13:16:08.731890 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83a7d87f-c893-46cf-a7f4-ff33c2917d16-catalog-content\") pod \"community-operators-hrvtw\" (UID: \"83a7d87f-c893-46cf-a7f4-ff33c2917d16\") " pod="openshift-marketplace/community-operators-hrvtw" Dec 09 13:16:08 crc kubenswrapper[4970]: I1209 13:16:08.732024 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83a7d87f-c893-46cf-a7f4-ff33c2917d16-utilities\") pod \"community-operators-hrvtw\" (UID: \"83a7d87f-c893-46cf-a7f4-ff33c2917d16\") " pod="openshift-marketplace/community-operators-hrvtw" Dec 09 13:16:08 crc kubenswrapper[4970]: I1209 13:16:08.732372 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83a7d87f-c893-46cf-a7f4-ff33c2917d16-catalog-content\") pod \"community-operators-hrvtw\" (UID: \"83a7d87f-c893-46cf-a7f4-ff33c2917d16\") " pod="openshift-marketplace/community-operators-hrvtw" Dec 09 13:16:08 crc kubenswrapper[4970]: I1209 13:16:08.749981 4970 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jb5xg\" (UniqueName: \"kubernetes.io/projected/83a7d87f-c893-46cf-a7f4-ff33c2917d16-kube-api-access-jb5xg\") pod \"community-operators-hrvtw\" (UID: \"83a7d87f-c893-46cf-a7f4-ff33c2917d16\") " pod="openshift-marketplace/community-operators-hrvtw" Dec 09 13:16:08 crc kubenswrapper[4970]: I1209 13:16:08.936560 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hrvtw" Dec 09 13:16:09 crc kubenswrapper[4970]: I1209 13:16:09.602564 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hrvtw"] Dec 09 13:16:09 crc kubenswrapper[4970]: W1209 13:16:09.602633 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83a7d87f_c893_46cf_a7f4_ff33c2917d16.slice/crio-e79ddae331016e59da7f44fff5acab6fd80e8bab92d5ec22e32fe631f62ebc49 WatchSource:0}: Error finding container e79ddae331016e59da7f44fff5acab6fd80e8bab92d5ec22e32fe631f62ebc49: Status 404 returned error can't find the container with id e79ddae331016e59da7f44fff5acab6fd80e8bab92d5ec22e32fe631f62ebc49 Dec 09 13:16:09 crc kubenswrapper[4970]: E1209 13:16:09.813594 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:16:10 crc kubenswrapper[4970]: I1209 13:16:10.234560 4970 generic.go:334] "Generic (PLEG): container finished" podID="83a7d87f-c893-46cf-a7f4-ff33c2917d16" containerID="8234c04337313b8528108b0478f8e0e1393aece9387f988a1cffad8785f1f047" exitCode=0 Dec 09 13:16:10 crc kubenswrapper[4970]: I1209 13:16:10.234631 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hrvtw" event={"ID":"83a7d87f-c893-46cf-a7f4-ff33c2917d16","Type":"ContainerDied","Data":"8234c04337313b8528108b0478f8e0e1393aece9387f988a1cffad8785f1f047"} Dec 09 13:16:10 crc kubenswrapper[4970]: I1209 13:16:10.234897 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hrvtw" event={"ID":"83a7d87f-c893-46cf-a7f4-ff33c2917d16","Type":"ContainerStarted","Data":"e79ddae331016e59da7f44fff5acab6fd80e8bab92d5ec22e32fe631f62ebc49"} Dec 09 13:16:12 crc kubenswrapper[4970]: I1209 13:16:12.257072 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hrvtw" event={"ID":"83a7d87f-c893-46cf-a7f4-ff33c2917d16","Type":"ContainerStarted","Data":"93caf0e8fbf976a5aa2e9cc4d7a0e25e8524548811c62ea01247c479575edbdb"} Dec 09 13:16:13 crc kubenswrapper[4970]: I1209 13:16:13.275355 4970 generic.go:334] "Generic (PLEG): container finished" podID="83a7d87f-c893-46cf-a7f4-ff33c2917d16" containerID="93caf0e8fbf976a5aa2e9cc4d7a0e25e8524548811c62ea01247c479575edbdb" exitCode=0 Dec 09 13:16:13 crc kubenswrapper[4970]: I1209 13:16:13.275429 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hrvtw" event={"ID":"83a7d87f-c893-46cf-a7f4-ff33c2917d16","Type":"ContainerDied","Data":"93caf0e8fbf976a5aa2e9cc4d7a0e25e8524548811c62ea01247c479575edbdb"} Dec 09 13:16:13 crc kubenswrapper[4970]: E1209 13:16:13.818025 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:16:14 crc kubenswrapper[4970]: I1209 13:16:14.292492 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hrvtw" event={"ID":"83a7d87f-c893-46cf-a7f4-ff33c2917d16","Type":"ContainerStarted","Data":"a8304287b861c77f97cd23553deeb11f6121032879d827b121c1723e0b5a13f1"} Dec 09 13:16:14 crc kubenswrapper[4970]: I1209 13:16:14.333940 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hrvtw" podStartSLOduration=2.789970308 podStartE2EDuration="6.333915231s" podCreationTimestamp="2025-12-09 13:16:08 +0000 UTC" firstStartedPulling="2025-12-09 13:16:10.237094839 +0000 UTC m=+4182.797575890" lastFinishedPulling="2025-12-09 13:16:13.781039742 +0000 UTC m=+4186.341520813" observedRunningTime="2025-12-09 13:16:14.316496862 +0000 UTC m=+4186.876977923" watchObservedRunningTime="2025-12-09 13:16:14.333915231 +0000 UTC m=+4186.894396292" Dec 09 13:16:18 crc kubenswrapper[4970]: I1209 13:16:18.937357 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hrvtw" Dec 09 13:16:18 crc kubenswrapper[4970]: I1209 13:16:18.938157 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hrvtw" Dec 09 13:16:19 crc kubenswrapper[4970]: I1209 13:16:19.076585 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hrvtw" Dec 09 13:16:19 crc kubenswrapper[4970]: I1209 13:16:19.413741 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hrvtw" Dec 09 13:16:19 crc kubenswrapper[4970]: I1209 13:16:19.462265 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hrvtw"] Dec 09 13:16:21 crc kubenswrapper[4970]: I1209 13:16:21.385515 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hrvtw" podUID="83a7d87f-c893-46cf-a7f4-ff33c2917d16" containerName="registry-server" containerID="cri-o://a8304287b861c77f97cd23553deeb11f6121032879d827b121c1723e0b5a13f1" gracePeriod=2 Dec 09 13:16:21 crc kubenswrapper[4970]: I1209 13:16:21.965635 4970 util.go:48] "No ready sandbox for pod can be found. 
Dec 09 13:16:22 crc kubenswrapper[4970]: I1209 13:16:22.088305 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83a7d87f-c893-46cf-a7f4-ff33c2917d16-utilities\") pod \"83a7d87f-c893-46cf-a7f4-ff33c2917d16\" (UID: \"83a7d87f-c893-46cf-a7f4-ff33c2917d16\") "
Dec 09 13:16:22 crc kubenswrapper[4970]: I1209 13:16:22.088620 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83a7d87f-c893-46cf-a7f4-ff33c2917d16-catalog-content\") pod \"83a7d87f-c893-46cf-a7f4-ff33c2917d16\" (UID: \"83a7d87f-c893-46cf-a7f4-ff33c2917d16\") "
Dec 09 13:16:22 crc kubenswrapper[4970]: I1209 13:16:22.088721 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jb5xg\" (UniqueName: \"kubernetes.io/projected/83a7d87f-c893-46cf-a7f4-ff33c2917d16-kube-api-access-jb5xg\") pod \"83a7d87f-c893-46cf-a7f4-ff33c2917d16\" (UID: \"83a7d87f-c893-46cf-a7f4-ff33c2917d16\") "
Dec 09 13:16:22 crc kubenswrapper[4970]: I1209 13:16:22.089220 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83a7d87f-c893-46cf-a7f4-ff33c2917d16-utilities" (OuterVolumeSpecName: "utilities") pod "83a7d87f-c893-46cf-a7f4-ff33c2917d16" (UID: "83a7d87f-c893-46cf-a7f4-ff33c2917d16"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 13:16:22 crc kubenswrapper[4970]: I1209 13:16:22.090209 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83a7d87f-c893-46cf-a7f4-ff33c2917d16-utilities\") on node \"crc\" DevicePath \"\""
Dec 09 13:16:22 crc kubenswrapper[4970]: I1209 13:16:22.094734 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83a7d87f-c893-46cf-a7f4-ff33c2917d16-kube-api-access-jb5xg" (OuterVolumeSpecName: "kube-api-access-jb5xg") pod "83a7d87f-c893-46cf-a7f4-ff33c2917d16" (UID: "83a7d87f-c893-46cf-a7f4-ff33c2917d16"). InnerVolumeSpecName "kube-api-access-jb5xg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 13:16:22 crc kubenswrapper[4970]: I1209 13:16:22.150518 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83a7d87f-c893-46cf-a7f4-ff33c2917d16-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83a7d87f-c893-46cf-a7f4-ff33c2917d16" (UID: "83a7d87f-c893-46cf-a7f4-ff33c2917d16"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 13:16:22 crc kubenswrapper[4970]: I1209 13:16:22.192162 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83a7d87f-c893-46cf-a7f4-ff33c2917d16-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 09 13:16:22 crc kubenswrapper[4970]: I1209 13:16:22.192209 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jb5xg\" (UniqueName: \"kubernetes.io/projected/83a7d87f-c893-46cf-a7f4-ff33c2917d16-kube-api-access-jb5xg\") on node \"crc\" DevicePath \"\""
Dec 09 13:16:22 crc kubenswrapper[4970]: I1209 13:16:22.399591 4970 generic.go:334] "Generic (PLEG): container finished" podID="83a7d87f-c893-46cf-a7f4-ff33c2917d16" containerID="a8304287b861c77f97cd23553deeb11f6121032879d827b121c1723e0b5a13f1" exitCode=0
Dec 09 13:16:22 crc kubenswrapper[4970]: I1209 13:16:22.399729 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hrvtw" event={"ID":"83a7d87f-c893-46cf-a7f4-ff33c2917d16","Type":"ContainerDied","Data":"a8304287b861c77f97cd23553deeb11f6121032879d827b121c1723e0b5a13f1"}
Dec 09 13:16:22 crc kubenswrapper[4970]: I1209 13:16:22.399919 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hrvtw" event={"ID":"83a7d87f-c893-46cf-a7f4-ff33c2917d16","Type":"ContainerDied","Data":"e79ddae331016e59da7f44fff5acab6fd80e8bab92d5ec22e32fe631f62ebc49"}
Dec 09 13:16:22 crc kubenswrapper[4970]: I1209 13:16:22.399791 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hrvtw"
Dec 09 13:16:22 crc kubenswrapper[4970]: I1209 13:16:22.399937 4970 scope.go:117] "RemoveContainer" containerID="a8304287b861c77f97cd23553deeb11f6121032879d827b121c1723e0b5a13f1"
Dec 09 13:16:22 crc kubenswrapper[4970]: I1209 13:16:22.451443 4970 scope.go:117] "RemoveContainer" containerID="93caf0e8fbf976a5aa2e9cc4d7a0e25e8524548811c62ea01247c479575edbdb"
Dec 09 13:16:22 crc kubenswrapper[4970]: I1209 13:16:22.459709 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hrvtw"]
Dec 09 13:16:22 crc kubenswrapper[4970]: I1209 13:16:22.472404 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hrvtw"]
Dec 09 13:16:22 crc kubenswrapper[4970]: I1209 13:16:22.482824 4970 scope.go:117] "RemoveContainer" containerID="8234c04337313b8528108b0478f8e0e1393aece9387f988a1cffad8785f1f047"
Dec 09 13:16:22 crc kubenswrapper[4970]: I1209 13:16:22.546572 4970 scope.go:117] "RemoveContainer" containerID="a8304287b861c77f97cd23553deeb11f6121032879d827b121c1723e0b5a13f1"
Dec 09 13:16:22 crc kubenswrapper[4970]: E1209 13:16:22.547182 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8304287b861c77f97cd23553deeb11f6121032879d827b121c1723e0b5a13f1\": container with ID starting with a8304287b861c77f97cd23553deeb11f6121032879d827b121c1723e0b5a13f1 not found: ID does not exist" containerID="a8304287b861c77f97cd23553deeb11f6121032879d827b121c1723e0b5a13f1"
Dec 09 13:16:22 crc kubenswrapper[4970]: I1209 13:16:22.547242 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8304287b861c77f97cd23553deeb11f6121032879d827b121c1723e0b5a13f1"} err="failed to get container status \"a8304287b861c77f97cd23553deeb11f6121032879d827b121c1723e0b5a13f1\": rpc error: code = NotFound desc = could not find container \"a8304287b861c77f97cd23553deeb11f6121032879d827b121c1723e0b5a13f1\": container with ID starting with a8304287b861c77f97cd23553deeb11f6121032879d827b121c1723e0b5a13f1 not found: ID does not exist"
Dec 09 13:16:22 crc kubenswrapper[4970]: I1209 13:16:22.547309 4970 scope.go:117] "RemoveContainer" containerID="93caf0e8fbf976a5aa2e9cc4d7a0e25e8524548811c62ea01247c479575edbdb"
Dec 09 13:16:22 crc kubenswrapper[4970]: E1209 13:16:22.547684 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93caf0e8fbf976a5aa2e9cc4d7a0e25e8524548811c62ea01247c479575edbdb\": container with ID starting with 93caf0e8fbf976a5aa2e9cc4d7a0e25e8524548811c62ea01247c479575edbdb not found: ID does not exist" containerID="93caf0e8fbf976a5aa2e9cc4d7a0e25e8524548811c62ea01247c479575edbdb"
Dec 09 13:16:22 crc kubenswrapper[4970]: I1209 13:16:22.547744 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93caf0e8fbf976a5aa2e9cc4d7a0e25e8524548811c62ea01247c479575edbdb"} err="failed to get container status \"93caf0e8fbf976a5aa2e9cc4d7a0e25e8524548811c62ea01247c479575edbdb\": rpc error: code = NotFound desc = could not find container \"93caf0e8fbf976a5aa2e9cc4d7a0e25e8524548811c62ea01247c479575edbdb\": container with ID starting with 93caf0e8fbf976a5aa2e9cc4d7a0e25e8524548811c62ea01247c479575edbdb not found: ID does not exist"
Dec 09 13:16:22 crc kubenswrapper[4970]: I1209 13:16:22.547781 4970 scope.go:117] "RemoveContainer" containerID="8234c04337313b8528108b0478f8e0e1393aece9387f988a1cffad8785f1f047"
Dec 09 13:16:22 crc kubenswrapper[4970]: E1209 13:16:22.548159 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8234c04337313b8528108b0478f8e0e1393aece9387f988a1cffad8785f1f047\": container with ID starting with 8234c04337313b8528108b0478f8e0e1393aece9387f988a1cffad8785f1f047 not found: ID does not exist" containerID="8234c04337313b8528108b0478f8e0e1393aece9387f988a1cffad8785f1f047"
Dec 09 13:16:22 crc kubenswrapper[4970]: I1209 13:16:22.548200 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8234c04337313b8528108b0478f8e0e1393aece9387f988a1cffad8785f1f047"} err="failed to get container status \"8234c04337313b8528108b0478f8e0e1393aece9387f988a1cffad8785f1f047\": rpc error: code = NotFound desc = could not find container \"8234c04337313b8528108b0478f8e0e1393aece9387f988a1cffad8785f1f047\": container with ID starting with 8234c04337313b8528108b0478f8e0e1393aece9387f988a1cffad8785f1f047 not found: ID does not exist"
Dec 09 13:16:22 crc kubenswrapper[4970]: E1209 13:16:22.816049 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3"
Dec 09 13:16:23 crc kubenswrapper[4970]: I1209 13:16:23.836216 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83a7d87f-c893-46cf-a7f4-ff33c2917d16" path="/var/lib/kubelet/pods/83a7d87f-c893-46cf-a7f4-ff33c2917d16/volumes"
skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:16:36 crc kubenswrapper[4970]: E1209 13:16:36.817025 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:16:39 crc kubenswrapper[4970]: E1209 13:16:39.814519 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:16:47 crc kubenswrapper[4970]: E1209 13:16:47.822962 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:16:52 crc kubenswrapper[4970]: E1209 13:16:52.818066 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:16:59 crc kubenswrapper[4970]: E1209 13:16:59.814774 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:17:06 crc kubenswrapper[4970]: E1209 13:17:06.815579 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:17:13 crc kubenswrapper[4970]: E1209 13:17:13.817201 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:17:18 crc kubenswrapper[4970]: E1209 13:17:18.816765 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:17:25 crc 
kubenswrapper[4970]: E1209 13:17:25.817312 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:17:33 crc kubenswrapper[4970]: E1209 13:17:33.816832 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:17:40 crc kubenswrapper[4970]: E1209 13:17:40.813672 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:17:45 crc kubenswrapper[4970]: E1209 13:17:45.817957 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:17:46 crc kubenswrapper[4970]: I1209 13:17:46.010655 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 13:17:46 crc kubenswrapper[4970]: I1209 13:17:46.010738 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 13:17:55 crc kubenswrapper[4970]: E1209 13:17:55.814418 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:18:00 crc kubenswrapper[4970]: E1209 13:18:00.817286 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:18:09 crc kubenswrapper[4970]: E1209 13:18:09.816178 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:18:11 
crc kubenswrapper[4970]: E1209 13:18:11.815014 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:18:16 crc kubenswrapper[4970]: I1209 13:18:16.011094 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 13:18:16 crc kubenswrapper[4970]: I1209 13:18:16.011815 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 13:18:23 crc kubenswrapper[4970]: E1209 13:18:23.816863 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:18:24 crc kubenswrapper[4970]: E1209 13:18:24.816692 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:18:35 crc kubenswrapper[4970]: E1209 13:18:35.814779 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:18:37 crc kubenswrapper[4970]: E1209 13:18:37.822197 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:18:46 crc kubenswrapper[4970]: I1209 13:18:46.011340 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 13:18:46 crc kubenswrapper[4970]: I1209 13:18:46.012188 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 13:18:46 crc kubenswrapper[4970]: I1209 
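The liveness failures above are plain TCP refusals: nothing is listening on 127.0.0.1:8798 while the machine-config-daemon container is down, so the probe's GET never even reaches an HTTP server. A minimal stand-in for such a /health endpoint (hypothetical code; only the port and path are taken from the log, this is not the machine-config-daemon's implementation):

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	// While this process is not running, a GET to the probed URL fails
	// with "connect: connection refused" -- exactly the probe output
	// recorded in the entries above.
	http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK) // 200 => liveness probe passes
	})
	log.Fatal(http.ListenAndServe("127.0.0.1:8798", nil))
}
```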
Dec 09 13:18:46 crc kubenswrapper[4970]: I1209 13:18:46.012265 4970 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh"
Dec 09 13:18:46 crc kubenswrapper[4970]: I1209 13:18:46.013292 4970 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"336872a9cff893b34e07ba7a020e9f30d9d1d6d2a855427e26f26eb12b833ad2"} pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 09 13:18:46 crc kubenswrapper[4970]: I1209 13:18:46.013356 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" containerID="cri-o://336872a9cff893b34e07ba7a020e9f30d9d1d6d2a855427e26f26eb12b833ad2" gracePeriod=600
Dec 09 13:18:46 crc kubenswrapper[4970]: E1209 13:18:46.140954 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c"
Dec 09 13:18:46 crc kubenswrapper[4970]: I1209 13:18:46.299762 4970 generic.go:334] "Generic (PLEG): container finished" podID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerID="336872a9cff893b34e07ba7a020e9f30d9d1d6d2a855427e26f26eb12b833ad2" exitCode=0
Dec 09 13:18:46 crc kubenswrapper[4970]: I1209 13:18:46.299810 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerDied","Data":"336872a9cff893b34e07ba7a020e9f30d9d1d6d2a855427e26f26eb12b833ad2"}
Dec 09 13:18:46 crc kubenswrapper[4970]: I1209 13:18:46.299847 4970 scope.go:117] "RemoveContainer" containerID="337e0c068266381b4f247b6f01ee6c0c8d836da5e7c6980083c2bc38ba22673b"
Dec 09 13:18:46 crc kubenswrapper[4970]: I1209 13:18:46.301382 4970 scope.go:117] "RemoveContainer" containerID="336872a9cff893b34e07ba7a020e9f30d9d1d6d2a855427e26f26eb12b833ad2"
Dec 09 13:18:46 crc kubenswrapper[4970]: E1209 13:18:46.302690 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c"
Dec 09 13:18:47 crc kubenswrapper[4970]: I1209 13:18:47.821765 4970 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 09 13:18:47 crc kubenswrapper[4970]: E1209 13:18:47.952829 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Dec 09 13:18:47 crc kubenswrapper[4970]: E1209 13:18:47.953240 4970 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Dec 09 13:18:47 crc kubenswrapper[4970]: E1209 13:18:47.953365 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w5gl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-snjvx_openstack(cda5b8c0-51ad-4a63-bf1d-e3546f098ad3): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Dec 09 13:18:47 crc kubenswrapper[4970]: E1209 13:18:47.954513 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3"
Dec 09 13:18:52 crc kubenswrapper[4970]: E1209 13:18:52.816007 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8"
Dec 09 13:18:58 crc kubenswrapper[4970]: E1209 13:18:58.816577 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3"
Dec 09 13:18:59 crc kubenswrapper[4970]: I1209 13:18:59.813913 4970 scope.go:117] "RemoveContainer" containerID="336872a9cff893b34e07ba7a020e9f30d9d1d6d2a855427e26f26eb12b833ad2"
Dec 09 13:18:59 crc kubenswrapper[4970]: E1209 13:18:59.815241 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c"
Dec 09 13:19:05 crc kubenswrapper[4970]: E1209 13:19:05.817431 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8"
Dec 09 13:19:12 crc kubenswrapper[4970]: I1209 13:19:12.813698 4970 scope.go:117] "RemoveContainer" containerID="336872a9cff893b34e07ba7a020e9f30d9d1d6d2a855427e26f26eb12b833ad2"
Dec 09 13:19:12 crc kubenswrapper[4970]: E1209 13:19:12.819532 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3"
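The &Container{...} blob inside the UnhandledError entry above is the serialized core/v1 Container spec for heat-db-sync. For readability, the same spec rewritten as Go using the k8s.io/api types; every value below is copied from the dump, with the volume mounts abridged as noted:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func heatDBSyncContainer() corev1.Container {
	runAs := int64(42418)
	nonRoot, noEscalate := true, false
	return corev1.Container{
		Name:    "heat-db-sync",
		Image:   "quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested",
		Command: []string{"/bin/bash"},
		Args:    []string{"-c", "/usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync"},
		Env: []corev1.EnvVar{
			{Name: "KOLLA_BOOTSTRAP", Value: "true"},
			{Name: "KOLLA_CONFIG_STRATEGY", Value: "COPY_ALWAYS"},
		},
		ImagePullPolicy: corev1.PullIfNotPresent,
		SecurityContext: &corev1.SecurityContext{
			Capabilities:             &corev1.Capabilities{Drop: []corev1.Capability{"ALL", "MKNOD"}},
			RunAsUser:                &runAs,
			RunAsGroup:               &runAs,
			RunAsNonRoot:             &nonRoot,
			AllowPrivilegeEscalation: &noEscalate,
		},
		// VolumeMounts (config-data, combined-ca-bundle, the service
		// account token) omitted here; see the dump above for the full list.
	}
}

func main() {
	fmt.Println(heatDBSyncContainer().Image)
}
```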
podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:19:18 crc kubenswrapper[4970]: E1209 13:19:18.817709 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:19:23 crc kubenswrapper[4970]: E1209 13:19:23.818177 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:19:25 crc kubenswrapper[4970]: I1209 13:19:25.813405 4970 scope.go:117] "RemoveContainer" containerID="336872a9cff893b34e07ba7a020e9f30d9d1d6d2a855427e26f26eb12b833ad2" Dec 09 13:19:25 crc kubenswrapper[4970]: E1209 13:19:25.814961 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:19:29 crc kubenswrapper[4970]: E1209 13:19:29.953089 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 13:19:29 crc kubenswrapper[4970]: E1209 13:19:29.953861 4970 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 13:19:29 crc kubenswrapper[4970]: E1209 13:19:29.954074 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n679h5b7h5f8h695h656h57bh5c7h7bh5c5hc4h5hbch654h5h5b7h54dh5fh688h548h57bh6fh54fh5d4hf7h5ffh54ch566h565hdchb4h65dh5fbq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqqs9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ea52f6b9-599e-4ac5-94c6-79949c705be8): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 13:19:29 crc kubenswrapper[4970]: E1209 13:19:29.955394 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
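The ceilometer-central-agent dump above carries its liveness probe inline: an exec probe that would only start 300 seconds after the container does (it never runs here, since the image pull fails). The same probe in core/v1 terms, with every value copied from the dump:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func ceilometerLivenessProbe() *corev1.Probe {
	return &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			Exec: &corev1.ExecAction{
				Command: []string{"/usr/bin/python3", "/var/lib/openstack/bin/centralhealth.py"},
			},
		},
		InitialDelaySeconds: 300, // probing begins 5m after container start
		TimeoutSeconds:      5,
		PeriodSeconds:       5,
		SuccessThreshold:    1,
		FailureThreshold:    3, // three consecutive misses => restart
	}
}

func main() {
	fmt.Printf("%+v\n", ceilometerLivenessProbe())
}
```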
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:19:37 crc kubenswrapper[4970]: I1209 13:19:37.826997 4970 scope.go:117] "RemoveContainer" containerID="336872a9cff893b34e07ba7a020e9f30d9d1d6d2a855427e26f26eb12b833ad2" Dec 09 13:19:37 crc kubenswrapper[4970]: E1209 13:19:37.828124 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:19:38 crc kubenswrapper[4970]: E1209 13:19:38.815960 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:19:42 crc kubenswrapper[4970]: E1209 13:19:42.817439 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:19:51 crc kubenswrapper[4970]: I1209 13:19:51.813322 4970 scope.go:117] "RemoveContainer" containerID="336872a9cff893b34e07ba7a020e9f30d9d1d6d2a855427e26f26eb12b833ad2" Dec 09 13:19:51 crc kubenswrapper[4970]: E1209 13:19:51.814519 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:19:51 crc kubenswrapper[4970]: E1209 13:19:51.816988 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:19:54 crc kubenswrapper[4970]: E1209 13:19:54.819032 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:19:56 crc kubenswrapper[4970]: I1209 13:19:56.682974 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9z554"] Dec 09 13:19:56 crc kubenswrapper[4970]: E1209 13:19:56.684737 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83a7d87f-c893-46cf-a7f4-ff33c2917d16" containerName="extract-content" Dec 09 13:19:56 crc kubenswrapper[4970]: I1209 13:19:56.684771 4970 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="83a7d87f-c893-46cf-a7f4-ff33c2917d16" containerName="extract-content" Dec 09 13:19:56 crc kubenswrapper[4970]: E1209 13:19:56.684842 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83a7d87f-c893-46cf-a7f4-ff33c2917d16" containerName="extract-utilities" Dec 09 13:19:56 crc kubenswrapper[4970]: I1209 13:19:56.684861 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="83a7d87f-c893-46cf-a7f4-ff33c2917d16" containerName="extract-utilities" Dec 09 13:19:56 crc kubenswrapper[4970]: E1209 13:19:56.684911 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83a7d87f-c893-46cf-a7f4-ff33c2917d16" containerName="registry-server" Dec 09 13:19:56 crc kubenswrapper[4970]: I1209 13:19:56.684929 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="83a7d87f-c893-46cf-a7f4-ff33c2917d16" containerName="registry-server" Dec 09 13:19:56 crc kubenswrapper[4970]: I1209 13:19:56.685569 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="83a7d87f-c893-46cf-a7f4-ff33c2917d16" containerName="registry-server" Dec 09 13:19:56 crc kubenswrapper[4970]: I1209 13:19:56.689746 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9z554" Dec 09 13:19:56 crc kubenswrapper[4970]: I1209 13:19:56.694894 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9z554"] Dec 09 13:19:56 crc kubenswrapper[4970]: I1209 13:19:56.864010 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adab57c1-ea9d-4f0f-b5d9-25144e9ea671-utilities\") pod \"redhat-marketplace-9z554\" (UID: \"adab57c1-ea9d-4f0f-b5d9-25144e9ea671\") " pod="openshift-marketplace/redhat-marketplace-9z554" Dec 09 13:19:56 crc kubenswrapper[4970]: I1209 13:19:56.864426 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr26c\" (UniqueName: \"kubernetes.io/projected/adab57c1-ea9d-4f0f-b5d9-25144e9ea671-kube-api-access-gr26c\") pod \"redhat-marketplace-9z554\" (UID: \"adab57c1-ea9d-4f0f-b5d9-25144e9ea671\") " pod="openshift-marketplace/redhat-marketplace-9z554" Dec 09 13:19:56 crc kubenswrapper[4970]: I1209 13:19:56.864459 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adab57c1-ea9d-4f0f-b5d9-25144e9ea671-catalog-content\") pod \"redhat-marketplace-9z554\" (UID: \"adab57c1-ea9d-4f0f-b5d9-25144e9ea671\") " pod="openshift-marketplace/redhat-marketplace-9z554" Dec 09 13:19:56 crc kubenswrapper[4970]: I1209 13:19:56.966344 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr26c\" (UniqueName: \"kubernetes.io/projected/adab57c1-ea9d-4f0f-b5d9-25144e9ea671-kube-api-access-gr26c\") pod \"redhat-marketplace-9z554\" (UID: \"adab57c1-ea9d-4f0f-b5d9-25144e9ea671\") " pod="openshift-marketplace/redhat-marketplace-9z554" Dec 09 13:19:56 crc kubenswrapper[4970]: I1209 13:19:56.966403 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adab57c1-ea9d-4f0f-b5d9-25144e9ea671-catalog-content\") pod \"redhat-marketplace-9z554\" (UID: \"adab57c1-ea9d-4f0f-b5d9-25144e9ea671\") " pod="openshift-marketplace/redhat-marketplace-9z554" Dec 09 13:19:56 crc kubenswrapper[4970]: I1209 13:19:56.966665 4970 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adab57c1-ea9d-4f0f-b5d9-25144e9ea671-utilities\") pod \"redhat-marketplace-9z554\" (UID: \"adab57c1-ea9d-4f0f-b5d9-25144e9ea671\") " pod="openshift-marketplace/redhat-marketplace-9z554" Dec 09 13:19:56 crc kubenswrapper[4970]: I1209 13:19:56.967119 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adab57c1-ea9d-4f0f-b5d9-25144e9ea671-utilities\") pod \"redhat-marketplace-9z554\" (UID: \"adab57c1-ea9d-4f0f-b5d9-25144e9ea671\") " pod="openshift-marketplace/redhat-marketplace-9z554" Dec 09 13:19:56 crc kubenswrapper[4970]: I1209 13:19:56.967120 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adab57c1-ea9d-4f0f-b5d9-25144e9ea671-catalog-content\") pod \"redhat-marketplace-9z554\" (UID: \"adab57c1-ea9d-4f0f-b5d9-25144e9ea671\") " pod="openshift-marketplace/redhat-marketplace-9z554" Dec 09 13:19:56 crc kubenswrapper[4970]: I1209 13:19:56.987030 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr26c\" (UniqueName: \"kubernetes.io/projected/adab57c1-ea9d-4f0f-b5d9-25144e9ea671-kube-api-access-gr26c\") pod \"redhat-marketplace-9z554\" (UID: \"adab57c1-ea9d-4f0f-b5d9-25144e9ea671\") " pod="openshift-marketplace/redhat-marketplace-9z554" Dec 09 13:19:57 crc kubenswrapper[4970]: I1209 13:19:57.021880 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9z554" Dec 09 13:19:57 crc kubenswrapper[4970]: I1209 13:19:57.541166 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9z554"] Dec 09 13:19:58 crc kubenswrapper[4970]: I1209 13:19:58.259616 4970 generic.go:334] "Generic (PLEG): container finished" podID="adab57c1-ea9d-4f0f-b5d9-25144e9ea671" containerID="1202525636b70060406d71ac954ac323b2635088c9733b982c396962bbe9f022" exitCode=0 Dec 09 13:19:58 crc kubenswrapper[4970]: I1209 13:19:58.259742 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9z554" event={"ID":"adab57c1-ea9d-4f0f-b5d9-25144e9ea671","Type":"ContainerDied","Data":"1202525636b70060406d71ac954ac323b2635088c9733b982c396962bbe9f022"} Dec 09 13:19:58 crc kubenswrapper[4970]: I1209 13:19:58.259925 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9z554" event={"ID":"adab57c1-ea9d-4f0f-b5d9-25144e9ea671","Type":"ContainerStarted","Data":"018b1b1f3a3d0e11f69673eeb8c1bab6926f3f7fdb5387407b2652cf4cfd51e0"} Dec 09 13:19:59 crc kubenswrapper[4970]: I1209 13:19:59.274500 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9z554" event={"ID":"adab57c1-ea9d-4f0f-b5d9-25144e9ea671","Type":"ContainerStarted","Data":"14efec2e18f0ec68c5f81f1c67d2d9b9a176fe668aaa0e924864edd80252860a"} Dec 09 13:20:00 crc kubenswrapper[4970]: I1209 13:20:00.291989 4970 generic.go:334] "Generic (PLEG): container finished" podID="adab57c1-ea9d-4f0f-b5d9-25144e9ea671" containerID="14efec2e18f0ec68c5f81f1c67d2d9b9a176fe668aaa0e924864edd80252860a" exitCode=0 Dec 09 13:20:00 crc kubenswrapper[4970]: I1209 13:20:00.292055 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9z554" 
event={"ID":"adab57c1-ea9d-4f0f-b5d9-25144e9ea671","Type":"ContainerDied","Data":"14efec2e18f0ec68c5f81f1c67d2d9b9a176fe668aaa0e924864edd80252860a"} Dec 09 13:20:01 crc kubenswrapper[4970]: I1209 13:20:01.310622 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9z554" event={"ID":"adab57c1-ea9d-4f0f-b5d9-25144e9ea671","Type":"ContainerStarted","Data":"67d7f41372190f79b2b66667a4ac5697bfad48c169004cb8070e9958062a5d37"} Dec 09 13:20:01 crc kubenswrapper[4970]: I1209 13:20:01.343656 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9z554" podStartSLOduration=2.706587819 podStartE2EDuration="5.343637667s" podCreationTimestamp="2025-12-09 13:19:56 +0000 UTC" firstStartedPulling="2025-12-09 13:19:58.262447127 +0000 UTC m=+4410.822928218" lastFinishedPulling="2025-12-09 13:20:00.899497015 +0000 UTC m=+4413.459978066" observedRunningTime="2025-12-09 13:20:01.336111865 +0000 UTC m=+4413.896592936" watchObservedRunningTime="2025-12-09 13:20:01.343637667 +0000 UTC m=+4413.904118718" Dec 09 13:20:05 crc kubenswrapper[4970]: I1209 13:20:05.814120 4970 scope.go:117] "RemoveContainer" containerID="336872a9cff893b34e07ba7a020e9f30d9d1d6d2a855427e26f26eb12b833ad2" Dec 09 13:20:05 crc kubenswrapper[4970]: E1209 13:20:05.816137 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:20:05 crc kubenswrapper[4970]: E1209 13:20:05.817994 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:20:07 crc kubenswrapper[4970]: I1209 13:20:07.023286 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9z554" Dec 09 13:20:07 crc kubenswrapper[4970]: I1209 13:20:07.026856 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9z554" Dec 09 13:20:07 crc kubenswrapper[4970]: I1209 13:20:07.095644 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9z554" Dec 09 13:20:07 crc kubenswrapper[4970]: I1209 13:20:07.460616 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9z554" Dec 09 13:20:07 crc kubenswrapper[4970]: I1209 13:20:07.515702 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9z554"] Dec 09 13:20:09 crc kubenswrapper[4970]: I1209 13:20:09.417672 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9z554" podUID="adab57c1-ea9d-4f0f-b5d9-25144e9ea671" containerName="registry-server" containerID="cri-o://67d7f41372190f79b2b66667a4ac5697bfad48c169004cb8070e9958062a5d37" gracePeriod=2 Dec 09 13:20:09 crc kubenswrapper[4970]: E1209 13:20:09.816555 4970 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:20:10 crc kubenswrapper[4970]: I1209 13:20:10.431013 4970 generic.go:334] "Generic (PLEG): container finished" podID="adab57c1-ea9d-4f0f-b5d9-25144e9ea671" containerID="67d7f41372190f79b2b66667a4ac5697bfad48c169004cb8070e9958062a5d37" exitCode=0 Dec 09 13:20:10 crc kubenswrapper[4970]: I1209 13:20:10.431102 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9z554" event={"ID":"adab57c1-ea9d-4f0f-b5d9-25144e9ea671","Type":"ContainerDied","Data":"67d7f41372190f79b2b66667a4ac5697bfad48c169004cb8070e9958062a5d37"} Dec 09 13:20:10 crc kubenswrapper[4970]: I1209 13:20:10.431413 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9z554" event={"ID":"adab57c1-ea9d-4f0f-b5d9-25144e9ea671","Type":"ContainerDied","Data":"018b1b1f3a3d0e11f69673eeb8c1bab6926f3f7fdb5387407b2652cf4cfd51e0"} Dec 09 13:20:10 crc kubenswrapper[4970]: I1209 13:20:10.431431 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="018b1b1f3a3d0e11f69673eeb8c1bab6926f3f7fdb5387407b2652cf4cfd51e0" Dec 09 13:20:11 crc kubenswrapper[4970]: I1209 13:20:11.593169 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9z554" Dec 09 13:20:11 crc kubenswrapper[4970]: I1209 13:20:11.665948 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gr26c\" (UniqueName: \"kubernetes.io/projected/adab57c1-ea9d-4f0f-b5d9-25144e9ea671-kube-api-access-gr26c\") pod \"adab57c1-ea9d-4f0f-b5d9-25144e9ea671\" (UID: \"adab57c1-ea9d-4f0f-b5d9-25144e9ea671\") " Dec 09 13:20:11 crc kubenswrapper[4970]: I1209 13:20:11.666038 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adab57c1-ea9d-4f0f-b5d9-25144e9ea671-catalog-content\") pod \"adab57c1-ea9d-4f0f-b5d9-25144e9ea671\" (UID: \"adab57c1-ea9d-4f0f-b5d9-25144e9ea671\") " Dec 09 13:20:11 crc kubenswrapper[4970]: I1209 13:20:11.666101 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adab57c1-ea9d-4f0f-b5d9-25144e9ea671-utilities\") pod \"adab57c1-ea9d-4f0f-b5d9-25144e9ea671\" (UID: \"adab57c1-ea9d-4f0f-b5d9-25144e9ea671\") " Dec 09 13:20:11 crc kubenswrapper[4970]: I1209 13:20:11.666987 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adab57c1-ea9d-4f0f-b5d9-25144e9ea671-utilities" (OuterVolumeSpecName: "utilities") pod "adab57c1-ea9d-4f0f-b5d9-25144e9ea671" (UID: "adab57c1-ea9d-4f0f-b5d9-25144e9ea671"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:20:11 crc kubenswrapper[4970]: I1209 13:20:11.673057 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adab57c1-ea9d-4f0f-b5d9-25144e9ea671-kube-api-access-gr26c" (OuterVolumeSpecName: "kube-api-access-gr26c") pod "adab57c1-ea9d-4f0f-b5d9-25144e9ea671" (UID: "adab57c1-ea9d-4f0f-b5d9-25144e9ea671"). InnerVolumeSpecName "kube-api-access-gr26c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:20:11 crc kubenswrapper[4970]: I1209 13:20:11.686499 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adab57c1-ea9d-4f0f-b5d9-25144e9ea671-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "adab57c1-ea9d-4f0f-b5d9-25144e9ea671" (UID: "adab57c1-ea9d-4f0f-b5d9-25144e9ea671"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:20:11 crc kubenswrapper[4970]: I1209 13:20:11.768465 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/adab57c1-ea9d-4f0f-b5d9-25144e9ea671-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 13:20:11 crc kubenswrapper[4970]: I1209 13:20:11.768771 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gr26c\" (UniqueName: \"kubernetes.io/projected/adab57c1-ea9d-4f0f-b5d9-25144e9ea671-kube-api-access-gr26c\") on node \"crc\" DevicePath \"\"" Dec 09 13:20:11 crc kubenswrapper[4970]: I1209 13:20:11.768880 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/adab57c1-ea9d-4f0f-b5d9-25144e9ea671-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 13:20:11 crc kubenswrapper[4970]: E1209 13:20:11.990791 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadab57c1_ea9d_4f0f_b5d9_25144e9ea671.slice/crio-018b1b1f3a3d0e11f69673eeb8c1bab6926f3f7fdb5387407b2652cf4cfd51e0\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadab57c1_ea9d_4f0f_b5d9_25144e9ea671.slice\": RecentStats: unable to find data in memory cache]" Dec 09 13:20:12 crc kubenswrapper[4970]: I1209 13:20:12.451436 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9z554" Dec 09 13:20:12 crc kubenswrapper[4970]: I1209 13:20:12.483061 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9z554"] Dec 09 13:20:12 crc kubenswrapper[4970]: I1209 13:20:12.495324 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9z554"] Dec 09 13:20:13 crc kubenswrapper[4970]: I1209 13:20:13.834471 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adab57c1-ea9d-4f0f-b5d9-25144e9ea671" path="/var/lib/kubelet/pods/adab57c1-ea9d-4f0f-b5d9-25144e9ea671/volumes" Dec 09 13:20:16 crc kubenswrapper[4970]: I1209 13:20:16.814313 4970 scope.go:117] "RemoveContainer" containerID="336872a9cff893b34e07ba7a020e9f30d9d1d6d2a855427e26f26eb12b833ad2" Dec 09 13:20:16 crc kubenswrapper[4970]: E1209 13:20:16.815611 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:20:19 crc kubenswrapper[4970]: E1209 13:20:19.818512 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:20:22 crc kubenswrapper[4970]: E1209 13:20:22.815663 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:20:27 crc kubenswrapper[4970]: I1209 13:20:27.819818 4970 scope.go:117] "RemoveContainer" containerID="336872a9cff893b34e07ba7a020e9f30d9d1d6d2a855427e26f26eb12b833ad2" Dec 09 13:20:27 crc kubenswrapper[4970]: E1209 13:20:27.820843 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:20:30 crc kubenswrapper[4970]: E1209 13:20:30.817952 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:20:35 crc kubenswrapper[4970]: E1209 13:20:35.815854 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" 
pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:20:40 crc kubenswrapper[4970]: I1209 13:20:40.414068 4970 scope.go:117] "RemoveContainer" containerID="336872a9cff893b34e07ba7a020e9f30d9d1d6d2a855427e26f26eb12b833ad2" Dec 09 13:20:40 crc kubenswrapper[4970]: E1209 13:20:40.418801 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:20:42 crc kubenswrapper[4970]: E1209 13:20:42.815105 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:20:47 crc kubenswrapper[4970]: E1209 13:20:47.822455 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:20:51 crc kubenswrapper[4970]: I1209 13:20:51.819986 4970 scope.go:117] "RemoveContainer" containerID="336872a9cff893b34e07ba7a020e9f30d9d1d6d2a855427e26f26eb12b833ad2" Dec 09 13:20:51 crc kubenswrapper[4970]: E1209 13:20:51.821186 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:20:54 crc kubenswrapper[4970]: E1209 13:20:54.815768 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:21:02 crc kubenswrapper[4970]: E1209 13:21:02.814774 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:21:03 crc kubenswrapper[4970]: I1209 13:21:03.813196 4970 scope.go:117] "RemoveContainer" containerID="336872a9cff893b34e07ba7a020e9f30d9d1d6d2a855427e26f26eb12b833ad2" Dec 09 13:21:03 crc kubenswrapper[4970]: E1209 13:21:03.813755 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:21:08 crc kubenswrapper[4970]: E1209 13:21:08.817603 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:21:14 crc kubenswrapper[4970]: I1209 13:21:14.819884 4970 scope.go:117] "RemoveContainer" containerID="336872a9cff893b34e07ba7a020e9f30d9d1d6d2a855427e26f26eb12b833ad2" Dec 09 13:21:14 crc kubenswrapper[4970]: E1209 13:21:14.821056 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:21:17 crc kubenswrapper[4970]: E1209 13:21:17.824110 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:21:23 crc kubenswrapper[4970]: E1209 13:21:23.815904 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:21:28 crc kubenswrapper[4970]: I1209 13:21:28.812654 4970 scope.go:117] "RemoveContainer" containerID="336872a9cff893b34e07ba7a020e9f30d9d1d6d2a855427e26f26eb12b833ad2" Dec 09 13:21:28 crc kubenswrapper[4970]: E1209 13:21:28.813568 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:21:31 crc kubenswrapper[4970]: E1209 13:21:31.820009 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:21:34 crc kubenswrapper[4970]: E1209 13:21:34.816981 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" 
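The repeating pod_workers.go:1301 lines above are the kubelet's sync-retry backoff at work: each sync attempt for machine-config-daemon (CrashLoopBackOff, capped at the logged 5m0s) and for the two OpenStack pods (ImagePullBackOff) is skipped and rescheduled, so the same three errors recur every ten seconds to a few minutes. A sketch of surfacing the equivalent waiting reasons from the API rather than from node logs (same kubernetes-client and kubeconfig assumptions as above, plus cluster-wide read access):

    # Sketch: list containers currently held in a back-off state, the
    # API-side counterpart of the repeating pod_workers.go:1301 errors.
    from collections import Counter
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    reasons = Counter()
    for pod in v1.list_pod_for_all_namespaces().items:
        for cs in pod.status.container_statuses or []:
            w = cs.state.waiting
            if w and w.reason in ("CrashLoopBackOff", "ImagePullBackOff", "ErrImagePull"):
                reasons[w.reason] += 1
                print(f"{pod.metadata.namespace}/{pod.metadata.name} {cs.name}: {w.reason}")
    print(dict(reasons))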
podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:21:42 crc kubenswrapper[4970]: I1209 13:21:42.613417 4970 generic.go:334] "Generic (PLEG): container finished" podID="e86845e6-6bd5-4fb3-9a63-7c6f4d730644" containerID="3ff300e347c5cdd4657ead74ca2982c626bdbbd1420b49a53af3c0fad1d9d760" exitCode=2 Dec 09 13:21:42 crc kubenswrapper[4970]: I1209 13:21:42.614015 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84h7q" event={"ID":"e86845e6-6bd5-4fb3-9a63-7c6f4d730644","Type":"ContainerDied","Data":"3ff300e347c5cdd4657ead74ca2982c626bdbbd1420b49a53af3c0fad1d9d760"} Dec 09 13:21:43 crc kubenswrapper[4970]: I1209 13:21:43.818433 4970 scope.go:117] "RemoveContainer" containerID="336872a9cff893b34e07ba7a020e9f30d9d1d6d2a855427e26f26eb12b833ad2" Dec 09 13:21:43 crc kubenswrapper[4970]: E1209 13:21:43.819018 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:21:43 crc kubenswrapper[4970]: E1209 13:21:43.819605 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:21:44 crc kubenswrapper[4970]: I1209 13:21:44.219698 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84h7q" Dec 09 13:21:44 crc kubenswrapper[4970]: I1209 13:21:44.342720 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e86845e6-6bd5-4fb3-9a63-7c6f4d730644-ssh-key\") pod \"e86845e6-6bd5-4fb3-9a63-7c6f4d730644\" (UID: \"e86845e6-6bd5-4fb3-9a63-7c6f4d730644\") " Dec 09 13:21:44 crc kubenswrapper[4970]: I1209 13:21:44.342914 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e86845e6-6bd5-4fb3-9a63-7c6f4d730644-inventory\") pod \"e86845e6-6bd5-4fb3-9a63-7c6f4d730644\" (UID: \"e86845e6-6bd5-4fb3-9a63-7c6f4d730644\") " Dec 09 13:21:44 crc kubenswrapper[4970]: I1209 13:21:44.343019 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prxzh\" (UniqueName: \"kubernetes.io/projected/e86845e6-6bd5-4fb3-9a63-7c6f4d730644-kube-api-access-prxzh\") pod \"e86845e6-6bd5-4fb3-9a63-7c6f4d730644\" (UID: \"e86845e6-6bd5-4fb3-9a63-7c6f4d730644\") " Dec 09 13:21:44 crc kubenswrapper[4970]: I1209 13:21:44.352571 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e86845e6-6bd5-4fb3-9a63-7c6f4d730644-kube-api-access-prxzh" (OuterVolumeSpecName: "kube-api-access-prxzh") pod "e86845e6-6bd5-4fb3-9a63-7c6f4d730644" (UID: "e86845e6-6bd5-4fb3-9a63-7c6f4d730644"). InnerVolumeSpecName "kube-api-access-prxzh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:21:44 crc kubenswrapper[4970]: I1209 13:21:44.405054 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e86845e6-6bd5-4fb3-9a63-7c6f4d730644-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e86845e6-6bd5-4fb3-9a63-7c6f4d730644" (UID: "e86845e6-6bd5-4fb3-9a63-7c6f4d730644"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 13:21:44 crc kubenswrapper[4970]: I1209 13:21:44.414927 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e86845e6-6bd5-4fb3-9a63-7c6f4d730644-inventory" (OuterVolumeSpecName: "inventory") pod "e86845e6-6bd5-4fb3-9a63-7c6f4d730644" (UID: "e86845e6-6bd5-4fb3-9a63-7c6f4d730644"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 13:21:44 crc kubenswrapper[4970]: I1209 13:21:44.446096 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prxzh\" (UniqueName: \"kubernetes.io/projected/e86845e6-6bd5-4fb3-9a63-7c6f4d730644-kube-api-access-prxzh\") on node \"crc\" DevicePath \"\"" Dec 09 13:21:44 crc kubenswrapper[4970]: I1209 13:21:44.446132 4970 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e86845e6-6bd5-4fb3-9a63-7c6f4d730644-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 13:21:44 crc kubenswrapper[4970]: I1209 13:21:44.446143 4970 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e86845e6-6bd5-4fb3-9a63-7c6f4d730644-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 13:21:44 crc kubenswrapper[4970]: I1209 13:21:44.656008 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84h7q" event={"ID":"e86845e6-6bd5-4fb3-9a63-7c6f4d730644","Type":"ContainerDied","Data":"bd14d9904650a85e5cebc0938dfe778ef65d278c35796f135ccaac15439b3c0a"} Dec 09 13:21:44 crc kubenswrapper[4970]: I1209 13:21:44.656075 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd14d9904650a85e5cebc0938dfe778ef65d278c35796f135ccaac15439b3c0a" Dec 09 13:21:44 crc kubenswrapper[4970]: I1209 13:21:44.656196 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-84h7q" Dec 09 13:21:45 crc kubenswrapper[4970]: E1209 13:21:45.819584 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:21:54 crc kubenswrapper[4970]: E1209 13:21:54.818988 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:21:56 crc kubenswrapper[4970]: I1209 13:21:56.813185 4970 scope.go:117] "RemoveContainer" containerID="336872a9cff893b34e07ba7a020e9f30d9d1d6d2a855427e26f26eb12b833ad2" Dec 09 13:21:56 crc kubenswrapper[4970]: E1209 13:21:56.813996 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:22:00 crc kubenswrapper[4970]: E1209 13:22:00.816047 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:22:06 crc kubenswrapper[4970]: E1209 13:22:06.815924 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:22:08 crc kubenswrapper[4970]: I1209 13:22:08.819545 4970 scope.go:117] "RemoveContainer" containerID="336872a9cff893b34e07ba7a020e9f30d9d1d6d2a855427e26f26eb12b833ad2" Dec 09 13:22:08 crc kubenswrapper[4970]: E1209 13:22:08.820613 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:22:11 crc kubenswrapper[4970]: E1209 13:22:11.815755 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:22:20 crc kubenswrapper[4970]: E1209 13:22:20.815988 4970 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:22:22 crc kubenswrapper[4970]: I1209 13:22:22.812161 4970 scope.go:117] "RemoveContainer" containerID="336872a9cff893b34e07ba7a020e9f30d9d1d6d2a855427e26f26eb12b833ad2" Dec 09 13:22:22 crc kubenswrapper[4970]: E1209 13:22:22.812706 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:22:25 crc kubenswrapper[4970]: E1209 13:22:25.815810 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:22:33 crc kubenswrapper[4970]: E1209 13:22:33.815883 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:22:34 crc kubenswrapper[4970]: I1209 13:22:34.813492 4970 scope.go:117] "RemoveContainer" containerID="336872a9cff893b34e07ba7a020e9f30d9d1d6d2a855427e26f26eb12b833ad2" Dec 09 13:22:34 crc kubenswrapper[4970]: E1209 13:22:34.814210 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:22:36 crc kubenswrapper[4970]: E1209 13:22:36.814744 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:22:45 crc kubenswrapper[4970]: E1209 13:22:45.816673 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:22:47 crc kubenswrapper[4970]: I1209 13:22:47.830276 4970 scope.go:117] "RemoveContainer" containerID="336872a9cff893b34e07ba7a020e9f30d9d1d6d2a855427e26f26eb12b833ad2" Dec 09 13:22:47 crc kubenswrapper[4970]: E1209 13:22:47.831196 4970 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:22:49 crc kubenswrapper[4970]: E1209 13:22:49.816550 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:22:58 crc kubenswrapper[4970]: I1209 13:22:58.815760 4970 scope.go:117] "RemoveContainer" containerID="336872a9cff893b34e07ba7a020e9f30d9d1d6d2a855427e26f26eb12b833ad2" Dec 09 13:22:58 crc kubenswrapper[4970]: E1209 13:22:58.817011 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:23:00 crc kubenswrapper[4970]: E1209 13:23:00.817207 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:23:04 crc kubenswrapper[4970]: E1209 13:23:04.819242 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:23:10 crc kubenswrapper[4970]: I1209 13:23:10.812387 4970 scope.go:117] "RemoveContainer" containerID="336872a9cff893b34e07ba7a020e9f30d9d1d6d2a855427e26f26eb12b833ad2" Dec 09 13:23:10 crc kubenswrapper[4970]: E1209 13:23:10.813229 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:23:13 crc kubenswrapper[4970]: E1209 13:23:13.817752 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:23:18 crc kubenswrapper[4970]: E1209 13:23:18.820109 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:23:22 crc kubenswrapper[4970]: I1209 13:23:22.813455 4970 scope.go:117] "RemoveContainer" containerID="336872a9cff893b34e07ba7a020e9f30d9d1d6d2a855427e26f26eb12b833ad2" Dec 09 13:23:22 crc kubenswrapper[4970]: E1209 13:23:22.816665 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:23:25 crc kubenswrapper[4970]: E1209 13:23:25.817539 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:23:32 crc kubenswrapper[4970]: E1209 13:23:32.815532 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:23:33 crc kubenswrapper[4970]: I1209 13:23:33.813773 4970 scope.go:117] "RemoveContainer" containerID="336872a9cff893b34e07ba7a020e9f30d9d1d6d2a855427e26f26eb12b833ad2" Dec 09 13:23:33 crc kubenswrapper[4970]: E1209 13:23:33.814708 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:23:40 crc kubenswrapper[4970]: E1209 13:23:40.815687 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:23:47 crc kubenswrapper[4970]: E1209 13:23:47.828849 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:23:48 crc kubenswrapper[4970]: I1209 13:23:48.814662 4970 scope.go:117] "RemoveContainer" containerID="336872a9cff893b34e07ba7a020e9f30d9d1d6d2a855427e26f26eb12b833ad2" Dec 09 13:23:49 crc kubenswrapper[4970]: I1209 13:23:49.349355 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" 
event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerStarted","Data":"1881639a78781bcbfa52ff00bf5db8c4a9ec0f40874f26ed2176e2197d3b24e2"} Dec 09 13:23:51 crc kubenswrapper[4970]: E1209 13:23:51.814970 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:24:00 crc kubenswrapper[4970]: I1209 13:24:00.816833 4970 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 13:24:00 crc kubenswrapper[4970]: E1209 13:24:00.927144 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 13:24:00 crc kubenswrapper[4970]: E1209 13:24:00.927207 4970 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 13:24:00 crc kubenswrapper[4970]: E1209 13:24:00.927410 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w5gl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-snjvx_openstack(cda5b8c0-51ad-4a63-bf1d-e3546f098ad3): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 13:24:00 crc kubenswrapper[4970]: E1209 13:24:00.928689 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:24:05 crc kubenswrapper[4970]: E1209 13:24:05.816121 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:24:13 crc kubenswrapper[4970]: E1209 13:24:13.817134 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:24:20 crc kubenswrapper[4970]: E1209 13:24:20.818014 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:24:27 crc kubenswrapper[4970]: E1209 13:24:27.837034 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:24:31 crc kubenswrapper[4970]: E1209 13:24:31.941751 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 13:24:31 crc kubenswrapper[4970]: E1209 13:24:31.942459 4970 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 13:24:31 crc kubenswrapper[4970]: E1209 13:24:31.942684 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n679h5b7h5f8h695h656h57bh5c7h7bh5c5hc4h5hbch654h5h5b7h54dh5fh688h548h57bh6fh54fh5d4hf7h5ffh54ch566h565hdchb4h65dh5fbq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqqs9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ea52f6b9-599e-4ac5-94c6-79949c705be8): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Dec 09 13:24:31 crc kubenswrapper[4970]: E1209 13:24:31.943894 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:24:39 crc kubenswrapper[4970]: E1209 13:24:39.820552 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:24:41 crc kubenswrapper[4970]: I1209 13:24:41.775168 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f45gl"] Dec 09 13:24:41 crc kubenswrapper[4970]: E1209 13:24:41.775903 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adab57c1-ea9d-4f0f-b5d9-25144e9ea671" containerName="registry-server" Dec 09 13:24:41 crc kubenswrapper[4970]: I1209 13:24:41.775929 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="adab57c1-ea9d-4f0f-b5d9-25144e9ea671" containerName="registry-server" Dec 09 13:24:41 crc kubenswrapper[4970]: E1209 13:24:41.775954 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adab57c1-ea9d-4f0f-b5d9-25144e9ea671" containerName="extract-content" Dec 09 13:24:41 crc kubenswrapper[4970]: I1209 13:24:41.775963 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="adab57c1-ea9d-4f0f-b5d9-25144e9ea671" containerName="extract-content" Dec 09 13:24:41 crc kubenswrapper[4970]: E1209 13:24:41.775986 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adab57c1-ea9d-4f0f-b5d9-25144e9ea671" containerName="extract-utilities" Dec 09 13:24:41 crc kubenswrapper[4970]: I1209 13:24:41.775995 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="adab57c1-ea9d-4f0f-b5d9-25144e9ea671" containerName="extract-utilities" Dec 09 13:24:41 crc kubenswrapper[4970]: E1209 13:24:41.776015 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e86845e6-6bd5-4fb3-9a63-7c6f4d730644" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 13:24:41 crc kubenswrapper[4970]: I1209 13:24:41.776024 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="e86845e6-6bd5-4fb3-9a63-7c6f4d730644" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 13:24:41 crc kubenswrapper[4970]: I1209 13:24:41.776349 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="e86845e6-6bd5-4fb3-9a63-7c6f4d730644" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 13:24:41 crc kubenswrapper[4970]: I1209 13:24:41.776390 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="adab57c1-ea9d-4f0f-b5d9-25144e9ea671" containerName="registry-server" Dec 09 13:24:41 crc kubenswrapper[4970]: I1209 13:24:41.778670 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f45gl" Dec 09 13:24:41 crc kubenswrapper[4970]: I1209 13:24:41.798398 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f45gl"] Dec 09 13:24:41 crc kubenswrapper[4970]: I1209 13:24:41.812998 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwhcz\" (UniqueName: \"kubernetes.io/projected/b97b6381-d7f1-4fcc-8930-34598b257999-kube-api-access-rwhcz\") pod \"redhat-operators-f45gl\" (UID: \"b97b6381-d7f1-4fcc-8930-34598b257999\") " pod="openshift-marketplace/redhat-operators-f45gl" Dec 09 13:24:41 crc kubenswrapper[4970]: I1209 13:24:41.813095 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b97b6381-d7f1-4fcc-8930-34598b257999-catalog-content\") pod \"redhat-operators-f45gl\" (UID: \"b97b6381-d7f1-4fcc-8930-34598b257999\") " pod="openshift-marketplace/redhat-operators-f45gl" Dec 09 13:24:41 crc kubenswrapper[4970]: I1209 13:24:41.813112 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b97b6381-d7f1-4fcc-8930-34598b257999-utilities\") pod \"redhat-operators-f45gl\" (UID: \"b97b6381-d7f1-4fcc-8930-34598b257999\") " pod="openshift-marketplace/redhat-operators-f45gl" Dec 09 13:24:41 crc kubenswrapper[4970]: I1209 13:24:41.914747 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwhcz\" (UniqueName: \"kubernetes.io/projected/b97b6381-d7f1-4fcc-8930-34598b257999-kube-api-access-rwhcz\") pod \"redhat-operators-f45gl\" (UID: \"b97b6381-d7f1-4fcc-8930-34598b257999\") " pod="openshift-marketplace/redhat-operators-f45gl" Dec 09 13:24:41 crc kubenswrapper[4970]: I1209 13:24:41.914860 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b97b6381-d7f1-4fcc-8930-34598b257999-catalog-content\") pod \"redhat-operators-f45gl\" (UID: \"b97b6381-d7f1-4fcc-8930-34598b257999\") " pod="openshift-marketplace/redhat-operators-f45gl" Dec 09 13:24:41 crc kubenswrapper[4970]: I1209 13:24:41.914879 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b97b6381-d7f1-4fcc-8930-34598b257999-utilities\") pod \"redhat-operators-f45gl\" (UID: \"b97b6381-d7f1-4fcc-8930-34598b257999\") " pod="openshift-marketplace/redhat-operators-f45gl" Dec 09 13:24:41 crc kubenswrapper[4970]: I1209 13:24:41.915456 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b97b6381-d7f1-4fcc-8930-34598b257999-catalog-content\") pod \"redhat-operators-f45gl\" (UID: \"b97b6381-d7f1-4fcc-8930-34598b257999\") " pod="openshift-marketplace/redhat-operators-f45gl" Dec 09 13:24:41 crc kubenswrapper[4970]: I1209 13:24:41.915498 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b97b6381-d7f1-4fcc-8930-34598b257999-utilities\") pod \"redhat-operators-f45gl\" (UID: \"b97b6381-d7f1-4fcc-8930-34598b257999\") " pod="openshift-marketplace/redhat-operators-f45gl" Dec 09 13:24:41 crc kubenswrapper[4970]: I1209 13:24:41.934207 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-rwhcz\" (UniqueName: \"kubernetes.io/projected/b97b6381-d7f1-4fcc-8930-34598b257999-kube-api-access-rwhcz\") pod \"redhat-operators-f45gl\" (UID: \"b97b6381-d7f1-4fcc-8930-34598b257999\") " pod="openshift-marketplace/redhat-operators-f45gl" Dec 09 13:24:42 crc kubenswrapper[4970]: I1209 13:24:42.116386 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f45gl" Dec 09 13:24:42 crc kubenswrapper[4970]: I1209 13:24:42.651415 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f45gl"] Dec 09 13:24:43 crc kubenswrapper[4970]: I1209 13:24:43.003029 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f45gl" event={"ID":"b97b6381-d7f1-4fcc-8930-34598b257999","Type":"ContainerStarted","Data":"50acf1fc156d0a6c6759da66b6fb5c576b4d2046461597bedeaa4bc39e33895d"} Dec 09 13:24:43 crc kubenswrapper[4970]: E1209 13:24:43.814890 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:24:44 crc kubenswrapper[4970]: I1209 13:24:44.018794 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f45gl" event={"ID":"b97b6381-d7f1-4fcc-8930-34598b257999","Type":"ContainerStarted","Data":"93734d236195f93b8b3734d87e0dd1c19b578b52cf344ff0a52afc25bfa200b9"} Dec 09 13:24:45 crc kubenswrapper[4970]: I1209 13:24:45.063655 4970 generic.go:334] "Generic (PLEG): container finished" podID="b97b6381-d7f1-4fcc-8930-34598b257999" containerID="93734d236195f93b8b3734d87e0dd1c19b578b52cf344ff0a52afc25bfa200b9" exitCode=0 Dec 09 13:24:45 crc kubenswrapper[4970]: I1209 13:24:45.063818 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f45gl" event={"ID":"b97b6381-d7f1-4fcc-8930-34598b257999","Type":"ContainerDied","Data":"93734d236195f93b8b3734d87e0dd1c19b578b52cf344ff0a52afc25bfa200b9"} Dec 09 13:24:52 crc kubenswrapper[4970]: E1209 13:24:52.816082 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:24:54 crc kubenswrapper[4970]: E1209 13:24:54.815334 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:24:56 crc kubenswrapper[4970]: I1209 13:24:56.194097 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f45gl" event={"ID":"b97b6381-d7f1-4fcc-8930-34598b257999","Type":"ContainerStarted","Data":"29ac77dab941e193c4074cb36fdfcf6d5781a84a5ce12ce908afbda68ae2ae50"} Dec 09 13:24:57 crc kubenswrapper[4970]: E1209 13:24:57.028007 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb97b6381_d7f1_4fcc_8930_34598b257999.slice/crio-29ac77dab941e193c4074cb36fdfcf6d5781a84a5ce12ce908afbda68ae2ae50.scope\": RecentStats: unable to find data in memory cache]" Dec 09 13:24:58 crc kubenswrapper[4970]: I1209 13:24:58.216322 4970 generic.go:334] "Generic (PLEG): container finished" podID="b97b6381-d7f1-4fcc-8930-34598b257999" containerID="29ac77dab941e193c4074cb36fdfcf6d5781a84a5ce12ce908afbda68ae2ae50" exitCode=0 Dec 09 13:24:58 crc kubenswrapper[4970]: I1209 13:24:58.216370 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f45gl" event={"ID":"b97b6381-d7f1-4fcc-8930-34598b257999","Type":"ContainerDied","Data":"29ac77dab941e193c4074cb36fdfcf6d5781a84a5ce12ce908afbda68ae2ae50"} Dec 09 13:25:02 crc kubenswrapper[4970]: I1209 13:25:02.275686 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f45gl" event={"ID":"b97b6381-d7f1-4fcc-8930-34598b257999","Type":"ContainerStarted","Data":"b6348015848bb96942e798979999c9870c8e6ffe72b02e2c47e73d2ebcbb6141"} Dec 09 13:25:03 crc kubenswrapper[4970]: E1209 13:25:03.814800 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:25:05 crc kubenswrapper[4970]: E1209 13:25:05.814894 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:25:12 crc kubenswrapper[4970]: I1209 13:25:12.117404 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f45gl" Dec 09 13:25:12 crc kubenswrapper[4970]: I1209 13:25:12.117831 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f45gl" Dec 09 13:25:12 crc kubenswrapper[4970]: I1209 13:25:12.176860 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f45gl" Dec 09 13:25:12 crc kubenswrapper[4970]: I1209 13:25:12.216327 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f45gl" podStartSLOduration=14.528909467 podStartE2EDuration="31.216309901s" podCreationTimestamp="2025-12-09 13:24:41 +0000 UTC" firstStartedPulling="2025-12-09 13:24:45.067754572 +0000 UTC m=+4697.628235633" lastFinishedPulling="2025-12-09 13:25:01.755154996 +0000 UTC m=+4714.315636067" observedRunningTime="2025-12-09 13:25:02.297378027 +0000 UTC m=+4714.857859078" watchObservedRunningTime="2025-12-09 13:25:12.216309901 +0000 UTC m=+4724.776790952" Dec 09 13:25:12 crc kubenswrapper[4970]: I1209 13:25:12.433746 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f45gl" Dec 09 13:25:12 crc kubenswrapper[4970]: I1209 13:25:12.828093 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f45gl"] Dec 09 13:25:12 crc kubenswrapper[4970]: I1209 13:25:12.973353 
4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2h8bq"] Dec 09 13:25:12 crc kubenswrapper[4970]: I1209 13:25:12.973820 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2h8bq" podUID="30d540ae-51c2-421b-88e8-09f8ab24af89" containerName="registry-server" containerID="cri-o://cd241cfee30772da59444403b1e40107f793c198fa2e8c0432371a5017157f5e" gracePeriod=2 Dec 09 13:25:13 crc kubenswrapper[4970]: I1209 13:25:13.389561 4970 generic.go:334] "Generic (PLEG): container finished" podID="30d540ae-51c2-421b-88e8-09f8ab24af89" containerID="cd241cfee30772da59444403b1e40107f793c198fa2e8c0432371a5017157f5e" exitCode=0 Dec 09 13:25:13 crc kubenswrapper[4970]: I1209 13:25:13.389598 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2h8bq" event={"ID":"30d540ae-51c2-421b-88e8-09f8ab24af89","Type":"ContainerDied","Data":"cd241cfee30772da59444403b1e40107f793c198fa2e8c0432371a5017157f5e"} Dec 09 13:25:13 crc kubenswrapper[4970]: E1209 13:25:13.633379 4970 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cd241cfee30772da59444403b1e40107f793c198fa2e8c0432371a5017157f5e is running failed: container process not found" containerID="cd241cfee30772da59444403b1e40107f793c198fa2e8c0432371a5017157f5e" cmd=["grpc_health_probe","-addr=:50051"] Dec 09 13:25:13 crc kubenswrapper[4970]: E1209 13:25:13.634100 4970 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cd241cfee30772da59444403b1e40107f793c198fa2e8c0432371a5017157f5e is running failed: container process not found" containerID="cd241cfee30772da59444403b1e40107f793c198fa2e8c0432371a5017157f5e" cmd=["grpc_health_probe","-addr=:50051"] Dec 09 13:25:13 crc kubenswrapper[4970]: E1209 13:25:13.634631 4970 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cd241cfee30772da59444403b1e40107f793c198fa2e8c0432371a5017157f5e is running failed: container process not found" containerID="cd241cfee30772da59444403b1e40107f793c198fa2e8c0432371a5017157f5e" cmd=["grpc_health_probe","-addr=:50051"] Dec 09 13:25:13 crc kubenswrapper[4970]: E1209 13:25:13.634735 4970 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of cd241cfee30772da59444403b1e40107f793c198fa2e8c0432371a5017157f5e is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-2h8bq" podUID="30d540ae-51c2-421b-88e8-09f8ab24af89" containerName="registry-server" Dec 09 13:25:14 crc kubenswrapper[4970]: I1209 13:25:14.237352 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2h8bq" Dec 09 13:25:14 crc kubenswrapper[4970]: I1209 13:25:14.343764 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30d540ae-51c2-421b-88e8-09f8ab24af89-catalog-content\") pod \"30d540ae-51c2-421b-88e8-09f8ab24af89\" (UID: \"30d540ae-51c2-421b-88e8-09f8ab24af89\") " Dec 09 13:25:14 crc kubenswrapper[4970]: I1209 13:25:14.343976 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmvmk\" (UniqueName: \"kubernetes.io/projected/30d540ae-51c2-421b-88e8-09f8ab24af89-kube-api-access-xmvmk\") pod \"30d540ae-51c2-421b-88e8-09f8ab24af89\" (UID: \"30d540ae-51c2-421b-88e8-09f8ab24af89\") " Dec 09 13:25:14 crc kubenswrapper[4970]: I1209 13:25:14.344034 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30d540ae-51c2-421b-88e8-09f8ab24af89-utilities\") pod \"30d540ae-51c2-421b-88e8-09f8ab24af89\" (UID: \"30d540ae-51c2-421b-88e8-09f8ab24af89\") " Dec 09 13:25:14 crc kubenswrapper[4970]: I1209 13:25:14.344542 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30d540ae-51c2-421b-88e8-09f8ab24af89-utilities" (OuterVolumeSpecName: "utilities") pod "30d540ae-51c2-421b-88e8-09f8ab24af89" (UID: "30d540ae-51c2-421b-88e8-09f8ab24af89"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:25:14 crc kubenswrapper[4970]: I1209 13:25:14.345337 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30d540ae-51c2-421b-88e8-09f8ab24af89-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 13:25:14 crc kubenswrapper[4970]: I1209 13:25:14.351920 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30d540ae-51c2-421b-88e8-09f8ab24af89-kube-api-access-xmvmk" (OuterVolumeSpecName: "kube-api-access-xmvmk") pod "30d540ae-51c2-421b-88e8-09f8ab24af89" (UID: "30d540ae-51c2-421b-88e8-09f8ab24af89"). InnerVolumeSpecName "kube-api-access-xmvmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:25:14 crc kubenswrapper[4970]: I1209 13:25:14.402755 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2h8bq" event={"ID":"30d540ae-51c2-421b-88e8-09f8ab24af89","Type":"ContainerDied","Data":"845daaf3b8dc4b7b9ad4ca1f5b6d860430911123b4557729a739d9f14f78b4a5"} Dec 09 13:25:14 crc kubenswrapper[4970]: I1209 13:25:14.402821 4970 scope.go:117] "RemoveContainer" containerID="cd241cfee30772da59444403b1e40107f793c198fa2e8c0432371a5017157f5e" Dec 09 13:25:14 crc kubenswrapper[4970]: I1209 13:25:14.402831 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2h8bq" Dec 09 13:25:14 crc kubenswrapper[4970]: I1209 13:25:14.447726 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmvmk\" (UniqueName: \"kubernetes.io/projected/30d540ae-51c2-421b-88e8-09f8ab24af89-kube-api-access-xmvmk\") on node \"crc\" DevicePath \"\"" Dec 09 13:25:14 crc kubenswrapper[4970]: I1209 13:25:14.448836 4970 scope.go:117] "RemoveContainer" containerID="57e75a7cbc884a37ae6902813b9b022eeec9ee1e03fe1c40974eee1bce417094" Dec 09 13:25:14 crc kubenswrapper[4970]: I1209 13:25:14.459737 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30d540ae-51c2-421b-88e8-09f8ab24af89-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "30d540ae-51c2-421b-88e8-09f8ab24af89" (UID: "30d540ae-51c2-421b-88e8-09f8ab24af89"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:25:14 crc kubenswrapper[4970]: I1209 13:25:14.480026 4970 scope.go:117] "RemoveContainer" containerID="13db9125f852997306965a7059406547cc9437b8da084a2b13fc785eae0423c8" Dec 09 13:25:14 crc kubenswrapper[4970]: I1209 13:25:14.550364 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30d540ae-51c2-421b-88e8-09f8ab24af89-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 13:25:14 crc kubenswrapper[4970]: I1209 13:25:14.747733 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2h8bq"] Dec 09 13:25:14 crc kubenswrapper[4970]: I1209 13:25:14.762349 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2h8bq"] Dec 09 13:25:15 crc kubenswrapper[4970]: I1209 13:25:15.826548 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30d540ae-51c2-421b-88e8-09f8ab24af89" path="/var/lib/kubelet/pods/30d540ae-51c2-421b-88e8-09f8ab24af89/volumes" Dec 09 13:25:17 crc kubenswrapper[4970]: E1209 13:25:17.829341 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:25:18 crc kubenswrapper[4970]: E1209 13:25:18.821099 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:25:29 crc kubenswrapper[4970]: E1209 13:25:29.816895 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:25:33 crc kubenswrapper[4970]: E1209 13:25:33.816711 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:25:40 crc kubenswrapper[4970]: E1209 13:25:40.815206 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:25:41 crc kubenswrapper[4970]: I1209 13:25:41.867485 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-58rtw"] Dec 09 13:25:41 crc kubenswrapper[4970]: E1209 13:25:41.868782 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d540ae-51c2-421b-88e8-09f8ab24af89" containerName="registry-server" Dec 09 13:25:41 crc kubenswrapper[4970]: I1209 13:25:41.868809 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d540ae-51c2-421b-88e8-09f8ab24af89" containerName="registry-server" Dec 09 13:25:41 crc kubenswrapper[4970]: E1209 13:25:41.868852 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d540ae-51c2-421b-88e8-09f8ab24af89" containerName="extract-utilities" Dec 09 13:25:41 crc kubenswrapper[4970]: I1209 13:25:41.868862 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d540ae-51c2-421b-88e8-09f8ab24af89" containerName="extract-utilities" Dec 09 13:25:41 crc kubenswrapper[4970]: E1209 13:25:41.868884 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30d540ae-51c2-421b-88e8-09f8ab24af89" containerName="extract-content" Dec 09 13:25:41 crc kubenswrapper[4970]: I1209 13:25:41.868893 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="30d540ae-51c2-421b-88e8-09f8ab24af89" containerName="extract-content" Dec 09 13:25:41 crc kubenswrapper[4970]: I1209 13:25:41.874343 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="30d540ae-51c2-421b-88e8-09f8ab24af89" containerName="registry-server" Dec 09 13:25:41 crc kubenswrapper[4970]: I1209 13:25:41.877209 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-58rtw"] Dec 09 13:25:41 crc kubenswrapper[4970]: I1209 13:25:41.877332 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-58rtw" Dec 09 13:25:41 crc kubenswrapper[4970]: I1209 13:25:41.996515 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67768cdd-3599-4878-9a8e-241302142cef-catalog-content\") pod \"certified-operators-58rtw\" (UID: \"67768cdd-3599-4878-9a8e-241302142cef\") " pod="openshift-marketplace/certified-operators-58rtw" Dec 09 13:25:41 crc kubenswrapper[4970]: I1209 13:25:41.996716 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rs9n\" (UniqueName: \"kubernetes.io/projected/67768cdd-3599-4878-9a8e-241302142cef-kube-api-access-7rs9n\") pod \"certified-operators-58rtw\" (UID: \"67768cdd-3599-4878-9a8e-241302142cef\") " pod="openshift-marketplace/certified-operators-58rtw" Dec 09 13:25:41 crc kubenswrapper[4970]: I1209 13:25:41.996763 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67768cdd-3599-4878-9a8e-241302142cef-utilities\") pod \"certified-operators-58rtw\" (UID: \"67768cdd-3599-4878-9a8e-241302142cef\") " pod="openshift-marketplace/certified-operators-58rtw" Dec 09 13:25:42 crc kubenswrapper[4970]: I1209 13:25:42.098664 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67768cdd-3599-4878-9a8e-241302142cef-utilities\") pod \"certified-operators-58rtw\" (UID: \"67768cdd-3599-4878-9a8e-241302142cef\") " pod="openshift-marketplace/certified-operators-58rtw" Dec 09 13:25:42 crc kubenswrapper[4970]: I1209 13:25:42.098840 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67768cdd-3599-4878-9a8e-241302142cef-catalog-content\") pod \"certified-operators-58rtw\" (UID: \"67768cdd-3599-4878-9a8e-241302142cef\") " pod="openshift-marketplace/certified-operators-58rtw" Dec 09 13:25:42 crc kubenswrapper[4970]: I1209 13:25:42.098999 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rs9n\" (UniqueName: \"kubernetes.io/projected/67768cdd-3599-4878-9a8e-241302142cef-kube-api-access-7rs9n\") pod \"certified-operators-58rtw\" (UID: \"67768cdd-3599-4878-9a8e-241302142cef\") " pod="openshift-marketplace/certified-operators-58rtw" Dec 09 13:25:42 crc kubenswrapper[4970]: I1209 13:25:42.099895 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67768cdd-3599-4878-9a8e-241302142cef-utilities\") pod \"certified-operators-58rtw\" (UID: \"67768cdd-3599-4878-9a8e-241302142cef\") " pod="openshift-marketplace/certified-operators-58rtw" Dec 09 13:25:42 crc kubenswrapper[4970]: I1209 13:25:42.100191 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67768cdd-3599-4878-9a8e-241302142cef-catalog-content\") pod \"certified-operators-58rtw\" (UID: \"67768cdd-3599-4878-9a8e-241302142cef\") " pod="openshift-marketplace/certified-operators-58rtw" Dec 09 13:25:42 crc kubenswrapper[4970]: I1209 13:25:42.557316 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rs9n\" (UniqueName: \"kubernetes.io/projected/67768cdd-3599-4878-9a8e-241302142cef-kube-api-access-7rs9n\") pod 
\"certified-operators-58rtw\" (UID: \"67768cdd-3599-4878-9a8e-241302142cef\") " pod="openshift-marketplace/certified-operators-58rtw" Dec 09 13:25:42 crc kubenswrapper[4970]: I1209 13:25:42.812653 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-58rtw" Dec 09 13:25:43 crc kubenswrapper[4970]: I1209 13:25:43.319817 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-58rtw"] Dec 09 13:25:43 crc kubenswrapper[4970]: I1209 13:25:43.803503 4970 generic.go:334] "Generic (PLEG): container finished" podID="67768cdd-3599-4878-9a8e-241302142cef" containerID="a3ee14efaf8e3d98c981f4972a5268eef445ea055689e6bac95d02daff227d87" exitCode=0 Dec 09 13:25:43 crc kubenswrapper[4970]: I1209 13:25:43.803801 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58rtw" event={"ID":"67768cdd-3599-4878-9a8e-241302142cef","Type":"ContainerDied","Data":"a3ee14efaf8e3d98c981f4972a5268eef445ea055689e6bac95d02daff227d87"} Dec 09 13:25:43 crc kubenswrapper[4970]: I1209 13:25:43.804045 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58rtw" event={"ID":"67768cdd-3599-4878-9a8e-241302142cef","Type":"ContainerStarted","Data":"562f6bb91eb3da86186af229c81f506704a8251a5123ae0d3edaac3ce616497e"} Dec 09 13:25:44 crc kubenswrapper[4970]: E1209 13:25:44.816279 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:25:45 crc kubenswrapper[4970]: I1209 13:25:45.836350 4970 generic.go:334] "Generic (PLEG): container finished" podID="67768cdd-3599-4878-9a8e-241302142cef" containerID="39e216b7a3a091c3efd96b6109052fda248917155dd581dfd5afff394ccecc5c" exitCode=0 Dec 09 13:25:45 crc kubenswrapper[4970]: I1209 13:25:45.836444 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58rtw" event={"ID":"67768cdd-3599-4878-9a8e-241302142cef","Type":"ContainerDied","Data":"39e216b7a3a091c3efd96b6109052fda248917155dd581dfd5afff394ccecc5c"} Dec 09 13:25:46 crc kubenswrapper[4970]: I1209 13:25:46.853231 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58rtw" event={"ID":"67768cdd-3599-4878-9a8e-241302142cef","Type":"ContainerStarted","Data":"0bf4cd6c41c0c9a4b6f8e4c046741c7f0edddcdf0acb29cca3fcb384eed7ba72"} Dec 09 13:25:46 crc kubenswrapper[4970]: I1209 13:25:46.883505 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-58rtw" podStartSLOduration=3.465386211 podStartE2EDuration="5.883482426s" podCreationTimestamp="2025-12-09 13:25:41 +0000 UTC" firstStartedPulling="2025-12-09 13:25:43.806849769 +0000 UTC m=+4756.367330860" lastFinishedPulling="2025-12-09 13:25:46.224946024 +0000 UTC m=+4758.785427075" observedRunningTime="2025-12-09 13:25:46.870356673 +0000 UTC m=+4759.430837754" watchObservedRunningTime="2025-12-09 13:25:46.883482426 +0000 UTC m=+4759.443963487" Dec 09 13:25:52 crc kubenswrapper[4970]: I1209 13:25:52.813163 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-58rtw" Dec 09 13:25:52 crc 
kubenswrapper[4970]: I1209 13:25:52.813838 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-58rtw" Dec 09 13:25:52 crc kubenswrapper[4970]: I1209 13:25:52.889566 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-58rtw" Dec 09 13:25:53 crc kubenswrapper[4970]: I1209 13:25:53.006509 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-58rtw" Dec 09 13:25:53 crc kubenswrapper[4970]: I1209 13:25:53.135669 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-58rtw"] Dec 09 13:25:53 crc kubenswrapper[4970]: E1209 13:25:53.815906 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:25:54 crc kubenswrapper[4970]: I1209 13:25:54.963739 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-58rtw" podUID="67768cdd-3599-4878-9a8e-241302142cef" containerName="registry-server" containerID="cri-o://0bf4cd6c41c0c9a4b6f8e4c046741c7f0edddcdf0acb29cca3fcb384eed7ba72" gracePeriod=2 Dec 09 13:25:55 crc kubenswrapper[4970]: I1209 13:25:55.978229 4970 generic.go:334] "Generic (PLEG): container finished" podID="67768cdd-3599-4878-9a8e-241302142cef" containerID="0bf4cd6c41c0c9a4b6f8e4c046741c7f0edddcdf0acb29cca3fcb384eed7ba72" exitCode=0 Dec 09 13:25:55 crc kubenswrapper[4970]: I1209 13:25:55.978571 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58rtw" event={"ID":"67768cdd-3599-4878-9a8e-241302142cef","Type":"ContainerDied","Data":"0bf4cd6c41c0c9a4b6f8e4c046741c7f0edddcdf0acb29cca3fcb384eed7ba72"} Dec 09 13:25:56 crc kubenswrapper[4970]: I1209 13:25:56.306967 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-58rtw" Dec 09 13:25:56 crc kubenswrapper[4970]: I1209 13:25:56.484056 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67768cdd-3599-4878-9a8e-241302142cef-utilities\") pod \"67768cdd-3599-4878-9a8e-241302142cef\" (UID: \"67768cdd-3599-4878-9a8e-241302142cef\") " Dec 09 13:25:56 crc kubenswrapper[4970]: I1209 13:25:56.484582 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rs9n\" (UniqueName: \"kubernetes.io/projected/67768cdd-3599-4878-9a8e-241302142cef-kube-api-access-7rs9n\") pod \"67768cdd-3599-4878-9a8e-241302142cef\" (UID: \"67768cdd-3599-4878-9a8e-241302142cef\") " Dec 09 13:25:56 crc kubenswrapper[4970]: I1209 13:25:56.484852 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67768cdd-3599-4878-9a8e-241302142cef-catalog-content\") pod \"67768cdd-3599-4878-9a8e-241302142cef\" (UID: \"67768cdd-3599-4878-9a8e-241302142cef\") " Dec 09 13:25:56 crc kubenswrapper[4970]: I1209 13:25:56.485543 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67768cdd-3599-4878-9a8e-241302142cef-utilities" (OuterVolumeSpecName: "utilities") pod "67768cdd-3599-4878-9a8e-241302142cef" (UID: "67768cdd-3599-4878-9a8e-241302142cef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:25:56 crc kubenswrapper[4970]: I1209 13:25:56.485734 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67768cdd-3599-4878-9a8e-241302142cef-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 13:25:56 crc kubenswrapper[4970]: I1209 13:25:56.497170 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67768cdd-3599-4878-9a8e-241302142cef-kube-api-access-7rs9n" (OuterVolumeSpecName: "kube-api-access-7rs9n") pod "67768cdd-3599-4878-9a8e-241302142cef" (UID: "67768cdd-3599-4878-9a8e-241302142cef"). InnerVolumeSpecName "kube-api-access-7rs9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:25:56 crc kubenswrapper[4970]: I1209 13:25:56.558785 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67768cdd-3599-4878-9a8e-241302142cef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67768cdd-3599-4878-9a8e-241302142cef" (UID: "67768cdd-3599-4878-9a8e-241302142cef"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:25:56 crc kubenswrapper[4970]: I1209 13:25:56.588308 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67768cdd-3599-4878-9a8e-241302142cef-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 13:25:56 crc kubenswrapper[4970]: I1209 13:25:56.588347 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rs9n\" (UniqueName: \"kubernetes.io/projected/67768cdd-3599-4878-9a8e-241302142cef-kube-api-access-7rs9n\") on node \"crc\" DevicePath \"\"" Dec 09 13:25:56 crc kubenswrapper[4970]: I1209 13:25:56.989423 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58rtw" event={"ID":"67768cdd-3599-4878-9a8e-241302142cef","Type":"ContainerDied","Data":"562f6bb91eb3da86186af229c81f506704a8251a5123ae0d3edaac3ce616497e"} Dec 09 13:25:56 crc kubenswrapper[4970]: I1209 13:25:56.989485 4970 scope.go:117] "RemoveContainer" containerID="0bf4cd6c41c0c9a4b6f8e4c046741c7f0edddcdf0acb29cca3fcb384eed7ba72" Dec 09 13:25:56 crc kubenswrapper[4970]: I1209 13:25:56.989509 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-58rtw" Dec 09 13:25:57 crc kubenswrapper[4970]: I1209 13:25:57.023965 4970 scope.go:117] "RemoveContainer" containerID="39e216b7a3a091c3efd96b6109052fda248917155dd581dfd5afff394ccecc5c" Dec 09 13:25:57 crc kubenswrapper[4970]: I1209 13:25:57.026651 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-58rtw"] Dec 09 13:25:57 crc kubenswrapper[4970]: I1209 13:25:57.037457 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-58rtw"] Dec 09 13:25:57 crc kubenswrapper[4970]: I1209 13:25:57.068561 4970 scope.go:117] "RemoveContainer" containerID="a3ee14efaf8e3d98c981f4972a5268eef445ea055689e6bac95d02daff227d87" Dec 09 13:25:57 crc kubenswrapper[4970]: E1209 13:25:57.824487 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:25:57 crc kubenswrapper[4970]: I1209 13:25:57.836630 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67768cdd-3599-4878-9a8e-241302142cef" path="/var/lib/kubelet/pods/67768cdd-3599-4878-9a8e-241302142cef/volumes" Dec 09 13:26:04 crc kubenswrapper[4970]: E1209 13:26:04.817135 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:26:12 crc kubenswrapper[4970]: E1209 13:26:12.816192 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:26:16 crc kubenswrapper[4970]: I1209 13:26:16.010627 4970 
patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 13:26:16 crc kubenswrapper[4970]: I1209 13:26:16.012330 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 13:26:16 crc kubenswrapper[4970]: E1209 13:26:16.815514 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:26:25 crc kubenswrapper[4970]: E1209 13:26:25.817057 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:26:29 crc kubenswrapper[4970]: E1209 13:26:29.816798 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:26:39 crc kubenswrapper[4970]: E1209 13:26:39.830830 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:26:40 crc kubenswrapper[4970]: E1209 13:26:40.815969 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:26:42 crc kubenswrapper[4970]: I1209 13:26:42.811091 4970 scope.go:117] "RemoveContainer" containerID="67d7f41372190f79b2b66667a4ac5697bfad48c169004cb8070e9958062a5d37" Dec 09 13:26:42 crc kubenswrapper[4970]: I1209 13:26:42.837473 4970 scope.go:117] "RemoveContainer" containerID="1202525636b70060406d71ac954ac323b2635088c9733b982c396962bbe9f022" Dec 09 13:26:43 crc kubenswrapper[4970]: I1209 13:26:43.132491 4970 scope.go:117] "RemoveContainer" containerID="14efec2e18f0ec68c5f81f1c67d2d9b9a176fe668aaa0e924864edd80252860a" Dec 09 13:26:46 crc kubenswrapper[4970]: I1209 13:26:46.011131 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Dec 09 13:26:46 crc kubenswrapper[4970]: I1209 13:26:46.011762 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 13:26:53 crc kubenswrapper[4970]: E1209 13:26:53.816220 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:26:54 crc kubenswrapper[4970]: E1209 13:26:54.815502 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:27:02 crc kubenswrapper[4970]: I1209 13:27:02.042065 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xcrtd"] Dec 09 13:27:02 crc kubenswrapper[4970]: E1209 13:27:02.043184 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67768cdd-3599-4878-9a8e-241302142cef" containerName="extract-utilities" Dec 09 13:27:02 crc kubenswrapper[4970]: I1209 13:27:02.043201 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="67768cdd-3599-4878-9a8e-241302142cef" containerName="extract-utilities" Dec 09 13:27:02 crc kubenswrapper[4970]: E1209 13:27:02.043216 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67768cdd-3599-4878-9a8e-241302142cef" containerName="registry-server" Dec 09 13:27:02 crc kubenswrapper[4970]: I1209 13:27:02.043226 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="67768cdd-3599-4878-9a8e-241302142cef" containerName="registry-server" Dec 09 13:27:02 crc kubenswrapper[4970]: E1209 13:27:02.043270 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67768cdd-3599-4878-9a8e-241302142cef" containerName="extract-content" Dec 09 13:27:02 crc kubenswrapper[4970]: I1209 13:27:02.043279 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="67768cdd-3599-4878-9a8e-241302142cef" containerName="extract-content" Dec 09 13:27:02 crc kubenswrapper[4970]: I1209 13:27:02.043674 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="67768cdd-3599-4878-9a8e-241302142cef" containerName="registry-server" Dec 09 13:27:02 crc kubenswrapper[4970]: I1209 13:27:02.044786 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xcrtd" Dec 09 13:27:02 crc kubenswrapper[4970]: I1209 13:27:02.048994 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 13:27:02 crc kubenswrapper[4970]: I1209 13:27:02.049607 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 13:27:02 crc kubenswrapper[4970]: I1209 13:27:02.050007 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-2x2z5" Dec 09 13:27:02 crc kubenswrapper[4970]: I1209 13:27:02.050241 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 13:27:02 crc kubenswrapper[4970]: I1209 13:27:02.052992 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xcrtd"] Dec 09 13:27:02 crc kubenswrapper[4970]: I1209 13:27:02.098682 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e9c3ade-02cb-4788-a63c-78e036ff9ede-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xcrtd\" (UID: \"0e9c3ade-02cb-4788-a63c-78e036ff9ede\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xcrtd" Dec 09 13:27:02 crc kubenswrapper[4970]: I1209 13:27:02.098971 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e9c3ade-02cb-4788-a63c-78e036ff9ede-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xcrtd\" (UID: \"0e9c3ade-02cb-4788-a63c-78e036ff9ede\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xcrtd" Dec 09 13:27:02 crc kubenswrapper[4970]: I1209 13:27:02.099186 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqc55\" (UniqueName: \"kubernetes.io/projected/0e9c3ade-02cb-4788-a63c-78e036ff9ede-kube-api-access-dqc55\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xcrtd\" (UID: \"0e9c3ade-02cb-4788-a63c-78e036ff9ede\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xcrtd" Dec 09 13:27:02 crc kubenswrapper[4970]: I1209 13:27:02.201650 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqc55\" (UniqueName: \"kubernetes.io/projected/0e9c3ade-02cb-4788-a63c-78e036ff9ede-kube-api-access-dqc55\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xcrtd\" (UID: \"0e9c3ade-02cb-4788-a63c-78e036ff9ede\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xcrtd" Dec 09 13:27:02 crc kubenswrapper[4970]: I1209 13:27:02.201892 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e9c3ade-02cb-4788-a63c-78e036ff9ede-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xcrtd\" (UID: \"0e9c3ade-02cb-4788-a63c-78e036ff9ede\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xcrtd" Dec 09 13:27:02 crc kubenswrapper[4970]: I1209 13:27:02.201917 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e9c3ade-02cb-4788-a63c-78e036ff9ede-ssh-key\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-xcrtd\" (UID: \"0e9c3ade-02cb-4788-a63c-78e036ff9ede\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xcrtd" Dec 09 13:27:02 crc kubenswrapper[4970]: I1209 13:27:02.210170 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e9c3ade-02cb-4788-a63c-78e036ff9ede-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xcrtd\" (UID: \"0e9c3ade-02cb-4788-a63c-78e036ff9ede\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xcrtd" Dec 09 13:27:02 crc kubenswrapper[4970]: I1209 13:27:02.210999 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e9c3ade-02cb-4788-a63c-78e036ff9ede-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xcrtd\" (UID: \"0e9c3ade-02cb-4788-a63c-78e036ff9ede\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xcrtd" Dec 09 13:27:02 crc kubenswrapper[4970]: I1209 13:27:02.221326 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqc55\" (UniqueName: \"kubernetes.io/projected/0e9c3ade-02cb-4788-a63c-78e036ff9ede-kube-api-access-dqc55\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xcrtd\" (UID: \"0e9c3ade-02cb-4788-a63c-78e036ff9ede\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xcrtd" Dec 09 13:27:02 crc kubenswrapper[4970]: I1209 13:27:02.387149 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xcrtd" Dec 09 13:27:03 crc kubenswrapper[4970]: I1209 13:27:03.029429 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xcrtd"] Dec 09 13:27:03 crc kubenswrapper[4970]: I1209 13:27:03.192492 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xcrtd" event={"ID":"0e9c3ade-02cb-4788-a63c-78e036ff9ede","Type":"ContainerStarted","Data":"e82cc0b782ac527dfd144edf5ba82545114574bf142af3d798624788c3f9b7b4"} Dec 09 13:27:04 crc kubenswrapper[4970]: I1209 13:27:04.218181 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xcrtd" event={"ID":"0e9c3ade-02cb-4788-a63c-78e036ff9ede","Type":"ContainerStarted","Data":"59fb0996959ebd0ae93fa94201e3615e6aa8f451fd8b2d9013fe4ccc768b9af6"} Dec 09 13:27:04 crc kubenswrapper[4970]: I1209 13:27:04.243061 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xcrtd" podStartSLOduration=1.75386946 podStartE2EDuration="2.243042204s" podCreationTimestamp="2025-12-09 13:27:02 +0000 UTC" firstStartedPulling="2025-12-09 13:27:03.042399364 +0000 UTC m=+4835.602880455" lastFinishedPulling="2025-12-09 13:27:03.531572138 +0000 UTC m=+4836.092053199" observedRunningTime="2025-12-09 13:27:04.237619258 +0000 UTC m=+4836.798100349" watchObservedRunningTime="2025-12-09 13:27:04.243042204 +0000 UTC m=+4836.803523255" Dec 09 13:27:06 crc kubenswrapper[4970]: E1209 13:27:06.816349 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:27:08 crc kubenswrapper[4970]: E1209 13:27:08.815094 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:27:16 crc kubenswrapper[4970]: I1209 13:27:16.011308 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 13:27:16 crc kubenswrapper[4970]: I1209 13:27:16.011963 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 13:27:16 crc kubenswrapper[4970]: I1209 13:27:16.012008 4970 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" Dec 09 13:27:16 crc kubenswrapper[4970]: I1209 13:27:16.013299 4970 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1881639a78781bcbfa52ff00bf5db8c4a9ec0f40874f26ed2176e2197d3b24e2"} pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 13:27:16 crc kubenswrapper[4970]: I1209 13:27:16.013547 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" containerID="cri-o://1881639a78781bcbfa52ff00bf5db8c4a9ec0f40874f26ed2176e2197d3b24e2" gracePeriod=600 Dec 09 13:27:16 crc kubenswrapper[4970]: I1209 13:27:16.376963 4970 generic.go:334] "Generic (PLEG): container finished" podID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerID="1881639a78781bcbfa52ff00bf5db8c4a9ec0f40874f26ed2176e2197d3b24e2" exitCode=0 Dec 09 13:27:16 crc kubenswrapper[4970]: I1209 13:27:16.377117 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerDied","Data":"1881639a78781bcbfa52ff00bf5db8c4a9ec0f40874f26ed2176e2197d3b24e2"} Dec 09 13:27:16 crc kubenswrapper[4970]: I1209 13:27:16.377288 4970 scope.go:117] "RemoveContainer" containerID="336872a9cff893b34e07ba7a020e9f30d9d1d6d2a855427e26f26eb12b833ad2" Dec 09 13:27:17 crc kubenswrapper[4970]: I1209 13:27:17.404085 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerStarted","Data":"1c4bed9ff93fb37d4e1875bb5924dcc9902f12c57dd22616a64ab19553b2dcbd"} Dec 09 13:27:21 crc kubenswrapper[4970]: E1209 13:27:21.815413 4970 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:27:23 crc kubenswrapper[4970]: E1209 13:27:23.816785 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:27:35 crc kubenswrapper[4970]: E1209 13:27:35.815916 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:27:37 crc kubenswrapper[4970]: E1209 13:27:37.827746 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:27:47 crc kubenswrapper[4970]: E1209 13:27:47.815552 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:27:50 crc kubenswrapper[4970]: E1209 13:27:50.815530 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:28:01 crc kubenswrapper[4970]: E1209 13:28:01.819155 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:28:04 crc kubenswrapper[4970]: E1209 13:28:04.814049 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:28:13 crc kubenswrapper[4970]: E1209 13:28:13.851707 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 
Dec 09 13:28:15 crc kubenswrapper[4970]: E1209 13:28:15.816040 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3"
Dec 09 13:28:24 crc kubenswrapper[4970]: E1209 13:28:24.814176 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8"
Dec 09 13:28:30 crc kubenswrapper[4970]: E1209 13:28:30.815651 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3"
Dec 09 13:28:37 crc kubenswrapper[4970]: E1209 13:28:37.822955 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8"
Dec 09 13:28:44 crc kubenswrapper[4970]: E1209 13:28:44.814789 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3"
Dec 09 13:28:48 crc kubenswrapper[4970]: E1209 13:28:48.816294 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8"
Dec 09 13:28:57 crc kubenswrapper[4970]: E1209 13:28:57.830784 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3"
Dec 09 13:29:01 crc kubenswrapper[4970]: E1209 13:29:01.815417 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8"
Dec 09 13:29:11 crc kubenswrapper[4970]: I1209 13:29:11.815122 4970 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 09 13:29:11 crc kubenswrapper[4970]: E1209 13:29:11.955505 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Dec 09 13:29:11 crc kubenswrapper[4970]: E1209 13:29:11.955576 4970 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested"
Dec 09 13:29:11 crc kubenswrapper[4970]: E1209 13:29:11.955718 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w5gl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-snjvx_openstack(cda5b8c0-51ad-4a63-bf1d-e3546f098ad3): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Dec 09 13:29:11 crc kubenswrapper[4970]: E1209 13:29:11.957215 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3"
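[Editor's note] Both stuck pods share one root cause: the current-tested tag was pruned on the registry side, so every pull ends in ErrImagePull and the kubelet falls back to ImagePullBackOff. A quick way to confirm this is registry-side rather than node-side is to ask the registry's v2 API for the manifest directly. A minimal sketch, assuming anonymous pull access to quay.rdoproject.org (a 401 response would mean a bearer token is needed first):

    # Ask the registry for the manifest the kubelet failed to read.
    # A 404 with a MANIFEST_UNKNOWN-style error body confirms the tag is gone.
    import requests

    REGISTRY = "https://quay.rdoproject.org"
    REPO = "podified-master-centos10/openstack-heat-engine"
    TAG = "current-tested"

    resp = requests.get(
        f"{REGISTRY}/v2/{REPO}/manifests/{TAG}",
        headers={"Accept": "application/vnd.oci.image.manifest.v1+json, "
                           "application/vnd.docker.distribution.manifest.v2+json"},
        timeout=10,
    )
    print(resp.status_code)      # expect 404 for a deleted/expired tag
    print(resp.text[:200])       # registry error detail, if any

The "revive via time machine" hint in the error text refers to Quay's tag time machine, which can restore a recently deleted or expired tag from the repository's tag history.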
Dec 09 13:29:16 crc kubenswrapper[4970]: I1209 13:29:16.011272 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 13:29:16 crc kubenswrapper[4970]: I1209 13:29:16.011849 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 13:29:16 crc kubenswrapper[4970]: E1209 13:29:16.819300 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8"
Dec 09 13:29:26 crc kubenswrapper[4970]: E1209 13:29:26.816377 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3"
Dec 09 13:29:30 crc kubenswrapper[4970]: E1209 13:29:30.816716 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8"
Dec 09 13:29:38 crc kubenswrapper[4970]: I1209 13:29:38.002351 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j86xc"]
Dec 09 13:29:38 crc kubenswrapper[4970]: I1209 13:29:38.005618 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j86xc"
Dec 09 13:29:38 crc kubenswrapper[4970]: I1209 13:29:38.023925 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j86xc"]
Dec 09 13:29:38 crc kubenswrapper[4970]: I1209 13:29:38.186977 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97n4t\" (UniqueName: \"kubernetes.io/projected/366ca468-074f-48ea-a339-8b033f899eb2-kube-api-access-97n4t\") pod \"community-operators-j86xc\" (UID: \"366ca468-074f-48ea-a339-8b033f899eb2\") " pod="openshift-marketplace/community-operators-j86xc"
Dec 09 13:29:38 crc kubenswrapper[4970]: I1209 13:29:38.187139 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/366ca468-074f-48ea-a339-8b033f899eb2-catalog-content\") pod \"community-operators-j86xc\" (UID: \"366ca468-074f-48ea-a339-8b033f899eb2\") " pod="openshift-marketplace/community-operators-j86xc"
Dec 09 13:29:38 crc kubenswrapper[4970]: I1209 13:29:38.187204 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/366ca468-074f-48ea-a339-8b033f899eb2-utilities\") pod \"community-operators-j86xc\" (UID: \"366ca468-074f-48ea-a339-8b033f899eb2\") " pod="openshift-marketplace/community-operators-j86xc"
Dec 09 13:29:38 crc kubenswrapper[4970]: I1209 13:29:38.290164 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/366ca468-074f-48ea-a339-8b033f899eb2-catalog-content\") pod \"community-operators-j86xc\" (UID: \"366ca468-074f-48ea-a339-8b033f899eb2\") " pod="openshift-marketplace/community-operators-j86xc"
Dec 09 13:29:38 crc kubenswrapper[4970]: I1209 13:29:38.290263 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/366ca468-074f-48ea-a339-8b033f899eb2-utilities\") pod \"community-operators-j86xc\" (UID: \"366ca468-074f-48ea-a339-8b033f899eb2\") " pod="openshift-marketplace/community-operators-j86xc"
Dec 09 13:29:38 crc kubenswrapper[4970]: I1209 13:29:38.290420 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97n4t\" (UniqueName: \"kubernetes.io/projected/366ca468-074f-48ea-a339-8b033f899eb2-kube-api-access-97n4t\") pod \"community-operators-j86xc\" (UID: \"366ca468-074f-48ea-a339-8b033f899eb2\") " pod="openshift-marketplace/community-operators-j86xc"
Dec 09 13:29:38 crc kubenswrapper[4970]: I1209 13:29:38.290651 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/366ca468-074f-48ea-a339-8b033f899eb2-catalog-content\") pod \"community-operators-j86xc\" (UID: \"366ca468-074f-48ea-a339-8b033f899eb2\") " pod="openshift-marketplace/community-operators-j86xc"
Dec 09 13:29:38 crc kubenswrapper[4970]: I1209 13:29:38.290883 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/366ca468-074f-48ea-a339-8b033f899eb2-utilities\") pod \"community-operators-j86xc\" (UID: \"366ca468-074f-48ea-a339-8b033f899eb2\") " pod="openshift-marketplace/community-operators-j86xc"
Dec 09 13:29:38 crc kubenswrapper[4970]: I1209 13:29:38.333526 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97n4t\" (UniqueName: \"kubernetes.io/projected/366ca468-074f-48ea-a339-8b033f899eb2-kube-api-access-97n4t\") pod \"community-operators-j86xc\" (UID: \"366ca468-074f-48ea-a339-8b033f899eb2\") " pod="openshift-marketplace/community-operators-j86xc"
Dec 09 13:29:38 crc kubenswrapper[4970]: I1209 13:29:38.377362 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j86xc"
Dec 09 13:29:38 crc kubenswrapper[4970]: I1209 13:29:38.973593 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j86xc"]
Dec 09 13:29:39 crc kubenswrapper[4970]: I1209 13:29:39.281747 4970 generic.go:334] "Generic (PLEG): container finished" podID="366ca468-074f-48ea-a339-8b033f899eb2" containerID="92599d0ed314702437c9148095a562142522742df76b1f6b774e2377368faba6" exitCode=0
Dec 09 13:29:39 crc kubenswrapper[4970]: I1209 13:29:39.281947 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j86xc" event={"ID":"366ca468-074f-48ea-a339-8b033f899eb2","Type":"ContainerDied","Data":"92599d0ed314702437c9148095a562142522742df76b1f6b774e2377368faba6"}
Dec 09 13:29:39 crc kubenswrapper[4970]: I1209 13:29:39.283095 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j86xc" event={"ID":"366ca468-074f-48ea-a339-8b033f899eb2","Type":"ContainerStarted","Data":"b097c378b06ec108ad0fe2b4b3328f39cc15f202b6ec73f2abd3932b81b1ac11"}
Dec 09 13:29:40 crc kubenswrapper[4970]: I1209 13:29:40.303426 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j86xc" event={"ID":"366ca468-074f-48ea-a339-8b033f899eb2","Type":"ContainerStarted","Data":"3fe4f59446d944ec5f3d2907a85b7c95d6cf0fed4c0ae1c3c9fec3f1df03c342"}
Dec 09 13:29:40 crc kubenswrapper[4970]: E1209 13:29:40.817079 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3"
Dec 09 13:29:42 crc kubenswrapper[4970]: I1209 13:29:42.352184 4970 generic.go:334] "Generic (PLEG): container finished" podID="366ca468-074f-48ea-a339-8b033f899eb2" containerID="3fe4f59446d944ec5f3d2907a85b7c95d6cf0fed4c0ae1c3c9fec3f1df03c342" exitCode=0
Dec 09 13:29:42 crc kubenswrapper[4970]: I1209 13:29:42.352280 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j86xc" event={"ID":"366ca468-074f-48ea-a339-8b033f899eb2","Type":"ContainerDied","Data":"3fe4f59446d944ec5f3d2907a85b7c95d6cf0fed4c0ae1c3c9fec3f1df03c342"}
Dec 09 13:29:43 crc kubenswrapper[4970]: E1209 13:29:43.910816 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Dec 09 13:29:43 crc kubenswrapper[4970]: E1209 13:29:43.911612 4970 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested"
Dec 09 13:29:43 crc kubenswrapper[4970]: E1209 13:29:43.911800 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n679h5b7h5f8h695h656h57bh5c7h7bh5c5hc4h5hbch654h5h5b7h54dh5fh688h548h57bh6fh54fh5d4hf7h5ffh54ch566h565hdchb4h65dh5fbq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqqs9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ea52f6b9-599e-4ac5-94c6-79949c705be8): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Dec 09 13:29:43 crc kubenswrapper[4970]: E1209 13:29:43.913152 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8"
Dec 09 13:29:44 crc kubenswrapper[4970]: I1209 13:29:44.399405 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j86xc" event={"ID":"366ca468-074f-48ea-a339-8b033f899eb2","Type":"ContainerStarted","Data":"cd2b1a1e0409a1efe10425f247e0c843045c609ee69455544b0360a3fe31af2c"}
Dec 09 13:29:44 crc kubenswrapper[4970]: I1209 13:29:44.432627 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j86xc" podStartSLOduration=3.469884651 podStartE2EDuration="7.432605713s" podCreationTimestamp="2025-12-09 13:29:37 +0000 UTC" firstStartedPulling="2025-12-09 13:29:39.283763571 +0000 UTC m=+4991.844244622" lastFinishedPulling="2025-12-09 13:29:43.246484603 +0000 UTC m=+4995.806965684" observedRunningTime="2025-12-09 13:29:44.417690302 +0000 UTC m=+4996.978171363" watchObservedRunningTime="2025-12-09 13:29:44.432605713 +0000 UTC m=+4996.993086764"
Dec 09 13:29:46 crc kubenswrapper[4970]: I1209 13:29:46.010772 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 13:29:46 crc kubenswrapper[4970]: I1209 13:29:46.011436 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 13:29:48 crc kubenswrapper[4970]: I1209 13:29:48.377501 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j86xc"
Dec 09 13:29:48 crc kubenswrapper[4970]: I1209 13:29:48.377832 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-j86xc"
Dec 09 13:29:48 crc kubenswrapper[4970]: I1209 13:29:48.439588 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j86xc"
Dec 09 13:29:48 crc kubenswrapper[4970]: I1209 13:29:48.538190 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j86xc"
Dec 09 13:29:51 crc kubenswrapper[4970]: I1209 13:29:51.991219 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j86xc"]
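[Editor's note] The pod_startup_latency_tracker entry above reports two durations that can be re-derived from its own fields: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration matches E2E minus the image-pull interval, with the pull interval taken from the monotonic clock (the m=+... offsets). That subtraction is an inference from the numbers here, not an authoritative statement of the tracker's internals; a quick check:

    # Re-deriving the two durations from the log entry's own timestamps.
    from datetime import datetime, timezone

    creation = datetime(2025, 12, 9, 13, 29, 37, tzinfo=timezone.utc)
    observed = datetime(2025, 12, 9, 13, 29, 44, 432606, tzinfo=timezone.utc)  # truncated to microseconds
    e2e = (observed - creation).total_seconds()
    # pull interval from the monotonic m=+... offsets of first/last pulling:
    pull = 4995.806965684 - 4991.844244622
    print(e2e)         # ~7.432606   (log: podStartE2EDuration="7.432605713s")
    print(e2e - pull)  # ~3.469885   (log: podStartSLOduration=3.469884651)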
containerName="registry-server" containerID="cri-o://cd2b1a1e0409a1efe10425f247e0c843045c609ee69455544b0360a3fe31af2c" gracePeriod=2 Dec 09 13:29:52 crc kubenswrapper[4970]: I1209 13:29:52.514614 4970 generic.go:334] "Generic (PLEG): container finished" podID="366ca468-074f-48ea-a339-8b033f899eb2" containerID="cd2b1a1e0409a1efe10425f247e0c843045c609ee69455544b0360a3fe31af2c" exitCode=0 Dec 09 13:29:52 crc kubenswrapper[4970]: I1209 13:29:52.514700 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j86xc" event={"ID":"366ca468-074f-48ea-a339-8b033f899eb2","Type":"ContainerDied","Data":"cd2b1a1e0409a1efe10425f247e0c843045c609ee69455544b0360a3fe31af2c"} Dec 09 13:29:52 crc kubenswrapper[4970]: I1209 13:29:52.515577 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j86xc" event={"ID":"366ca468-074f-48ea-a339-8b033f899eb2","Type":"ContainerDied","Data":"b097c378b06ec108ad0fe2b4b3328f39cc15f202b6ec73f2abd3932b81b1ac11"} Dec 09 13:29:52 crc kubenswrapper[4970]: I1209 13:29:52.515641 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b097c378b06ec108ad0fe2b4b3328f39cc15f202b6ec73f2abd3932b81b1ac11" Dec 09 13:29:52 crc kubenswrapper[4970]: I1209 13:29:52.548752 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j86xc" Dec 09 13:29:52 crc kubenswrapper[4970]: I1209 13:29:52.676448 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/366ca468-074f-48ea-a339-8b033f899eb2-catalog-content\") pod \"366ca468-074f-48ea-a339-8b033f899eb2\" (UID: \"366ca468-074f-48ea-a339-8b033f899eb2\") " Dec 09 13:29:52 crc kubenswrapper[4970]: I1209 13:29:52.676710 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/366ca468-074f-48ea-a339-8b033f899eb2-utilities\") pod \"366ca468-074f-48ea-a339-8b033f899eb2\" (UID: \"366ca468-074f-48ea-a339-8b033f899eb2\") " Dec 09 13:29:52 crc kubenswrapper[4970]: I1209 13:29:52.676885 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97n4t\" (UniqueName: \"kubernetes.io/projected/366ca468-074f-48ea-a339-8b033f899eb2-kube-api-access-97n4t\") pod \"366ca468-074f-48ea-a339-8b033f899eb2\" (UID: \"366ca468-074f-48ea-a339-8b033f899eb2\") " Dec 09 13:29:52 crc kubenswrapper[4970]: I1209 13:29:52.677520 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/366ca468-074f-48ea-a339-8b033f899eb2-utilities" (OuterVolumeSpecName: "utilities") pod "366ca468-074f-48ea-a339-8b033f899eb2" (UID: "366ca468-074f-48ea-a339-8b033f899eb2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:29:52 crc kubenswrapper[4970]: I1209 13:29:52.677810 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/366ca468-074f-48ea-a339-8b033f899eb2-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 13:29:52 crc kubenswrapper[4970]: I1209 13:29:52.690864 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/366ca468-074f-48ea-a339-8b033f899eb2-kube-api-access-97n4t" (OuterVolumeSpecName: "kube-api-access-97n4t") pod "366ca468-074f-48ea-a339-8b033f899eb2" (UID: "366ca468-074f-48ea-a339-8b033f899eb2"). InnerVolumeSpecName "kube-api-access-97n4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:29:52 crc kubenswrapper[4970]: I1209 13:29:52.723740 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/366ca468-074f-48ea-a339-8b033f899eb2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "366ca468-074f-48ea-a339-8b033f899eb2" (UID: "366ca468-074f-48ea-a339-8b033f899eb2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:29:52 crc kubenswrapper[4970]: I1209 13:29:52.780193 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97n4t\" (UniqueName: \"kubernetes.io/projected/366ca468-074f-48ea-a339-8b033f899eb2-kube-api-access-97n4t\") on node \"crc\" DevicePath \"\"" Dec 09 13:29:52 crc kubenswrapper[4970]: I1209 13:29:52.780233 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/366ca468-074f-48ea-a339-8b033f899eb2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 13:29:53 crc kubenswrapper[4970]: I1209 13:29:53.525910 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j86xc" Dec 09 13:29:53 crc kubenswrapper[4970]: I1209 13:29:53.586861 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j86xc"] Dec 09 13:29:53 crc kubenswrapper[4970]: I1209 13:29:53.606000 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j86xc"] Dec 09 13:29:53 crc kubenswrapper[4970]: I1209 13:29:53.827998 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="366ca468-074f-48ea-a339-8b033f899eb2" path="/var/lib/kubelet/pods/366ca468-074f-48ea-a339-8b033f899eb2/volumes" Dec 09 13:29:54 crc kubenswrapper[4970]: E1209 13:29:54.818701 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:29:56 crc kubenswrapper[4970]: E1209 13:29:56.815582 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:30:00 crc kubenswrapper[4970]: I1209 13:30:00.170071 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421450-8r76g"] Dec 09 13:30:00 crc kubenswrapper[4970]: E1209 13:30:00.171501 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="366ca468-074f-48ea-a339-8b033f899eb2" containerName="extract-content" Dec 09 13:30:00 crc kubenswrapper[4970]: I1209 13:30:00.171525 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="366ca468-074f-48ea-a339-8b033f899eb2" containerName="extract-content" Dec 09 13:30:00 crc kubenswrapper[4970]: E1209 13:30:00.171584 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="366ca468-074f-48ea-a339-8b033f899eb2" containerName="extract-utilities" Dec 09 13:30:00 crc kubenswrapper[4970]: I1209 13:30:00.171596 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="366ca468-074f-48ea-a339-8b033f899eb2" containerName="extract-utilities" Dec 09 13:30:00 crc kubenswrapper[4970]: E1209 13:30:00.171635 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="366ca468-074f-48ea-a339-8b033f899eb2" containerName="registry-server" Dec 09 13:30:00 crc kubenswrapper[4970]: I1209 13:30:00.171647 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="366ca468-074f-48ea-a339-8b033f899eb2" containerName="registry-server" Dec 09 13:30:00 crc kubenswrapper[4970]: I1209 13:30:00.172029 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="366ca468-074f-48ea-a339-8b033f899eb2" containerName="registry-server" Dec 09 13:30:00 crc kubenswrapper[4970]: I1209 13:30:00.173233 4970 util.go:30] "No sandbox for pod can be found. 
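[Editor's note] The collect-profiles pod admitted above is a CronJob child, and the numeric part of collect-profiles-29421450-8r76g is the scheduled run time expressed in minutes since the Unix epoch, the naming convention the CronJob controller uses for its Jobs. That is why the SyncLoop ADD lands at exactly 13:30:00; a quick decode, assuming that convention:

    # Decoding the CronJob job-name suffix: minutes since the Unix epoch.
    from datetime import datetime, timezone

    suffix = 29421450   # from collect-profiles-29421450-8r76g
    scheduled = datetime.fromtimestamp(suffix * 60, tz=timezone.utc)
    print(scheduled)    # 2025-12-09 13:30:00+00:00, matching the SyncLoop ADD above

The same arithmetic dates the older job cleaned up below (suffix 29421405) to 45 minutes earlier, consistent with a 15-minute schedule and a short job history.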
Dec 09 13:30:00 crc kubenswrapper[4970]: I1209 13:30:00.173233 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421450-8r76g"
Dec 09 13:30:00 crc kubenswrapper[4970]: I1209 13:30:00.182386 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 09 13:30:00 crc kubenswrapper[4970]: I1209 13:30:00.182469 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 09 13:30:00 crc kubenswrapper[4970]: I1209 13:30:00.189097 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khsjt\" (UniqueName: \"kubernetes.io/projected/bd7438f9-155c-4521-8852-07babd109407-kube-api-access-khsjt\") pod \"collect-profiles-29421450-8r76g\" (UID: \"bd7438f9-155c-4521-8852-07babd109407\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421450-8r76g"
Dec 09 13:30:00 crc kubenswrapper[4970]: I1209 13:30:00.189127 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421450-8r76g"]
Dec 09 13:30:00 crc kubenswrapper[4970]: I1209 13:30:00.189210 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd7438f9-155c-4521-8852-07babd109407-config-volume\") pod \"collect-profiles-29421450-8r76g\" (UID: \"bd7438f9-155c-4521-8852-07babd109407\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421450-8r76g"
Dec 09 13:30:00 crc kubenswrapper[4970]: I1209 13:30:00.189265 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd7438f9-155c-4521-8852-07babd109407-secret-volume\") pod \"collect-profiles-29421450-8r76g\" (UID: \"bd7438f9-155c-4521-8852-07babd109407\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421450-8r76g"
Dec 09 13:30:00 crc kubenswrapper[4970]: I1209 13:30:00.292017 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khsjt\" (UniqueName: \"kubernetes.io/projected/bd7438f9-155c-4521-8852-07babd109407-kube-api-access-khsjt\") pod \"collect-profiles-29421450-8r76g\" (UID: \"bd7438f9-155c-4521-8852-07babd109407\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421450-8r76g"
Dec 09 13:30:00 crc kubenswrapper[4970]: I1209 13:30:00.292122 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd7438f9-155c-4521-8852-07babd109407-config-volume\") pod \"collect-profiles-29421450-8r76g\" (UID: \"bd7438f9-155c-4521-8852-07babd109407\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421450-8r76g"
Dec 09 13:30:00 crc kubenswrapper[4970]: I1209 13:30:00.292152 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd7438f9-155c-4521-8852-07babd109407-secret-volume\") pod \"collect-profiles-29421450-8r76g\" (UID: \"bd7438f9-155c-4521-8852-07babd109407\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421450-8r76g"
Dec 09 13:30:00 crc kubenswrapper[4970]: I1209 13:30:00.294729 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd7438f9-155c-4521-8852-07babd109407-config-volume\") pod \"collect-profiles-29421450-8r76g\" (UID: \"bd7438f9-155c-4521-8852-07babd109407\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421450-8r76g"
Dec 09 13:30:00 crc kubenswrapper[4970]: I1209 13:30:00.300504 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd7438f9-155c-4521-8852-07babd109407-secret-volume\") pod \"collect-profiles-29421450-8r76g\" (UID: \"bd7438f9-155c-4521-8852-07babd109407\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421450-8r76g"
Dec 09 13:30:00 crc kubenswrapper[4970]: I1209 13:30:00.313545 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khsjt\" (UniqueName: \"kubernetes.io/projected/bd7438f9-155c-4521-8852-07babd109407-kube-api-access-khsjt\") pod \"collect-profiles-29421450-8r76g\" (UID: \"bd7438f9-155c-4521-8852-07babd109407\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421450-8r76g"
Dec 09 13:30:00 crc kubenswrapper[4970]: I1209 13:30:00.519862 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421450-8r76g"
Dec 09 13:30:01 crc kubenswrapper[4970]: I1209 13:30:01.002759 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421450-8r76g"]
Dec 09 13:30:01 crc kubenswrapper[4970]: I1209 13:30:01.640348 4970 generic.go:334] "Generic (PLEG): container finished" podID="bd7438f9-155c-4521-8852-07babd109407" containerID="f1c8c7abc2d7bfbe0aad3a8e5f4c0c920f9d66bea511ae80479fa4a06e0602a1" exitCode=0
Dec 09 13:30:01 crc kubenswrapper[4970]: I1209 13:30:01.640605 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421450-8r76g" event={"ID":"bd7438f9-155c-4521-8852-07babd109407","Type":"ContainerDied","Data":"f1c8c7abc2d7bfbe0aad3a8e5f4c0c920f9d66bea511ae80479fa4a06e0602a1"}
Dec 09 13:30:01 crc kubenswrapper[4970]: I1209 13:30:01.640629 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421450-8r76g" event={"ID":"bd7438f9-155c-4521-8852-07babd109407","Type":"ContainerStarted","Data":"cf904c15f156f917618e8882f51e40ef8fb3a20c2ee0bbacf47b63894ea15eec"}
Dec 09 13:30:03 crc kubenswrapper[4970]: I1209 13:30:03.083684 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421450-8r76g"
Dec 09 13:30:03 crc kubenswrapper[4970]: I1209 13:30:03.174293 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd7438f9-155c-4521-8852-07babd109407-secret-volume\") pod \"bd7438f9-155c-4521-8852-07babd109407\" (UID: \"bd7438f9-155c-4521-8852-07babd109407\") "
Dec 09 13:30:03 crc kubenswrapper[4970]: I1209 13:30:03.174486 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd7438f9-155c-4521-8852-07babd109407-config-volume\") pod \"bd7438f9-155c-4521-8852-07babd109407\" (UID: \"bd7438f9-155c-4521-8852-07babd109407\") "
Dec 09 13:30:03 crc kubenswrapper[4970]: I1209 13:30:03.174711 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khsjt\" (UniqueName: \"kubernetes.io/projected/bd7438f9-155c-4521-8852-07babd109407-kube-api-access-khsjt\") pod \"bd7438f9-155c-4521-8852-07babd109407\" (UID: \"bd7438f9-155c-4521-8852-07babd109407\") "
Dec 09 13:30:03 crc kubenswrapper[4970]: I1209 13:30:03.175384 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd7438f9-155c-4521-8852-07babd109407-config-volume" (OuterVolumeSpecName: "config-volume") pod "bd7438f9-155c-4521-8852-07babd109407" (UID: "bd7438f9-155c-4521-8852-07babd109407"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 13:30:03 crc kubenswrapper[4970]: I1209 13:30:03.175686 4970 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd7438f9-155c-4521-8852-07babd109407-config-volume\") on node \"crc\" DevicePath \"\""
Dec 09 13:30:03 crc kubenswrapper[4970]: I1209 13:30:03.184614 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd7438f9-155c-4521-8852-07babd109407-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bd7438f9-155c-4521-8852-07babd109407" (UID: "bd7438f9-155c-4521-8852-07babd109407"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 13:30:03 crc kubenswrapper[4970]: I1209 13:30:03.186169 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd7438f9-155c-4521-8852-07babd109407-kube-api-access-khsjt" (OuterVolumeSpecName: "kube-api-access-khsjt") pod "bd7438f9-155c-4521-8852-07babd109407" (UID: "bd7438f9-155c-4521-8852-07babd109407"). InnerVolumeSpecName "kube-api-access-khsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 13:30:03 crc kubenswrapper[4970]: I1209 13:30:03.284463 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khsjt\" (UniqueName: \"kubernetes.io/projected/bd7438f9-155c-4521-8852-07babd109407-kube-api-access-khsjt\") on node \"crc\" DevicePath \"\""
Dec 09 13:30:03 crc kubenswrapper[4970]: I1209 13:30:03.284499 4970 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bd7438f9-155c-4521-8852-07babd109407-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 09 13:30:03 crc kubenswrapper[4970]: I1209 13:30:03.669509 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421450-8r76g" event={"ID":"bd7438f9-155c-4521-8852-07babd109407","Type":"ContainerDied","Data":"cf904c15f156f917618e8882f51e40ef8fb3a20c2ee0bbacf47b63894ea15eec"}
Dec 09 13:30:03 crc kubenswrapper[4970]: I1209 13:30:03.669547 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421450-8r76g"
Dec 09 13:30:03 crc kubenswrapper[4970]: I1209 13:30:03.669558 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf904c15f156f917618e8882f51e40ef8fb3a20c2ee0bbacf47b63894ea15eec"
Dec 09 13:30:04 crc kubenswrapper[4970]: I1209 13:30:04.171379 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421405-t66z4"]
Dec 09 13:30:04 crc kubenswrapper[4970]: I1209 13:30:04.181017 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421405-t66z4"]
Dec 09 13:30:05 crc kubenswrapper[4970]: I1209 13:30:05.824777 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d22e9d18-c6b7-4083-8870-0aec294dc268" path="/var/lib/kubelet/pods/d22e9d18-c6b7-4083-8870-0aec294dc268/volumes"
Dec 09 13:30:08 crc kubenswrapper[4970]: E1209 13:30:08.816182 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3"
Dec 09 13:30:11 crc kubenswrapper[4970]: E1209 13:30:11.815616 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8"
Dec 09 13:30:16 crc kubenswrapper[4970]: I1209 13:30:16.011153 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
connection refused" Dec 09 13:30:16 crc kubenswrapper[4970]: I1209 13:30:16.013029 4970 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" Dec 09 13:30:16 crc kubenswrapper[4970]: I1209 13:30:16.014319 4970 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1c4bed9ff93fb37d4e1875bb5924dcc9902f12c57dd22616a64ab19553b2dcbd"} pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 13:30:16 crc kubenswrapper[4970]: I1209 13:30:16.014510 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" containerID="cri-o://1c4bed9ff93fb37d4e1875bb5924dcc9902f12c57dd22616a64ab19553b2dcbd" gracePeriod=600 Dec 09 13:30:16 crc kubenswrapper[4970]: E1209 13:30:16.666577 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:30:16 crc kubenswrapper[4970]: I1209 13:30:16.832063 4970 generic.go:334] "Generic (PLEG): container finished" podID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerID="1c4bed9ff93fb37d4e1875bb5924dcc9902f12c57dd22616a64ab19553b2dcbd" exitCode=0 Dec 09 13:30:16 crc kubenswrapper[4970]: I1209 13:30:16.832124 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerDied","Data":"1c4bed9ff93fb37d4e1875bb5924dcc9902f12c57dd22616a64ab19553b2dcbd"} Dec 09 13:30:16 crc kubenswrapper[4970]: I1209 13:30:16.832178 4970 scope.go:117] "RemoveContainer" containerID="1881639a78781bcbfa52ff00bf5db8c4a9ec0f40874f26ed2176e2197d3b24e2" Dec 09 13:30:16 crc kubenswrapper[4970]: I1209 13:30:16.835382 4970 scope.go:117] "RemoveContainer" containerID="1c4bed9ff93fb37d4e1875bb5924dcc9902f12c57dd22616a64ab19553b2dcbd" Dec 09 13:30:16 crc kubenswrapper[4970]: E1209 13:30:16.836476 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:30:21 crc kubenswrapper[4970]: E1209 13:30:21.822380 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:30:26 crc kubenswrapper[4970]: E1209 13:30:26.816064 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:30:29 crc kubenswrapper[4970]: I1209 13:30:29.812600 4970 scope.go:117] "RemoveContainer" containerID="1c4bed9ff93fb37d4e1875bb5924dcc9902f12c57dd22616a64ab19553b2dcbd" Dec 09 13:30:29 crc kubenswrapper[4970]: E1209 13:30:29.813432 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:30:35 crc kubenswrapper[4970]: E1209 13:30:35.816452 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:30:41 crc kubenswrapper[4970]: I1209 13:30:41.813708 4970 scope.go:117] "RemoveContainer" containerID="1c4bed9ff93fb37d4e1875bb5924dcc9902f12c57dd22616a64ab19553b2dcbd" Dec 09 13:30:41 crc kubenswrapper[4970]: E1209 13:30:41.814874 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:30:41 crc kubenswrapper[4970]: E1209 13:30:41.820520 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:30:43 crc kubenswrapper[4970]: I1209 13:30:43.319850 4970 scope.go:117] "RemoveContainer" containerID="fe28c1cc2b7fc561a6d1b1d2650c6012e27cbc15b8c0cbdc8d1b783bba10ea06" Dec 09 13:30:47 crc kubenswrapper[4970]: E1209 13:30:47.829944 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:30:54 crc kubenswrapper[4970]: E1209 13:30:54.822477 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:30:55 crc kubenswrapper[4970]: I1209 13:30:55.815227 4970 scope.go:117] "RemoveContainer" 
containerID="1c4bed9ff93fb37d4e1875bb5924dcc9902f12c57dd22616a64ab19553b2dcbd" Dec 09 13:30:55 crc kubenswrapper[4970]: E1209 13:30:55.816504 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:31:00 crc kubenswrapper[4970]: E1209 13:31:00.816371 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:31:00 crc kubenswrapper[4970]: I1209 13:31:00.981221 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tbnf9"] Dec 09 13:31:00 crc kubenswrapper[4970]: E1209 13:31:00.981792 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7438f9-155c-4521-8852-07babd109407" containerName="collect-profiles" Dec 09 13:31:00 crc kubenswrapper[4970]: I1209 13:31:00.981812 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7438f9-155c-4521-8852-07babd109407" containerName="collect-profiles" Dec 09 13:31:00 crc kubenswrapper[4970]: I1209 13:31:00.982079 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd7438f9-155c-4521-8852-07babd109407" containerName="collect-profiles" Dec 09 13:31:00 crc kubenswrapper[4970]: I1209 13:31:00.983797 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tbnf9" Dec 09 13:31:01 crc kubenswrapper[4970]: I1209 13:31:01.004180 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tbnf9"] Dec 09 13:31:01 crc kubenswrapper[4970]: I1209 13:31:01.130641 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ef5741d-52b7-47f3-987c-d452c82231a9-catalog-content\") pod \"redhat-marketplace-tbnf9\" (UID: \"8ef5741d-52b7-47f3-987c-d452c82231a9\") " pod="openshift-marketplace/redhat-marketplace-tbnf9" Dec 09 13:31:01 crc kubenswrapper[4970]: I1209 13:31:01.131200 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw9s8\" (UniqueName: \"kubernetes.io/projected/8ef5741d-52b7-47f3-987c-d452c82231a9-kube-api-access-vw9s8\") pod \"redhat-marketplace-tbnf9\" (UID: \"8ef5741d-52b7-47f3-987c-d452c82231a9\") " pod="openshift-marketplace/redhat-marketplace-tbnf9" Dec 09 13:31:01 crc kubenswrapper[4970]: I1209 13:31:01.131612 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ef5741d-52b7-47f3-987c-d452c82231a9-utilities\") pod \"redhat-marketplace-tbnf9\" (UID: \"8ef5741d-52b7-47f3-987c-d452c82231a9\") " pod="openshift-marketplace/redhat-marketplace-tbnf9" Dec 09 13:31:01 crc kubenswrapper[4970]: I1209 13:31:01.234053 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ef5741d-52b7-47f3-987c-d452c82231a9-utilities\") pod \"redhat-marketplace-tbnf9\" (UID: \"8ef5741d-52b7-47f3-987c-d452c82231a9\") " pod="openshift-marketplace/redhat-marketplace-tbnf9" Dec 09 13:31:01 crc kubenswrapper[4970]: I1209 13:31:01.234214 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ef5741d-52b7-47f3-987c-d452c82231a9-catalog-content\") pod \"redhat-marketplace-tbnf9\" (UID: \"8ef5741d-52b7-47f3-987c-d452c82231a9\") " pod="openshift-marketplace/redhat-marketplace-tbnf9" Dec 09 13:31:01 crc kubenswrapper[4970]: I1209 13:31:01.234413 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw9s8\" (UniqueName: \"kubernetes.io/projected/8ef5741d-52b7-47f3-987c-d452c82231a9-kube-api-access-vw9s8\") pod \"redhat-marketplace-tbnf9\" (UID: \"8ef5741d-52b7-47f3-987c-d452c82231a9\") " pod="openshift-marketplace/redhat-marketplace-tbnf9" Dec 09 13:31:01 crc kubenswrapper[4970]: I1209 13:31:01.234764 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ef5741d-52b7-47f3-987c-d452c82231a9-utilities\") pod \"redhat-marketplace-tbnf9\" (UID: \"8ef5741d-52b7-47f3-987c-d452c82231a9\") " pod="openshift-marketplace/redhat-marketplace-tbnf9" Dec 09 13:31:01 crc kubenswrapper[4970]: I1209 13:31:01.235161 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ef5741d-52b7-47f3-987c-d452c82231a9-catalog-content\") pod \"redhat-marketplace-tbnf9\" (UID: \"8ef5741d-52b7-47f3-987c-d452c82231a9\") " pod="openshift-marketplace/redhat-marketplace-tbnf9" Dec 09 13:31:01 crc kubenswrapper[4970]: I1209 13:31:01.258470 4970 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-vw9s8\" (UniqueName: \"kubernetes.io/projected/8ef5741d-52b7-47f3-987c-d452c82231a9-kube-api-access-vw9s8\") pod \"redhat-marketplace-tbnf9\" (UID: \"8ef5741d-52b7-47f3-987c-d452c82231a9\") " pod="openshift-marketplace/redhat-marketplace-tbnf9" Dec 09 13:31:01 crc kubenswrapper[4970]: I1209 13:31:01.316714 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tbnf9" Dec 09 13:31:01 crc kubenswrapper[4970]: I1209 13:31:01.869104 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tbnf9"] Dec 09 13:31:02 crc kubenswrapper[4970]: I1209 13:31:02.342578 4970 generic.go:334] "Generic (PLEG): container finished" podID="8ef5741d-52b7-47f3-987c-d452c82231a9" containerID="ac37016263c1ecbe6fb2ff95c847c93ace63af485e68c7bfd541b0eca197ec84" exitCode=0 Dec 09 13:31:02 crc kubenswrapper[4970]: I1209 13:31:02.342623 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tbnf9" event={"ID":"8ef5741d-52b7-47f3-987c-d452c82231a9","Type":"ContainerDied","Data":"ac37016263c1ecbe6fb2ff95c847c93ace63af485e68c7bfd541b0eca197ec84"} Dec 09 13:31:02 crc kubenswrapper[4970]: I1209 13:31:02.342820 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tbnf9" event={"ID":"8ef5741d-52b7-47f3-987c-d452c82231a9","Type":"ContainerStarted","Data":"e09c1a4f96d31987968572b13a9f7da43e37618d27abbff35c6c2dc2844fc454"} Dec 09 13:31:03 crc kubenswrapper[4970]: I1209 13:31:03.352552 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tbnf9" event={"ID":"8ef5741d-52b7-47f3-987c-d452c82231a9","Type":"ContainerStarted","Data":"9422901bce67c60db655225526a5c8ad4fff9700f3fcd220dd5bfd8bffa3c7e5"} Dec 09 13:31:04 crc kubenswrapper[4970]: I1209 13:31:04.366621 4970 generic.go:334] "Generic (PLEG): container finished" podID="8ef5741d-52b7-47f3-987c-d452c82231a9" containerID="9422901bce67c60db655225526a5c8ad4fff9700f3fcd220dd5bfd8bffa3c7e5" exitCode=0 Dec 09 13:31:04 crc kubenswrapper[4970]: I1209 13:31:04.366724 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tbnf9" event={"ID":"8ef5741d-52b7-47f3-987c-d452c82231a9","Type":"ContainerDied","Data":"9422901bce67c60db655225526a5c8ad4fff9700f3fcd220dd5bfd8bffa3c7e5"} Dec 09 13:31:05 crc kubenswrapper[4970]: I1209 13:31:05.381179 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tbnf9" event={"ID":"8ef5741d-52b7-47f3-987c-d452c82231a9","Type":"ContainerStarted","Data":"3ac82fa2b7fcec28c81a89acf1fd91fcc92ec2a23743577bf63509e30e54fbee"} Dec 09 13:31:05 crc kubenswrapper[4970]: I1209 13:31:05.404682 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tbnf9" podStartSLOduration=2.810562246 podStartE2EDuration="5.404659846s" podCreationTimestamp="2025-12-09 13:31:00 +0000 UTC" firstStartedPulling="2025-12-09 13:31:02.346681591 +0000 UTC m=+5074.907162642" lastFinishedPulling="2025-12-09 13:31:04.940779191 +0000 UTC m=+5077.501260242" observedRunningTime="2025-12-09 13:31:05.403000421 +0000 UTC m=+5077.963481472" watchObservedRunningTime="2025-12-09 13:31:05.404659846 +0000 UTC m=+5077.965140897" Dec 09 13:31:09 crc kubenswrapper[4970]: E1209 13:31:09.816574 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
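[Editor's note] The machine-config-daemon entries interleaved through this stretch all quote the same restart delay, "back-off 5m0s", which is the crash-loop backoff at its cap: the kubelet doubles the delay after each failed restart until it saturates at five minutes. The 5m0s cap is read straight from these log lines; the 10s starting point is kubelet's commonly documented default and is stated here as an assumption. An illustrative sketch of the schedule:

    # Illustrative crash-loop restart schedule: exponential backoff that
    # doubles per failed restart and saturates at the 5m0s cap quoted above.
    # base=10.0 is an assumed default, not a value read from this log.
    def crashloop_delays(restarts: int, base: float = 10.0, cap: float = 300.0):
        delay = base
        for _ in range(restarts):
            yield min(delay, cap)
            delay *= 2

    print(list(crashloop_delays(7)))  # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0]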
\"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:31:10 crc kubenswrapper[4970]: I1209 13:31:10.814001 4970 scope.go:117] "RemoveContainer" containerID="1c4bed9ff93fb37d4e1875bb5924dcc9902f12c57dd22616a64ab19553b2dcbd" Dec 09 13:31:10 crc kubenswrapper[4970]: E1209 13:31:10.814880 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:31:11 crc kubenswrapper[4970]: I1209 13:31:11.317443 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tbnf9" Dec 09 13:31:11 crc kubenswrapper[4970]: I1209 13:31:11.317833 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tbnf9" Dec 09 13:31:11 crc kubenswrapper[4970]: I1209 13:31:11.430028 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tbnf9" Dec 09 13:31:11 crc kubenswrapper[4970]: I1209 13:31:11.548677 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tbnf9" Dec 09 13:31:11 crc kubenswrapper[4970]: I1209 13:31:11.685573 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tbnf9"] Dec 09 13:31:13 crc kubenswrapper[4970]: I1209 13:31:13.467861 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tbnf9" podUID="8ef5741d-52b7-47f3-987c-d452c82231a9" containerName="registry-server" containerID="cri-o://3ac82fa2b7fcec28c81a89acf1fd91fcc92ec2a23743577bf63509e30e54fbee" gracePeriod=2 Dec 09 13:31:14 crc kubenswrapper[4970]: I1209 13:31:14.133069 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tbnf9" Dec 09 13:31:14 crc kubenswrapper[4970]: I1209 13:31:14.301132 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ef5741d-52b7-47f3-987c-d452c82231a9-utilities\") pod \"8ef5741d-52b7-47f3-987c-d452c82231a9\" (UID: \"8ef5741d-52b7-47f3-987c-d452c82231a9\") " Dec 09 13:31:14 crc kubenswrapper[4970]: I1209 13:31:14.301502 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vw9s8\" (UniqueName: \"kubernetes.io/projected/8ef5741d-52b7-47f3-987c-d452c82231a9-kube-api-access-vw9s8\") pod \"8ef5741d-52b7-47f3-987c-d452c82231a9\" (UID: \"8ef5741d-52b7-47f3-987c-d452c82231a9\") " Dec 09 13:31:14 crc kubenswrapper[4970]: I1209 13:31:14.301580 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ef5741d-52b7-47f3-987c-d452c82231a9-catalog-content\") pod \"8ef5741d-52b7-47f3-987c-d452c82231a9\" (UID: \"8ef5741d-52b7-47f3-987c-d452c82231a9\") " Dec 09 13:31:14 crc kubenswrapper[4970]: I1209 13:31:14.303200 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ef5741d-52b7-47f3-987c-d452c82231a9-utilities" (OuterVolumeSpecName: "utilities") pod "8ef5741d-52b7-47f3-987c-d452c82231a9" (UID: "8ef5741d-52b7-47f3-987c-d452c82231a9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:31:14 crc kubenswrapper[4970]: I1209 13:31:14.310383 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ef5741d-52b7-47f3-987c-d452c82231a9-kube-api-access-vw9s8" (OuterVolumeSpecName: "kube-api-access-vw9s8") pod "8ef5741d-52b7-47f3-987c-d452c82231a9" (UID: "8ef5741d-52b7-47f3-987c-d452c82231a9"). InnerVolumeSpecName "kube-api-access-vw9s8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:31:14 crc kubenswrapper[4970]: I1209 13:31:14.340584 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ef5741d-52b7-47f3-987c-d452c82231a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ef5741d-52b7-47f3-987c-d452c82231a9" (UID: "8ef5741d-52b7-47f3-987c-d452c82231a9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:31:14 crc kubenswrapper[4970]: I1209 13:31:14.405815 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vw9s8\" (UniqueName: \"kubernetes.io/projected/8ef5741d-52b7-47f3-987c-d452c82231a9-kube-api-access-vw9s8\") on node \"crc\" DevicePath \"\"" Dec 09 13:31:14 crc kubenswrapper[4970]: I1209 13:31:14.405888 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ef5741d-52b7-47f3-987c-d452c82231a9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 13:31:14 crc kubenswrapper[4970]: I1209 13:31:14.405915 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ef5741d-52b7-47f3-987c-d452c82231a9-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 13:31:14 crc kubenswrapper[4970]: I1209 13:31:14.482640 4970 generic.go:334] "Generic (PLEG): container finished" podID="8ef5741d-52b7-47f3-987c-d452c82231a9" containerID="3ac82fa2b7fcec28c81a89acf1fd91fcc92ec2a23743577bf63509e30e54fbee" exitCode=0 Dec 09 13:31:14 crc kubenswrapper[4970]: I1209 13:31:14.482707 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tbnf9" Dec 09 13:31:14 crc kubenswrapper[4970]: I1209 13:31:14.482706 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tbnf9" event={"ID":"8ef5741d-52b7-47f3-987c-d452c82231a9","Type":"ContainerDied","Data":"3ac82fa2b7fcec28c81a89acf1fd91fcc92ec2a23743577bf63509e30e54fbee"} Dec 09 13:31:14 crc kubenswrapper[4970]: I1209 13:31:14.482967 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tbnf9" event={"ID":"8ef5741d-52b7-47f3-987c-d452c82231a9","Type":"ContainerDied","Data":"e09c1a4f96d31987968572b13a9f7da43e37618d27abbff35c6c2dc2844fc454"} Dec 09 13:31:14 crc kubenswrapper[4970]: I1209 13:31:14.483011 4970 scope.go:117] "RemoveContainer" containerID="3ac82fa2b7fcec28c81a89acf1fd91fcc92ec2a23743577bf63509e30e54fbee" Dec 09 13:31:14 crc kubenswrapper[4970]: I1209 13:31:14.509480 4970 scope.go:117] "RemoveContainer" containerID="9422901bce67c60db655225526a5c8ad4fff9700f3fcd220dd5bfd8bffa3c7e5" Dec 09 13:31:14 crc kubenswrapper[4970]: I1209 13:31:14.546735 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tbnf9"] Dec 09 13:31:14 crc kubenswrapper[4970]: I1209 13:31:14.556924 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tbnf9"] Dec 09 13:31:14 crc kubenswrapper[4970]: I1209 13:31:14.564378 4970 scope.go:117] "RemoveContainer" containerID="ac37016263c1ecbe6fb2ff95c847c93ace63af485e68c7bfd541b0eca197ec84" Dec 09 13:31:14 crc kubenswrapper[4970]: I1209 13:31:14.621342 4970 scope.go:117] "RemoveContainer" containerID="3ac82fa2b7fcec28c81a89acf1fd91fcc92ec2a23743577bf63509e30e54fbee" Dec 09 13:31:14 crc kubenswrapper[4970]: E1209 13:31:14.621794 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ac82fa2b7fcec28c81a89acf1fd91fcc92ec2a23743577bf63509e30e54fbee\": container with ID starting with 3ac82fa2b7fcec28c81a89acf1fd91fcc92ec2a23743577bf63509e30e54fbee not found: ID does not exist" containerID="3ac82fa2b7fcec28c81a89acf1fd91fcc92ec2a23743577bf63509e30e54fbee" Dec 09 13:31:14 crc kubenswrapper[4970]: I1209 13:31:14.621836 4970 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ac82fa2b7fcec28c81a89acf1fd91fcc92ec2a23743577bf63509e30e54fbee"} err="failed to get container status \"3ac82fa2b7fcec28c81a89acf1fd91fcc92ec2a23743577bf63509e30e54fbee\": rpc error: code = NotFound desc = could not find container \"3ac82fa2b7fcec28c81a89acf1fd91fcc92ec2a23743577bf63509e30e54fbee\": container with ID starting with 3ac82fa2b7fcec28c81a89acf1fd91fcc92ec2a23743577bf63509e30e54fbee not found: ID does not exist" Dec 09 13:31:14 crc kubenswrapper[4970]: I1209 13:31:14.621863 4970 scope.go:117] "RemoveContainer" containerID="9422901bce67c60db655225526a5c8ad4fff9700f3fcd220dd5bfd8bffa3c7e5" Dec 09 13:31:14 crc kubenswrapper[4970]: E1209 13:31:14.622203 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9422901bce67c60db655225526a5c8ad4fff9700f3fcd220dd5bfd8bffa3c7e5\": container with ID starting with 9422901bce67c60db655225526a5c8ad4fff9700f3fcd220dd5bfd8bffa3c7e5 not found: ID does not exist" containerID="9422901bce67c60db655225526a5c8ad4fff9700f3fcd220dd5bfd8bffa3c7e5" Dec 09 13:31:14 crc kubenswrapper[4970]: I1209 13:31:14.622240 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9422901bce67c60db655225526a5c8ad4fff9700f3fcd220dd5bfd8bffa3c7e5"} err="failed to get container status \"9422901bce67c60db655225526a5c8ad4fff9700f3fcd220dd5bfd8bffa3c7e5\": rpc error: code = NotFound desc = could not find container \"9422901bce67c60db655225526a5c8ad4fff9700f3fcd220dd5bfd8bffa3c7e5\": container with ID starting with 9422901bce67c60db655225526a5c8ad4fff9700f3fcd220dd5bfd8bffa3c7e5 not found: ID does not exist" Dec 09 13:31:14 crc kubenswrapper[4970]: I1209 13:31:14.622281 4970 scope.go:117] "RemoveContainer" containerID="ac37016263c1ecbe6fb2ff95c847c93ace63af485e68c7bfd541b0eca197ec84" Dec 09 13:31:14 crc kubenswrapper[4970]: E1209 13:31:14.622617 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac37016263c1ecbe6fb2ff95c847c93ace63af485e68c7bfd541b0eca197ec84\": container with ID starting with ac37016263c1ecbe6fb2ff95c847c93ace63af485e68c7bfd541b0eca197ec84 not found: ID does not exist" containerID="ac37016263c1ecbe6fb2ff95c847c93ace63af485e68c7bfd541b0eca197ec84" Dec 09 13:31:14 crc kubenswrapper[4970]: I1209 13:31:14.622642 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac37016263c1ecbe6fb2ff95c847c93ace63af485e68c7bfd541b0eca197ec84"} err="failed to get container status \"ac37016263c1ecbe6fb2ff95c847c93ace63af485e68c7bfd541b0eca197ec84\": rpc error: code = NotFound desc = could not find container \"ac37016263c1ecbe6fb2ff95c847c93ace63af485e68c7bfd541b0eca197ec84\": container with ID starting with ac37016263c1ecbe6fb2ff95c847c93ace63af485e68c7bfd541b0eca197ec84 not found: ID does not exist" Dec 09 13:31:14 crc kubenswrapper[4970]: E1209 13:31:14.813709 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:31:15 crc kubenswrapper[4970]: I1209 13:31:15.833371 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="8ef5741d-52b7-47f3-987c-d452c82231a9" path="/var/lib/kubelet/pods/8ef5741d-52b7-47f3-987c-d452c82231a9/volumes" Dec 09 13:31:22 crc kubenswrapper[4970]: E1209 13:31:22.816425 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:31:24 crc kubenswrapper[4970]: I1209 13:31:24.813406 4970 scope.go:117] "RemoveContainer" containerID="1c4bed9ff93fb37d4e1875bb5924dcc9902f12c57dd22616a64ab19553b2dcbd" Dec 09 13:31:24 crc kubenswrapper[4970]: E1209 13:31:24.814077 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:31:26 crc kubenswrapper[4970]: E1209 13:31:26.816735 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:31:36 crc kubenswrapper[4970]: I1209 13:31:36.812916 4970 scope.go:117] "RemoveContainer" containerID="1c4bed9ff93fb37d4e1875bb5924dcc9902f12c57dd22616a64ab19553b2dcbd" Dec 09 13:31:36 crc kubenswrapper[4970]: E1209 13:31:36.813786 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:31:36 crc kubenswrapper[4970]: E1209 13:31:36.818651 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:31:38 crc kubenswrapper[4970]: E1209 13:31:38.816941 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:31:48 crc kubenswrapper[4970]: I1209 13:31:48.813108 4970 scope.go:117] "RemoveContainer" containerID="1c4bed9ff93fb37d4e1875bb5924dcc9902f12c57dd22616a64ab19553b2dcbd" Dec 09 13:31:48 crc kubenswrapper[4970]: E1209 13:31:48.814384 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:31:51 crc kubenswrapper[4970]: E1209 13:31:51.816834 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:31:52 crc kubenswrapper[4970]: E1209 13:31:52.815787 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:31:59 crc kubenswrapper[4970]: I1209 13:31:59.813279 4970 scope.go:117] "RemoveContainer" containerID="1c4bed9ff93fb37d4e1875bb5924dcc9902f12c57dd22616a64ab19553b2dcbd" Dec 09 13:31:59 crc kubenswrapper[4970]: E1209 13:31:59.814595 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:32:05 crc kubenswrapper[4970]: E1209 13:32:05.816744 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:32:06 crc kubenswrapper[4970]: E1209 13:32:06.814995 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:32:10 crc kubenswrapper[4970]: I1209 13:32:10.813330 4970 scope.go:117] "RemoveContainer" containerID="1c4bed9ff93fb37d4e1875bb5924dcc9902f12c57dd22616a64ab19553b2dcbd" Dec 09 13:32:10 crc kubenswrapper[4970]: E1209 13:32:10.814175 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:32:16 crc kubenswrapper[4970]: E1209 13:32:16.817875 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" 
podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:32:19 crc kubenswrapper[4970]: E1209 13:32:19.817132 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:32:22 crc kubenswrapper[4970]: I1209 13:32:22.813119 4970 scope.go:117] "RemoveContainer" containerID="1c4bed9ff93fb37d4e1875bb5924dcc9902f12c57dd22616a64ab19553b2dcbd" Dec 09 13:32:22 crc kubenswrapper[4970]: E1209 13:32:22.814048 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:32:28 crc kubenswrapper[4970]: E1209 13:32:28.815669 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:32:34 crc kubenswrapper[4970]: E1209 13:32:34.818181 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:32:36 crc kubenswrapper[4970]: I1209 13:32:36.814014 4970 scope.go:117] "RemoveContainer" containerID="1c4bed9ff93fb37d4e1875bb5924dcc9902f12c57dd22616a64ab19553b2dcbd" Dec 09 13:32:36 crc kubenswrapper[4970]: E1209 13:32:36.815022 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:32:43 crc kubenswrapper[4970]: E1209 13:32:43.818505 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:32:47 crc kubenswrapper[4970]: E1209 13:32:47.825024 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:32:48 crc kubenswrapper[4970]: I1209 13:32:48.812998 4970 scope.go:117] "RemoveContainer" 
containerID="1c4bed9ff93fb37d4e1875bb5924dcc9902f12c57dd22616a64ab19553b2dcbd" Dec 09 13:32:48 crc kubenswrapper[4970]: E1209 13:32:48.813510 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:32:56 crc kubenswrapper[4970]: E1209 13:32:56.817885 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:32:59 crc kubenswrapper[4970]: I1209 13:32:59.812741 4970 scope.go:117] "RemoveContainer" containerID="1c4bed9ff93fb37d4e1875bb5924dcc9902f12c57dd22616a64ab19553b2dcbd" Dec 09 13:32:59 crc kubenswrapper[4970]: E1209 13:32:59.813550 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:32:59 crc kubenswrapper[4970]: E1209 13:32:59.817417 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:33:11 crc kubenswrapper[4970]: I1209 13:33:11.814090 4970 scope.go:117] "RemoveContainer" containerID="1c4bed9ff93fb37d4e1875bb5924dcc9902f12c57dd22616a64ab19553b2dcbd" Dec 09 13:33:11 crc kubenswrapper[4970]: E1209 13:33:11.816763 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:33:11 crc kubenswrapper[4970]: E1209 13:33:11.819156 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:33:13 crc kubenswrapper[4970]: E1209 13:33:13.814427 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:33:18 crc 
kubenswrapper[4970]: I1209 13:33:18.218588 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cfjzl/must-gather-mts4w"] Dec 09 13:33:18 crc kubenswrapper[4970]: E1209 13:33:18.219468 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ef5741d-52b7-47f3-987c-d452c82231a9" containerName="registry-server" Dec 09 13:33:18 crc kubenswrapper[4970]: I1209 13:33:18.219480 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ef5741d-52b7-47f3-987c-d452c82231a9" containerName="registry-server" Dec 09 13:33:18 crc kubenswrapper[4970]: E1209 13:33:18.219499 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ef5741d-52b7-47f3-987c-d452c82231a9" containerName="extract-content" Dec 09 13:33:18 crc kubenswrapper[4970]: I1209 13:33:18.219505 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ef5741d-52b7-47f3-987c-d452c82231a9" containerName="extract-content" Dec 09 13:33:18 crc kubenswrapper[4970]: E1209 13:33:18.219518 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ef5741d-52b7-47f3-987c-d452c82231a9" containerName="extract-utilities" Dec 09 13:33:18 crc kubenswrapper[4970]: I1209 13:33:18.219524 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ef5741d-52b7-47f3-987c-d452c82231a9" containerName="extract-utilities" Dec 09 13:33:18 crc kubenswrapper[4970]: I1209 13:33:18.219736 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ef5741d-52b7-47f3-987c-d452c82231a9" containerName="registry-server" Dec 09 13:33:18 crc kubenswrapper[4970]: I1209 13:33:18.220919 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cfjzl/must-gather-mts4w" Dec 09 13:33:18 crc kubenswrapper[4970]: I1209 13:33:18.224084 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-cfjzl"/"default-dockercfg-rjn54" Dec 09 13:33:18 crc kubenswrapper[4970]: I1209 13:33:18.224703 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-cfjzl"/"openshift-service-ca.crt" Dec 09 13:33:18 crc kubenswrapper[4970]: I1209 13:33:18.226692 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-cfjzl"/"kube-root-ca.crt" Dec 09 13:33:18 crc kubenswrapper[4970]: I1209 13:33:18.240616 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-cfjzl/must-gather-mts4w"] Dec 09 13:33:18 crc kubenswrapper[4970]: I1209 13:33:18.390115 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cb7n\" (UniqueName: \"kubernetes.io/projected/10b3359c-b6fa-40ba-bdbf-e374edd3a96a-kube-api-access-7cb7n\") pod \"must-gather-mts4w\" (UID: \"10b3359c-b6fa-40ba-bdbf-e374edd3a96a\") " pod="openshift-must-gather-cfjzl/must-gather-mts4w" Dec 09 13:33:18 crc kubenswrapper[4970]: I1209 13:33:18.390207 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/10b3359c-b6fa-40ba-bdbf-e374edd3a96a-must-gather-output\") pod \"must-gather-mts4w\" (UID: \"10b3359c-b6fa-40ba-bdbf-e374edd3a96a\") " pod="openshift-must-gather-cfjzl/must-gather-mts4w" Dec 09 13:33:18 crc kubenswrapper[4970]: I1209 13:33:18.493198 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cb7n\" (UniqueName: 
\"kubernetes.io/projected/10b3359c-b6fa-40ba-bdbf-e374edd3a96a-kube-api-access-7cb7n\") pod \"must-gather-mts4w\" (UID: \"10b3359c-b6fa-40ba-bdbf-e374edd3a96a\") " pod="openshift-must-gather-cfjzl/must-gather-mts4w" Dec 09 13:33:18 crc kubenswrapper[4970]: I1209 13:33:18.493313 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/10b3359c-b6fa-40ba-bdbf-e374edd3a96a-must-gather-output\") pod \"must-gather-mts4w\" (UID: \"10b3359c-b6fa-40ba-bdbf-e374edd3a96a\") " pod="openshift-must-gather-cfjzl/must-gather-mts4w" Dec 09 13:33:18 crc kubenswrapper[4970]: I1209 13:33:18.493813 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/10b3359c-b6fa-40ba-bdbf-e374edd3a96a-must-gather-output\") pod \"must-gather-mts4w\" (UID: \"10b3359c-b6fa-40ba-bdbf-e374edd3a96a\") " pod="openshift-must-gather-cfjzl/must-gather-mts4w" Dec 09 13:33:18 crc kubenswrapper[4970]: I1209 13:33:18.511930 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cb7n\" (UniqueName: \"kubernetes.io/projected/10b3359c-b6fa-40ba-bdbf-e374edd3a96a-kube-api-access-7cb7n\") pod \"must-gather-mts4w\" (UID: \"10b3359c-b6fa-40ba-bdbf-e374edd3a96a\") " pod="openshift-must-gather-cfjzl/must-gather-mts4w" Dec 09 13:33:18 crc kubenswrapper[4970]: I1209 13:33:18.539237 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cfjzl/must-gather-mts4w" Dec 09 13:33:19 crc kubenswrapper[4970]: I1209 13:33:19.034114 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-cfjzl/must-gather-mts4w"] Dec 09 13:33:19 crc kubenswrapper[4970]: I1209 13:33:19.139307 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cfjzl/must-gather-mts4w" event={"ID":"10b3359c-b6fa-40ba-bdbf-e374edd3a96a","Type":"ContainerStarted","Data":"a366df8e9f9589cc31e9b8b931d2de7ca6b600ec06d25dbdc5dba3bc04f2a208"} Dec 09 13:33:26 crc kubenswrapper[4970]: I1209 13:33:26.814407 4970 scope.go:117] "RemoveContainer" containerID="1c4bed9ff93fb37d4e1875bb5924dcc9902f12c57dd22616a64ab19553b2dcbd" Dec 09 13:33:26 crc kubenswrapper[4970]: E1209 13:33:26.815230 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:33:26 crc kubenswrapper[4970]: E1209 13:33:26.817211 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:33:27 crc kubenswrapper[4970]: E1209 13:33:27.851783 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 
13:33:28 crc kubenswrapper[4970]: I1209 13:33:28.239718 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cfjzl/must-gather-mts4w" event={"ID":"10b3359c-b6fa-40ba-bdbf-e374edd3a96a","Type":"ContainerStarted","Data":"b19bd12dd370f6889331f7d56bd7698339c463496b7802a061314e42bcc79c46"} Dec 09 13:33:28 crc kubenswrapper[4970]: I1209 13:33:28.240058 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cfjzl/must-gather-mts4w" event={"ID":"10b3359c-b6fa-40ba-bdbf-e374edd3a96a","Type":"ContainerStarted","Data":"c8879283eab36ba7525e2fd2c4bc4fc3f517ca27c1765a360a9ad1c1c3565996"} Dec 09 13:33:28 crc kubenswrapper[4970]: I1209 13:33:28.262973 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-cfjzl/must-gather-mts4w" podStartSLOduration=1.959192203 podStartE2EDuration="10.262954159s" podCreationTimestamp="2025-12-09 13:33:18 +0000 UTC" firstStartedPulling="2025-12-09 13:33:19.045208391 +0000 UTC m=+5211.605689442" lastFinishedPulling="2025-12-09 13:33:27.348970307 +0000 UTC m=+5219.909451398" observedRunningTime="2025-12-09 13:33:28.254605856 +0000 UTC m=+5220.815086917" watchObservedRunningTime="2025-12-09 13:33:28.262954159 +0000 UTC m=+5220.823435210" Dec 09 13:33:31 crc kubenswrapper[4970]: I1209 13:33:31.273605 4970 generic.go:334] "Generic (PLEG): container finished" podID="0e9c3ade-02cb-4788-a63c-78e036ff9ede" containerID="59fb0996959ebd0ae93fa94201e3615e6aa8f451fd8b2d9013fe4ccc768b9af6" exitCode=2 Dec 09 13:33:31 crc kubenswrapper[4970]: I1209 13:33:31.273683 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xcrtd" event={"ID":"0e9c3ade-02cb-4788-a63c-78e036ff9ede","Type":"ContainerDied","Data":"59fb0996959ebd0ae93fa94201e3615e6aa8f451fd8b2d9013fe4ccc768b9af6"} Dec 09 13:33:32 crc kubenswrapper[4970]: I1209 13:33:32.307164 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cfjzl/crc-debug-rdtfx"] Dec 09 13:33:32 crc kubenswrapper[4970]: I1209 13:33:32.309110 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cfjzl/crc-debug-rdtfx" Dec 09 13:33:32 crc kubenswrapper[4970]: I1209 13:33:32.372826 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n95kt\" (UniqueName: \"kubernetes.io/projected/bfda9acd-625f-4a4a-a6ae-3605f5033ca4-kube-api-access-n95kt\") pod \"crc-debug-rdtfx\" (UID: \"bfda9acd-625f-4a4a-a6ae-3605f5033ca4\") " pod="openshift-must-gather-cfjzl/crc-debug-rdtfx" Dec 09 13:33:32 crc kubenswrapper[4970]: I1209 13:33:32.373379 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bfda9acd-625f-4a4a-a6ae-3605f5033ca4-host\") pod \"crc-debug-rdtfx\" (UID: \"bfda9acd-625f-4a4a-a6ae-3605f5033ca4\") " pod="openshift-must-gather-cfjzl/crc-debug-rdtfx" Dec 09 13:33:32 crc kubenswrapper[4970]: I1209 13:33:32.475122 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bfda9acd-625f-4a4a-a6ae-3605f5033ca4-host\") pod \"crc-debug-rdtfx\" (UID: \"bfda9acd-625f-4a4a-a6ae-3605f5033ca4\") " pod="openshift-must-gather-cfjzl/crc-debug-rdtfx" Dec 09 13:33:32 crc kubenswrapper[4970]: I1209 13:33:32.475302 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bfda9acd-625f-4a4a-a6ae-3605f5033ca4-host\") pod \"crc-debug-rdtfx\" (UID: \"bfda9acd-625f-4a4a-a6ae-3605f5033ca4\") " pod="openshift-must-gather-cfjzl/crc-debug-rdtfx" Dec 09 13:33:32 crc kubenswrapper[4970]: I1209 13:33:32.475498 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n95kt\" (UniqueName: \"kubernetes.io/projected/bfda9acd-625f-4a4a-a6ae-3605f5033ca4-kube-api-access-n95kt\") pod \"crc-debug-rdtfx\" (UID: \"bfda9acd-625f-4a4a-a6ae-3605f5033ca4\") " pod="openshift-must-gather-cfjzl/crc-debug-rdtfx" Dec 09 13:33:32 crc kubenswrapper[4970]: I1209 13:33:32.504401 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n95kt\" (UniqueName: \"kubernetes.io/projected/bfda9acd-625f-4a4a-a6ae-3605f5033ca4-kube-api-access-n95kt\") pod \"crc-debug-rdtfx\" (UID: \"bfda9acd-625f-4a4a-a6ae-3605f5033ca4\") " pod="openshift-must-gather-cfjzl/crc-debug-rdtfx" Dec 09 13:33:32 crc kubenswrapper[4970]: I1209 13:33:32.632110 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cfjzl/crc-debug-rdtfx" Dec 09 13:33:32 crc kubenswrapper[4970]: I1209 13:33:32.826301 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xcrtd" Dec 09 13:33:32 crc kubenswrapper[4970]: I1209 13:33:32.884131 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e9c3ade-02cb-4788-a63c-78e036ff9ede-inventory\") pod \"0e9c3ade-02cb-4788-a63c-78e036ff9ede\" (UID: \"0e9c3ade-02cb-4788-a63c-78e036ff9ede\") " Dec 09 13:33:32 crc kubenswrapper[4970]: I1209 13:33:32.884229 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e9c3ade-02cb-4788-a63c-78e036ff9ede-ssh-key\") pod \"0e9c3ade-02cb-4788-a63c-78e036ff9ede\" (UID: \"0e9c3ade-02cb-4788-a63c-78e036ff9ede\") " Dec 09 13:33:32 crc kubenswrapper[4970]: I1209 13:33:32.884447 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqc55\" (UniqueName: \"kubernetes.io/projected/0e9c3ade-02cb-4788-a63c-78e036ff9ede-kube-api-access-dqc55\") pod \"0e9c3ade-02cb-4788-a63c-78e036ff9ede\" (UID: \"0e9c3ade-02cb-4788-a63c-78e036ff9ede\") " Dec 09 13:33:32 crc kubenswrapper[4970]: I1209 13:33:32.892507 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e9c3ade-02cb-4788-a63c-78e036ff9ede-kube-api-access-dqc55" (OuterVolumeSpecName: "kube-api-access-dqc55") pod "0e9c3ade-02cb-4788-a63c-78e036ff9ede" (UID: "0e9c3ade-02cb-4788-a63c-78e036ff9ede"). InnerVolumeSpecName "kube-api-access-dqc55". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:33:32 crc kubenswrapper[4970]: I1209 13:33:32.916221 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e9c3ade-02cb-4788-a63c-78e036ff9ede-inventory" (OuterVolumeSpecName: "inventory") pod "0e9c3ade-02cb-4788-a63c-78e036ff9ede" (UID: "0e9c3ade-02cb-4788-a63c-78e036ff9ede"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 13:33:32 crc kubenswrapper[4970]: I1209 13:33:32.927119 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e9c3ade-02cb-4788-a63c-78e036ff9ede-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0e9c3ade-02cb-4788-a63c-78e036ff9ede" (UID: "0e9c3ade-02cb-4788-a63c-78e036ff9ede"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 13:33:32 crc kubenswrapper[4970]: I1209 13:33:32.988216 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqc55\" (UniqueName: \"kubernetes.io/projected/0e9c3ade-02cb-4788-a63c-78e036ff9ede-kube-api-access-dqc55\") on node \"crc\" DevicePath \"\"" Dec 09 13:33:32 crc kubenswrapper[4970]: I1209 13:33:32.988264 4970 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e9c3ade-02cb-4788-a63c-78e036ff9ede-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 13:33:32 crc kubenswrapper[4970]: I1209 13:33:32.988274 4970 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e9c3ade-02cb-4788-a63c-78e036ff9ede-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 13:33:33 crc kubenswrapper[4970]: I1209 13:33:33.296198 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cfjzl/crc-debug-rdtfx" event={"ID":"bfda9acd-625f-4a4a-a6ae-3605f5033ca4","Type":"ContainerStarted","Data":"544fa6fc1d07ede3fda8a552998923a7bb81c2c90297dfb3756ac49001137417"} Dec 09 13:33:33 crc kubenswrapper[4970]: I1209 13:33:33.298526 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xcrtd" event={"ID":"0e9c3ade-02cb-4788-a63c-78e036ff9ede","Type":"ContainerDied","Data":"e82cc0b782ac527dfd144edf5ba82545114574bf142af3d798624788c3f9b7b4"} Dec 09 13:33:33 crc kubenswrapper[4970]: I1209 13:33:33.298578 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xcrtd" Dec 09 13:33:33 crc kubenswrapper[4970]: I1209 13:33:33.298589 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e82cc0b782ac527dfd144edf5ba82545114574bf142af3d798624788c3f9b7b4" Dec 09 13:33:37 crc kubenswrapper[4970]: E1209 13:33:37.824288 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:33:38 crc kubenswrapper[4970]: I1209 13:33:38.813130 4970 scope.go:117] "RemoveContainer" containerID="1c4bed9ff93fb37d4e1875bb5924dcc9902f12c57dd22616a64ab19553b2dcbd" Dec 09 13:33:38 crc kubenswrapper[4970]: E1209 13:33:38.813748 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:33:41 crc kubenswrapper[4970]: E1209 13:33:41.188532 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:33:47 crc kubenswrapper[4970]: I1209 13:33:47.509084 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-must-gather-cfjzl/crc-debug-rdtfx" event={"ID":"bfda9acd-625f-4a4a-a6ae-3605f5033ca4","Type":"ContainerStarted","Data":"ab50b851fda53c32e00074f97762414a29797bb61035db0c07a8d100fa46c05f"} Dec 09 13:33:47 crc kubenswrapper[4970]: I1209 13:33:47.529794 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-cfjzl/crc-debug-rdtfx" podStartSLOduration=2.185989401 podStartE2EDuration="15.529780303s" podCreationTimestamp="2025-12-09 13:33:32 +0000 UTC" firstStartedPulling="2025-12-09 13:33:32.704540204 +0000 UTC m=+5225.265021265" lastFinishedPulling="2025-12-09 13:33:46.048331116 +0000 UTC m=+5238.608812167" observedRunningTime="2025-12-09 13:33:47.526115915 +0000 UTC m=+5240.086596996" watchObservedRunningTime="2025-12-09 13:33:47.529780303 +0000 UTC m=+5240.090261354" Dec 09 13:33:49 crc kubenswrapper[4970]: E1209 13:33:49.815228 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:33:52 crc kubenswrapper[4970]: I1209 13:33:52.813126 4970 scope.go:117] "RemoveContainer" containerID="1c4bed9ff93fb37d4e1875bb5924dcc9902f12c57dd22616a64ab19553b2dcbd" Dec 09 13:33:52 crc kubenswrapper[4970]: E1209 13:33:52.813941 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:33:53 crc kubenswrapper[4970]: E1209 13:33:53.815168 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:34:02 crc kubenswrapper[4970]: E1209 13:34:02.568689 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:34:04 crc kubenswrapper[4970]: E1209 13:34:04.815685 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:34:07 crc kubenswrapper[4970]: I1209 13:34:07.823336 4970 scope.go:117] "RemoveContainer" containerID="1c4bed9ff93fb37d4e1875bb5924dcc9902f12c57dd22616a64ab19553b2dcbd" Dec 09 13:34:07 crc kubenswrapper[4970]: E1209 13:34:07.824480 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:34:13 crc kubenswrapper[4970]: I1209 13:34:13.815575 4970 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 13:34:13 crc kubenswrapper[4970]: E1209 13:34:13.940595 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 13:34:13 crc kubenswrapper[4970]: E1209 13:34:13.940660 4970 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 13:34:13 crc kubenswrapper[4970]: E1209 13:34:13.940795 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w5gl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-snjvx_openstack(cda5b8c0-51ad-4a63-bf1d-e3546f098ad3): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 13:34:13 crc kubenswrapper[4970]: E1209 13:34:13.941981 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:34:15 crc kubenswrapper[4970]: E1209 13:34:15.814533 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:34:15 crc kubenswrapper[4970]: I1209 13:34:15.828167 4970 generic.go:334] "Generic (PLEG): container finished" podID="bfda9acd-625f-4a4a-a6ae-3605f5033ca4" containerID="ab50b851fda53c32e00074f97762414a29797bb61035db0c07a8d100fa46c05f" exitCode=0 Dec 09 13:34:15 crc kubenswrapper[4970]: I1209 13:34:15.828210 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cfjzl/crc-debug-rdtfx" event={"ID":"bfda9acd-625f-4a4a-a6ae-3605f5033ca4","Type":"ContainerDied","Data":"ab50b851fda53c32e00074f97762414a29797bb61035db0c07a8d100fa46c05f"} Dec 09 13:34:16 crc kubenswrapper[4970]: I1209 13:34:16.997541 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cfjzl/crc-debug-rdtfx" Dec 09 13:34:17 crc kubenswrapper[4970]: I1209 13:34:17.113005 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cfjzl/crc-debug-rdtfx"] Dec 09 13:34:17 crc kubenswrapper[4970]: I1209 13:34:17.119001 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bfda9acd-625f-4a4a-a6ae-3605f5033ca4-host\") pod \"bfda9acd-625f-4a4a-a6ae-3605f5033ca4\" (UID: \"bfda9acd-625f-4a4a-a6ae-3605f5033ca4\") " Dec 09 13:34:17 crc kubenswrapper[4970]: I1209 13:34:17.119053 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n95kt\" (UniqueName: \"kubernetes.io/projected/bfda9acd-625f-4a4a-a6ae-3605f5033ca4-kube-api-access-n95kt\") pod \"bfda9acd-625f-4a4a-a6ae-3605f5033ca4\" (UID: \"bfda9acd-625f-4a4a-a6ae-3605f5033ca4\") " Dec 09 13:34:17 crc kubenswrapper[4970]: I1209 13:34:17.119143 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfda9acd-625f-4a4a-a6ae-3605f5033ca4-host" (OuterVolumeSpecName: "host") pod "bfda9acd-625f-4a4a-a6ae-3605f5033ca4" (UID: "bfda9acd-625f-4a4a-a6ae-3605f5033ca4"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 13:34:17 crc kubenswrapper[4970]: I1209 13:34:17.119849 4970 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bfda9acd-625f-4a4a-a6ae-3605f5033ca4-host\") on node \"crc\" DevicePath \"\"" Dec 09 13:34:17 crc kubenswrapper[4970]: I1209 13:34:17.125496 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfda9acd-625f-4a4a-a6ae-3605f5033ca4-kube-api-access-n95kt" (OuterVolumeSpecName: "kube-api-access-n95kt") pod "bfda9acd-625f-4a4a-a6ae-3605f5033ca4" (UID: "bfda9acd-625f-4a4a-a6ae-3605f5033ca4"). InnerVolumeSpecName "kube-api-access-n95kt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:34:17 crc kubenswrapper[4970]: I1209 13:34:17.130956 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cfjzl/crc-debug-rdtfx"] Dec 09 13:34:17 crc kubenswrapper[4970]: I1209 13:34:17.221769 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n95kt\" (UniqueName: \"kubernetes.io/projected/bfda9acd-625f-4a4a-a6ae-3605f5033ca4-kube-api-access-n95kt\") on node \"crc\" DevicePath \"\"" Dec 09 13:34:17 crc kubenswrapper[4970]: I1209 13:34:17.828718 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfda9acd-625f-4a4a-a6ae-3605f5033ca4" path="/var/lib/kubelet/pods/bfda9acd-625f-4a4a-a6ae-3605f5033ca4/volumes" Dec 09 13:34:17 crc kubenswrapper[4970]: I1209 13:34:17.850054 4970 scope.go:117] "RemoveContainer" containerID="ab50b851fda53c32e00074f97762414a29797bb61035db0c07a8d100fa46c05f" Dec 09 13:34:17 crc kubenswrapper[4970]: I1209 13:34:17.850156 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cfjzl/crc-debug-rdtfx" Dec 09 13:34:18 crc kubenswrapper[4970]: I1209 13:34:18.336068 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cfjzl/crc-debug-clg9l"] Dec 09 13:34:18 crc kubenswrapper[4970]: E1209 13:34:18.336789 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e9c3ade-02cb-4788-a63c-78e036ff9ede" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 13:34:18 crc kubenswrapper[4970]: I1209 13:34:18.336810 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e9c3ade-02cb-4788-a63c-78e036ff9ede" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 13:34:18 crc kubenswrapper[4970]: E1209 13:34:18.336862 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfda9acd-625f-4a4a-a6ae-3605f5033ca4" containerName="container-00" Dec 09 13:34:18 crc kubenswrapper[4970]: I1209 13:34:18.336868 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfda9acd-625f-4a4a-a6ae-3605f5033ca4" containerName="container-00" Dec 09 13:34:18 crc kubenswrapper[4970]: I1209 13:34:18.337090 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e9c3ade-02cb-4788-a63c-78e036ff9ede" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 13:34:18 crc kubenswrapper[4970]: I1209 13:34:18.337112 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfda9acd-625f-4a4a-a6ae-3605f5033ca4" containerName="container-00" Dec 09 13:34:18 crc kubenswrapper[4970]: I1209 13:34:18.337870 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cfjzl/crc-debug-clg9l" Dec 09 13:34:18 crc kubenswrapper[4970]: I1209 13:34:18.449861 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8f17e96e-67f9-4cd8-a524-c17738ed598d-host\") pod \"crc-debug-clg9l\" (UID: \"8f17e96e-67f9-4cd8-a524-c17738ed598d\") " pod="openshift-must-gather-cfjzl/crc-debug-clg9l" Dec 09 13:34:18 crc kubenswrapper[4970]: I1209 13:34:18.450120 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkvdw\" (UniqueName: \"kubernetes.io/projected/8f17e96e-67f9-4cd8-a524-c17738ed598d-kube-api-access-fkvdw\") pod \"crc-debug-clg9l\" (UID: \"8f17e96e-67f9-4cd8-a524-c17738ed598d\") " pod="openshift-must-gather-cfjzl/crc-debug-clg9l" Dec 09 13:34:18 crc kubenswrapper[4970]: I1209 13:34:18.552718 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8f17e96e-67f9-4cd8-a524-c17738ed598d-host\") pod \"crc-debug-clg9l\" (UID: \"8f17e96e-67f9-4cd8-a524-c17738ed598d\") " pod="openshift-must-gather-cfjzl/crc-debug-clg9l" Dec 09 13:34:18 crc kubenswrapper[4970]: I1209 13:34:18.552865 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkvdw\" (UniqueName: \"kubernetes.io/projected/8f17e96e-67f9-4cd8-a524-c17738ed598d-kube-api-access-fkvdw\") pod \"crc-debug-clg9l\" (UID: \"8f17e96e-67f9-4cd8-a524-c17738ed598d\") " pod="openshift-must-gather-cfjzl/crc-debug-clg9l" Dec 09 13:34:18 crc kubenswrapper[4970]: I1209 13:34:18.552896 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8f17e96e-67f9-4cd8-a524-c17738ed598d-host\") pod \"crc-debug-clg9l\" (UID: 
\"8f17e96e-67f9-4cd8-a524-c17738ed598d\") " pod="openshift-must-gather-cfjzl/crc-debug-clg9l" Dec 09 13:34:18 crc kubenswrapper[4970]: I1209 13:34:18.572507 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkvdw\" (UniqueName: \"kubernetes.io/projected/8f17e96e-67f9-4cd8-a524-c17738ed598d-kube-api-access-fkvdw\") pod \"crc-debug-clg9l\" (UID: \"8f17e96e-67f9-4cd8-a524-c17738ed598d\") " pod="openshift-must-gather-cfjzl/crc-debug-clg9l" Dec 09 13:34:18 crc kubenswrapper[4970]: I1209 13:34:18.659028 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cfjzl/crc-debug-clg9l" Dec 09 13:34:18 crc kubenswrapper[4970]: W1209 13:34:18.702181 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f17e96e_67f9_4cd8_a524_c17738ed598d.slice/crio-8f7ec73b092a1ebc09e93346a6445ebfdf0cc3b84fda4501451382268fcd2c3a WatchSource:0}: Error finding container 8f7ec73b092a1ebc09e93346a6445ebfdf0cc3b84fda4501451382268fcd2c3a: Status 404 returned error can't find the container with id 8f7ec73b092a1ebc09e93346a6445ebfdf0cc3b84fda4501451382268fcd2c3a Dec 09 13:34:18 crc kubenswrapper[4970]: I1209 13:34:18.860072 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cfjzl/crc-debug-clg9l" event={"ID":"8f17e96e-67f9-4cd8-a524-c17738ed598d","Type":"ContainerStarted","Data":"8f7ec73b092a1ebc09e93346a6445ebfdf0cc3b84fda4501451382268fcd2c3a"} Dec 09 13:34:19 crc kubenswrapper[4970]: I1209 13:34:19.877701 4970 generic.go:334] "Generic (PLEG): container finished" podID="8f17e96e-67f9-4cd8-a524-c17738ed598d" containerID="f992b81f365636e90a981d21f875b118690d73beb013ddee04257d7614fa9cc1" exitCode=1 Dec 09 13:34:19 crc kubenswrapper[4970]: I1209 13:34:19.878158 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cfjzl/crc-debug-clg9l" event={"ID":"8f17e96e-67f9-4cd8-a524-c17738ed598d","Type":"ContainerDied","Data":"f992b81f365636e90a981d21f875b118690d73beb013ddee04257d7614fa9cc1"} Dec 09 13:34:19 crc kubenswrapper[4970]: I1209 13:34:19.926276 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cfjzl/crc-debug-clg9l"] Dec 09 13:34:19 crc kubenswrapper[4970]: I1209 13:34:19.938677 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cfjzl/crc-debug-clg9l"] Dec 09 13:34:21 crc kubenswrapper[4970]: I1209 13:34:21.015377 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cfjzl/crc-debug-clg9l" Dec 09 13:34:21 crc kubenswrapper[4970]: I1209 13:34:21.124712 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8f17e96e-67f9-4cd8-a524-c17738ed598d-host\") pod \"8f17e96e-67f9-4cd8-a524-c17738ed598d\" (UID: \"8f17e96e-67f9-4cd8-a524-c17738ed598d\") " Dec 09 13:34:21 crc kubenswrapper[4970]: I1209 13:34:21.124892 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f17e96e-67f9-4cd8-a524-c17738ed598d-host" (OuterVolumeSpecName: "host") pod "8f17e96e-67f9-4cd8-a524-c17738ed598d" (UID: "8f17e96e-67f9-4cd8-a524-c17738ed598d"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 13:34:21 crc kubenswrapper[4970]: I1209 13:34:21.124894 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkvdw\" (UniqueName: \"kubernetes.io/projected/8f17e96e-67f9-4cd8-a524-c17738ed598d-kube-api-access-fkvdw\") pod \"8f17e96e-67f9-4cd8-a524-c17738ed598d\" (UID: \"8f17e96e-67f9-4cd8-a524-c17738ed598d\") " Dec 09 13:34:21 crc kubenswrapper[4970]: I1209 13:34:21.126151 4970 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8f17e96e-67f9-4cd8-a524-c17738ed598d-host\") on node \"crc\" DevicePath \"\"" Dec 09 13:34:21 crc kubenswrapper[4970]: I1209 13:34:21.130608 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f17e96e-67f9-4cd8-a524-c17738ed598d-kube-api-access-fkvdw" (OuterVolumeSpecName: "kube-api-access-fkvdw") pod "8f17e96e-67f9-4cd8-a524-c17738ed598d" (UID: "8f17e96e-67f9-4cd8-a524-c17738ed598d"). InnerVolumeSpecName "kube-api-access-fkvdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:34:21 crc kubenswrapper[4970]: I1209 13:34:21.229081 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkvdw\" (UniqueName: \"kubernetes.io/projected/8f17e96e-67f9-4cd8-a524-c17738ed598d-kube-api-access-fkvdw\") on node \"crc\" DevicePath \"\"" Dec 09 13:34:21 crc kubenswrapper[4970]: I1209 13:34:21.826893 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f17e96e-67f9-4cd8-a524-c17738ed598d" path="/var/lib/kubelet/pods/8f17e96e-67f9-4cd8-a524-c17738ed598d/volumes" Dec 09 13:34:21 crc kubenswrapper[4970]: I1209 13:34:21.895795 4970 scope.go:117] "RemoveContainer" containerID="f992b81f365636e90a981d21f875b118690d73beb013ddee04257d7614fa9cc1" Dec 09 13:34:21 crc kubenswrapper[4970]: I1209 13:34:21.895858 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cfjzl/crc-debug-clg9l" Dec 09 13:34:22 crc kubenswrapper[4970]: I1209 13:34:22.813323 4970 scope.go:117] "RemoveContainer" containerID="1c4bed9ff93fb37d4e1875bb5924dcc9902f12c57dd22616a64ab19553b2dcbd" Dec 09 13:34:22 crc kubenswrapper[4970]: E1209 13:34:22.813892 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:34:26 crc kubenswrapper[4970]: E1209 13:34:26.813890 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:34:27 crc kubenswrapper[4970]: E1209 13:34:27.824771 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:34:33 crc kubenswrapper[4970]: I1209 13:34:33.813025 4970 scope.go:117] "RemoveContainer" containerID="1c4bed9ff93fb37d4e1875bb5924dcc9902f12c57dd22616a64ab19553b2dcbd" Dec 09 13:34:33 crc kubenswrapper[4970]: E1209 13:34:33.813748 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:34:37 crc kubenswrapper[4970]: E1209 13:34:37.824225 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:34:38 crc kubenswrapper[4970]: E1209 13:34:38.815478 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:34:44 crc kubenswrapper[4970]: I1209 13:34:44.812872 4970 scope.go:117] "RemoveContainer" containerID="1c4bed9ff93fb37d4e1875bb5924dcc9902f12c57dd22616a64ab19553b2dcbd" Dec 09 13:34:44 crc kubenswrapper[4970]: E1209 13:34:44.813677 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:34:48 crc kubenswrapper[4970]: E1209 13:34:48.939649 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 13:34:48 crc kubenswrapper[4970]: E1209 13:34:48.940143 4970 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 13:34:48 crc kubenswrapper[4970]: E1209 13:34:48.940293 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n679h5b7h5f8h695h656h57bh5c7h7bh5c5hc4h5hbch654h5h5b7h54dh5fh688h548h57bh6fh54fh5d4hf7h5ffh54ch566h565hdchb4h65dh5fbq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqqs9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ea52f6b9-599e-4ac5-94c6-79949c705be8): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 13:34:48 crc kubenswrapper[4970]: E1209 13:34:48.941804 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:34:53 crc kubenswrapper[4970]: E1209 13:34:53.814870 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:34:54 crc kubenswrapper[4970]: I1209 13:34:54.618663 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x94lr"] Dec 09 13:34:54 crc kubenswrapper[4970]: E1209 13:34:54.619863 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f17e96e-67f9-4cd8-a524-c17738ed598d" containerName="container-00" Dec 09 13:34:54 crc kubenswrapper[4970]: I1209 13:34:54.619949 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f17e96e-67f9-4cd8-a524-c17738ed598d" containerName="container-00" Dec 09 13:34:54 crc kubenswrapper[4970]: I1209 13:34:54.620236 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f17e96e-67f9-4cd8-a524-c17738ed598d" containerName="container-00" Dec 09 13:34:54 crc kubenswrapper[4970]: I1209 13:34:54.622114 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x94lr" Dec 09 13:34:54 crc kubenswrapper[4970]: I1209 13:34:54.636475 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x94lr"] Dec 09 13:34:54 crc kubenswrapper[4970]: I1209 13:34:54.731868 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9gp8\" (UniqueName: \"kubernetes.io/projected/625c3635-ba7c-4b92-bd1e-9d2f293dc14c-kube-api-access-q9gp8\") pod \"redhat-operators-x94lr\" (UID: \"625c3635-ba7c-4b92-bd1e-9d2f293dc14c\") " pod="openshift-marketplace/redhat-operators-x94lr" Dec 09 13:34:54 crc kubenswrapper[4970]: I1209 13:34:54.731936 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/625c3635-ba7c-4b92-bd1e-9d2f293dc14c-catalog-content\") pod \"redhat-operators-x94lr\" (UID: \"625c3635-ba7c-4b92-bd1e-9d2f293dc14c\") " pod="openshift-marketplace/redhat-operators-x94lr" Dec 09 13:34:54 crc kubenswrapper[4970]: I1209 13:34:54.732220 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/625c3635-ba7c-4b92-bd1e-9d2f293dc14c-utilities\") pod \"redhat-operators-x94lr\" (UID: \"625c3635-ba7c-4b92-bd1e-9d2f293dc14c\") " pod="openshift-marketplace/redhat-operators-x94lr" Dec 09 13:34:54 crc kubenswrapper[4970]: I1209 13:34:54.834522 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9gp8\" (UniqueName: \"kubernetes.io/projected/625c3635-ba7c-4b92-bd1e-9d2f293dc14c-kube-api-access-q9gp8\") pod \"redhat-operators-x94lr\" (UID: \"625c3635-ba7c-4b92-bd1e-9d2f293dc14c\") " pod="openshift-marketplace/redhat-operators-x94lr" Dec 09 13:34:54 crc kubenswrapper[4970]: I1209 13:34:54.834610 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/625c3635-ba7c-4b92-bd1e-9d2f293dc14c-catalog-content\") pod \"redhat-operators-x94lr\" (UID: \"625c3635-ba7c-4b92-bd1e-9d2f293dc14c\") " pod="openshift-marketplace/redhat-operators-x94lr" Dec 09 13:34:54 crc kubenswrapper[4970]: I1209 13:34:54.834710 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/625c3635-ba7c-4b92-bd1e-9d2f293dc14c-utilities\") pod \"redhat-operators-x94lr\" (UID: \"625c3635-ba7c-4b92-bd1e-9d2f293dc14c\") " pod="openshift-marketplace/redhat-operators-x94lr" Dec 09 13:34:54 crc kubenswrapper[4970]: I1209 13:34:54.835138 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/625c3635-ba7c-4b92-bd1e-9d2f293dc14c-utilities\") pod \"redhat-operators-x94lr\" (UID: \"625c3635-ba7c-4b92-bd1e-9d2f293dc14c\") " pod="openshift-marketplace/redhat-operators-x94lr" Dec 09 13:34:54 crc kubenswrapper[4970]: I1209 13:34:54.835743 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/625c3635-ba7c-4b92-bd1e-9d2f293dc14c-catalog-content\") pod \"redhat-operators-x94lr\" (UID: \"625c3635-ba7c-4b92-bd1e-9d2f293dc14c\") " pod="openshift-marketplace/redhat-operators-x94lr" Dec 09 13:34:54 crc kubenswrapper[4970]: I1209 13:34:54.856963 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-q9gp8\" (UniqueName: \"kubernetes.io/projected/625c3635-ba7c-4b92-bd1e-9d2f293dc14c-kube-api-access-q9gp8\") pod \"redhat-operators-x94lr\" (UID: \"625c3635-ba7c-4b92-bd1e-9d2f293dc14c\") " pod="openshift-marketplace/redhat-operators-x94lr" Dec 09 13:34:54 crc kubenswrapper[4970]: I1209 13:34:54.956347 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x94lr" Dec 09 13:34:55 crc kubenswrapper[4970]: I1209 13:34:55.512463 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x94lr"] Dec 09 13:34:56 crc kubenswrapper[4970]: I1209 13:34:56.322408 4970 generic.go:334] "Generic (PLEG): container finished" podID="625c3635-ba7c-4b92-bd1e-9d2f293dc14c" containerID="379b47c30853b0296bc939b005139ecd9474b3e47ab3078946e3ce2ac20c4237" exitCode=0 Dec 09 13:34:56 crc kubenswrapper[4970]: I1209 13:34:56.322500 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x94lr" event={"ID":"625c3635-ba7c-4b92-bd1e-9d2f293dc14c","Type":"ContainerDied","Data":"379b47c30853b0296bc939b005139ecd9474b3e47ab3078946e3ce2ac20c4237"} Dec 09 13:34:56 crc kubenswrapper[4970]: I1209 13:34:56.323459 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x94lr" event={"ID":"625c3635-ba7c-4b92-bd1e-9d2f293dc14c","Type":"ContainerStarted","Data":"d5d0f70b9447c2a1b92881dd7de0fd5ebc8b601ff99771f0ac73f4687bf09f43"} Dec 09 13:34:57 crc kubenswrapper[4970]: I1209 13:34:57.335853 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x94lr" event={"ID":"625c3635-ba7c-4b92-bd1e-9d2f293dc14c","Type":"ContainerStarted","Data":"b4360be62b89bea13be9121b74dc45cfbabfa032f7593c281d1b509992cef52a"} Dec 09 13:34:59 crc kubenswrapper[4970]: I1209 13:34:59.814432 4970 scope.go:117] "RemoveContainer" containerID="1c4bed9ff93fb37d4e1875bb5924dcc9902f12c57dd22616a64ab19553b2dcbd" Dec 09 13:34:59 crc kubenswrapper[4970]: E1209 13:34:59.815533 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:34:59 crc kubenswrapper[4970]: E1209 13:34:59.815760 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:35:00 crc kubenswrapper[4970]: I1209 13:35:00.373938 4970 generic.go:334] "Generic (PLEG): container finished" podID="625c3635-ba7c-4b92-bd1e-9d2f293dc14c" containerID="b4360be62b89bea13be9121b74dc45cfbabfa032f7593c281d1b509992cef52a" exitCode=0 Dec 09 13:35:00 crc kubenswrapper[4970]: I1209 13:35:00.373988 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x94lr" event={"ID":"625c3635-ba7c-4b92-bd1e-9d2f293dc14c","Type":"ContainerDied","Data":"b4360be62b89bea13be9121b74dc45cfbabfa032f7593c281d1b509992cef52a"} Dec 09 13:35:02 crc kubenswrapper[4970]: I1209 13:35:02.487585 
4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x94lr" event={"ID":"625c3635-ba7c-4b92-bd1e-9d2f293dc14c","Type":"ContainerStarted","Data":"63f06059ba67d150dc26091e25bb1bc454f93fac42b6f1521497fbd84120d6b9"} Dec 09 13:35:02 crc kubenswrapper[4970]: I1209 13:35:02.514857 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x94lr" podStartSLOduration=3.664073503 podStartE2EDuration="8.514530301s" podCreationTimestamp="2025-12-09 13:34:54 +0000 UTC" firstStartedPulling="2025-12-09 13:34:56.324529992 +0000 UTC m=+5308.885011043" lastFinishedPulling="2025-12-09 13:35:01.17498679 +0000 UTC m=+5313.735467841" observedRunningTime="2025-12-09 13:35:02.51150783 +0000 UTC m=+5315.071988881" watchObservedRunningTime="2025-12-09 13:35:02.514530301 +0000 UTC m=+5315.075011352" Dec 09 13:35:04 crc kubenswrapper[4970]: E1209 13:35:04.815481 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:35:04 crc kubenswrapper[4970]: I1209 13:35:04.956495 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x94lr" Dec 09 13:35:04 crc kubenswrapper[4970]: I1209 13:35:04.956563 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x94lr" Dec 09 13:35:06 crc kubenswrapper[4970]: I1209 13:35:06.024580 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x94lr" podUID="625c3635-ba7c-4b92-bd1e-9d2f293dc14c" containerName="registry-server" probeResult="failure" output=< Dec 09 13:35:06 crc kubenswrapper[4970]: timeout: failed to connect service ":50051" within 1s Dec 09 13:35:06 crc kubenswrapper[4970]: > Dec 09 13:35:10 crc kubenswrapper[4970]: I1209 13:35:10.812122 4970 scope.go:117] "RemoveContainer" containerID="1c4bed9ff93fb37d4e1875bb5924dcc9902f12c57dd22616a64ab19553b2dcbd" Dec 09 13:35:10 crc kubenswrapper[4970]: E1209 13:35:10.812969 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:35:10 crc kubenswrapper[4970]: E1209 13:35:10.815240 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:35:13 crc kubenswrapper[4970]: I1209 13:35:13.357739 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_316942aa-13a6-4e83-aff5-b4f54f43ef20/aodh-api/0.log" Dec 09 13:35:13 crc kubenswrapper[4970]: I1209 13:35:13.524559 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_316942aa-13a6-4e83-aff5-b4f54f43ef20/aodh-evaluator/0.log" 
Dec 09 13:35:13 crc kubenswrapper[4970]: I1209 13:35:13.585710 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_316942aa-13a6-4e83-aff5-b4f54f43ef20/aodh-listener/0.log" Dec 09 13:35:13 crc kubenswrapper[4970]: I1209 13:35:13.614151 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_316942aa-13a6-4e83-aff5-b4f54f43ef20/aodh-notifier/0.log" Dec 09 13:35:13 crc kubenswrapper[4970]: I1209 13:35:13.752991 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7c8c76f86b-vkq6w_0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11/barbican-api/0.log" Dec 09 13:35:13 crc kubenswrapper[4970]: I1209 13:35:13.779808 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7c8c76f86b-vkq6w_0eff8b5e-8cf6-4537-a4bd-1afda2b8bf11/barbican-api-log/0.log" Dec 09 13:35:13 crc kubenswrapper[4970]: I1209 13:35:13.907823 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-b7745d654-pr4dm_4ccb0f39-43d9-4de1-b0ee-cc108e90ce1a/barbican-keystone-listener/0.log" Dec 09 13:35:14 crc kubenswrapper[4970]: I1209 13:35:14.019505 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-b7745d654-pr4dm_4ccb0f39-43d9-4de1-b0ee-cc108e90ce1a/barbican-keystone-listener-log/0.log" Dec 09 13:35:14 crc kubenswrapper[4970]: I1209 13:35:14.112767 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7b7b55ddff-q4sdr_21d879fa-8dd8-4177-88c4-63e6a5689826/barbican-worker-log/0.log" Dec 09 13:35:14 crc kubenswrapper[4970]: I1209 13:35:14.124894 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7b7b55ddff-q4sdr_21d879fa-8dd8-4177-88c4-63e6a5689826/barbican-worker/0.log" Dec 09 13:35:14 crc kubenswrapper[4970]: I1209 13:35:14.269177 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-s7wgg_ee93e83b-cc64-4847-8245-0e5e002f9540/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 13:35:14 crc kubenswrapper[4970]: I1209 13:35:14.480537 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ea52f6b9-599e-4ac5-94c6-79949c705be8/ceilometer-notification-agent/0.log" Dec 09 13:35:14 crc kubenswrapper[4970]: I1209 13:35:14.528473 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ea52f6b9-599e-4ac5-94c6-79949c705be8/proxy-httpd/0.log" Dec 09 13:35:14 crc kubenswrapper[4970]: I1209 13:35:14.553001 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_ea52f6b9-599e-4ac5-94c6-79949c705be8/sg-core/0.log" Dec 09 13:35:14 crc kubenswrapper[4970]: I1209 13:35:14.722213 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_670bed16-1df0-4568-9305-886c7ec7a4f5/cinder-api-log/0.log" Dec 09 13:35:14 crc kubenswrapper[4970]: I1209 13:35:14.762775 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_670bed16-1df0-4568-9305-886c7ec7a4f5/cinder-api/0.log" Dec 09 13:35:14 crc kubenswrapper[4970]: I1209 13:35:14.864624 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_d89950ef-c3f6-46ae-aa45-65baf0c0fe66/cinder-scheduler/0.log" Dec 09 13:35:14 crc kubenswrapper[4970]: I1209 13:35:14.936582 4970 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-scheduler-0_d89950ef-c3f6-46ae-aa45-65baf0c0fe66/probe/0.log" Dec 09 13:35:15 crc kubenswrapper[4970]: I1209 13:35:15.008356 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x94lr" Dec 09 13:35:15 crc kubenswrapper[4970]: I1209 13:35:15.080921 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x94lr" Dec 09 13:35:15 crc kubenswrapper[4970]: I1209 13:35:15.247976 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x94lr"] Dec 09 13:35:15 crc kubenswrapper[4970]: I1209 13:35:15.271060 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-jwwtx_bac29da9-2775-4d04-8f3a-2f65bc11940e/init/0.log" Dec 09 13:35:15 crc kubenswrapper[4970]: I1209 13:35:15.423592 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-jwwtx_bac29da9-2775-4d04-8f3a-2f65bc11940e/init/0.log" Dec 09 13:35:15 crc kubenswrapper[4970]: I1209 13:35:15.471475 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-6f6df4f56c-jwwtx_bac29da9-2775-4d04-8f3a-2f65bc11940e/dnsmasq-dns/0.log" Dec 09 13:35:15 crc kubenswrapper[4970]: I1209 13:35:15.502627 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-84h7q_e86845e6-6bd5-4fb3-9a63-7c6f4d730644/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 13:35:15 crc kubenswrapper[4970]: I1209 13:35:15.731243 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-b72h6_ab4e637c-e74e-4e8b-9d81-98eadd755fc3/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 13:35:15 crc kubenswrapper[4970]: I1209 13:35:15.771388 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-nrdbk_711dabf9-95cd-423b-ad4b-2273f430d8f2/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 13:35:15 crc kubenswrapper[4970]: I1209 13:35:15.954787 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-pxgd7_dcbd4a48-8e44-4b01-b6fb-efeb8e35c03c/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 13:35:16 crc kubenswrapper[4970]: I1209 13:35:16.058214 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-rs6hg_b45bf430-1223-44e1-b791-212935f09b2a/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 13:35:16 crc kubenswrapper[4970]: I1209 13:35:16.269703 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-vvp2x_875c14b9-3ae4-43ed-b83f-78088737e656/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 13:35:16 crc kubenswrapper[4970]: I1209 13:35:16.308486 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-xcrtd_0e9c3ade-02cb-4788-a63c-78e036ff9ede/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 13:35:16 crc kubenswrapper[4970]: I1209 13:35:16.527648 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_be307043-99ae-477a-8134-c8e971674ff3/glance-httpd/0.log" Dec 09 
13:35:16 crc kubenswrapper[4970]: I1209 13:35:16.535102 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_be307043-99ae-477a-8134-c8e971674ff3/glance-log/0.log" Dec 09 13:35:16 crc kubenswrapper[4970]: I1209 13:35:16.684524 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c97aa48e-1a86-4909-8ec0-62d5599c18ed/glance-httpd/0.log" Dec 09 13:35:16 crc kubenswrapper[4970]: I1209 13:35:16.717328 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x94lr" podUID="625c3635-ba7c-4b92-bd1e-9d2f293dc14c" containerName="registry-server" containerID="cri-o://63f06059ba67d150dc26091e25bb1bc454f93fac42b6f1521497fbd84120d6b9" gracePeriod=2 Dec 09 13:35:16 crc kubenswrapper[4970]: I1209 13:35:16.724106 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c97aa48e-1a86-4909-8ec0-62d5599c18ed/glance-log/0.log" Dec 09 13:35:16 crc kubenswrapper[4970]: E1209 13:35:16.814529 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:35:17 crc kubenswrapper[4970]: I1209 13:35:17.283983 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x94lr" Dec 09 13:35:17 crc kubenswrapper[4970]: I1209 13:35:17.310462 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-6f965965cd-jt4tk_5b114785-502f-476f-a0d5-8ba13694acbc/heat-api/0.log" Dec 09 13:35:17 crc kubenswrapper[4970]: I1209 13:35:17.446085 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9gp8\" (UniqueName: \"kubernetes.io/projected/625c3635-ba7c-4b92-bd1e-9d2f293dc14c-kube-api-access-q9gp8\") pod \"625c3635-ba7c-4b92-bd1e-9d2f293dc14c\" (UID: \"625c3635-ba7c-4b92-bd1e-9d2f293dc14c\") " Dec 09 13:35:17 crc kubenswrapper[4970]: I1209 13:35:17.446367 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/625c3635-ba7c-4b92-bd1e-9d2f293dc14c-utilities\") pod \"625c3635-ba7c-4b92-bd1e-9d2f293dc14c\" (UID: \"625c3635-ba7c-4b92-bd1e-9d2f293dc14c\") " Dec 09 13:35:17 crc kubenswrapper[4970]: I1209 13:35:17.446425 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/625c3635-ba7c-4b92-bd1e-9d2f293dc14c-catalog-content\") pod \"625c3635-ba7c-4b92-bd1e-9d2f293dc14c\" (UID: \"625c3635-ba7c-4b92-bd1e-9d2f293dc14c\") " Dec 09 13:35:17 crc kubenswrapper[4970]: I1209 13:35:17.448303 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/625c3635-ba7c-4b92-bd1e-9d2f293dc14c-utilities" (OuterVolumeSpecName: "utilities") pod "625c3635-ba7c-4b92-bd1e-9d2f293dc14c" (UID: "625c3635-ba7c-4b92-bd1e-9d2f293dc14c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:35:17 crc kubenswrapper[4970]: I1209 13:35:17.462473 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/625c3635-ba7c-4b92-bd1e-9d2f293dc14c-kube-api-access-q9gp8" (OuterVolumeSpecName: "kube-api-access-q9gp8") pod "625c3635-ba7c-4b92-bd1e-9d2f293dc14c" (UID: "625c3635-ba7c-4b92-bd1e-9d2f293dc14c"). InnerVolumeSpecName "kube-api-access-q9gp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:35:17 crc kubenswrapper[4970]: I1209 13:35:17.548830 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9gp8\" (UniqueName: \"kubernetes.io/projected/625c3635-ba7c-4b92-bd1e-9d2f293dc14c-kube-api-access-q9gp8\") on node \"crc\" DevicePath \"\"" Dec 09 13:35:17 crc kubenswrapper[4970]: I1209 13:35:17.550923 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/625c3635-ba7c-4b92-bd1e-9d2f293dc14c-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 13:35:17 crc kubenswrapper[4970]: I1209 13:35:17.564751 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-6996656b77-zt25p_c25a77e0-9bbd-4ff2-b53c-ecb4712198b1/heat-engine/0.log" Dec 09 13:35:17 crc kubenswrapper[4970]: I1209 13:35:17.570433 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/625c3635-ba7c-4b92-bd1e-9d2f293dc14c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "625c3635-ba7c-4b92-bd1e-9d2f293dc14c" (UID: "625c3635-ba7c-4b92-bd1e-9d2f293dc14c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:35:17 crc kubenswrapper[4970]: I1209 13:35:17.596128 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-657b4c4594-7x65f_8051115f-1bf5-4043-b6b0-967d469c0d6a/heat-cfnapi/0.log" Dec 09 13:35:17 crc kubenswrapper[4970]: I1209 13:35:17.653125 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/625c3635-ba7c-4b92-bd1e-9d2f293dc14c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 13:35:17 crc kubenswrapper[4970]: I1209 13:35:17.727768 4970 generic.go:334] "Generic (PLEG): container finished" podID="625c3635-ba7c-4b92-bd1e-9d2f293dc14c" containerID="63f06059ba67d150dc26091e25bb1bc454f93fac42b6f1521497fbd84120d6b9" exitCode=0 Dec 09 13:35:17 crc kubenswrapper[4970]: I1209 13:35:17.730178 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x94lr" Dec 09 13:35:17 crc kubenswrapper[4970]: I1209 13:35:17.730301 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x94lr" event={"ID":"625c3635-ba7c-4b92-bd1e-9d2f293dc14c","Type":"ContainerDied","Data":"63f06059ba67d150dc26091e25bb1bc454f93fac42b6f1521497fbd84120d6b9"} Dec 09 13:35:17 crc kubenswrapper[4970]: I1209 13:35:17.730361 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x94lr" event={"ID":"625c3635-ba7c-4b92-bd1e-9d2f293dc14c","Type":"ContainerDied","Data":"d5d0f70b9447c2a1b92881dd7de0fd5ebc8b601ff99771f0ac73f4687bf09f43"} Dec 09 13:35:17 crc kubenswrapper[4970]: I1209 13:35:17.730388 4970 scope.go:117] "RemoveContainer" containerID="63f06059ba67d150dc26091e25bb1bc454f93fac42b6f1521497fbd84120d6b9" Dec 09 13:35:17 crc kubenswrapper[4970]: I1209 13:35:17.746162 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-565ff75dc9-922w2_c2aa972b-8e19-46ea-b7e5-8b302c81dc0a/keystone-api/0.log" Dec 09 13:35:17 crc kubenswrapper[4970]: I1209 13:35:17.777179 4970 scope.go:117] "RemoveContainer" containerID="b4360be62b89bea13be9121b74dc45cfbabfa032f7593c281d1b509992cef52a" Dec 09 13:35:17 crc kubenswrapper[4970]: I1209 13:35:17.780170 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x94lr"] Dec 09 13:35:17 crc kubenswrapper[4970]: I1209 13:35:17.796422 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x94lr"] Dec 09 13:35:17 crc kubenswrapper[4970]: I1209 13:35:17.815396 4970 scope.go:117] "RemoveContainer" containerID="379b47c30853b0296bc939b005139ecd9474b3e47ab3078946e3ce2ac20c4237" Dec 09 13:35:17 crc kubenswrapper[4970]: I1209 13:35:17.828415 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29421421-fgrd7_8a7e8b7d-c30f-41e8-bd88-7a6d9c79b7a8/keystone-cron/0.log" Dec 09 13:35:17 crc kubenswrapper[4970]: I1209 13:35:17.832541 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="625c3635-ba7c-4b92-bd1e-9d2f293dc14c" path="/var/lib/kubelet/pods/625c3635-ba7c-4b92-bd1e-9d2f293dc14c/volumes" Dec 09 13:35:17 crc kubenswrapper[4970]: I1209 13:35:17.883708 4970 scope.go:117] "RemoveContainer" containerID="63f06059ba67d150dc26091e25bb1bc454f93fac42b6f1521497fbd84120d6b9" Dec 09 13:35:17 crc kubenswrapper[4970]: E1209 13:35:17.884871 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63f06059ba67d150dc26091e25bb1bc454f93fac42b6f1521497fbd84120d6b9\": container with ID starting with 63f06059ba67d150dc26091e25bb1bc454f93fac42b6f1521497fbd84120d6b9 not found: ID does not exist" containerID="63f06059ba67d150dc26091e25bb1bc454f93fac42b6f1521497fbd84120d6b9" Dec 09 13:35:17 crc kubenswrapper[4970]: I1209 13:35:17.884904 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63f06059ba67d150dc26091e25bb1bc454f93fac42b6f1521497fbd84120d6b9"} err="failed to get container status \"63f06059ba67d150dc26091e25bb1bc454f93fac42b6f1521497fbd84120d6b9\": rpc error: code = NotFound desc = could not find container \"63f06059ba67d150dc26091e25bb1bc454f93fac42b6f1521497fbd84120d6b9\": container with ID starting with 63f06059ba67d150dc26091e25bb1bc454f93fac42b6f1521497fbd84120d6b9 not found: ID does not exist" Dec 09 13:35:17 crc 
kubenswrapper[4970]: I1209 13:35:17.884923 4970 scope.go:117] "RemoveContainer" containerID="b4360be62b89bea13be9121b74dc45cfbabfa032f7593c281d1b509992cef52a" Dec 09 13:35:17 crc kubenswrapper[4970]: E1209 13:35:17.885327 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4360be62b89bea13be9121b74dc45cfbabfa032f7593c281d1b509992cef52a\": container with ID starting with b4360be62b89bea13be9121b74dc45cfbabfa032f7593c281d1b509992cef52a not found: ID does not exist" containerID="b4360be62b89bea13be9121b74dc45cfbabfa032f7593c281d1b509992cef52a" Dec 09 13:35:17 crc kubenswrapper[4970]: I1209 13:35:17.885397 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4360be62b89bea13be9121b74dc45cfbabfa032f7593c281d1b509992cef52a"} err="failed to get container status \"b4360be62b89bea13be9121b74dc45cfbabfa032f7593c281d1b509992cef52a\": rpc error: code = NotFound desc = could not find container \"b4360be62b89bea13be9121b74dc45cfbabfa032f7593c281d1b509992cef52a\": container with ID starting with b4360be62b89bea13be9121b74dc45cfbabfa032f7593c281d1b509992cef52a not found: ID does not exist" Dec 09 13:35:17 crc kubenswrapper[4970]: I1209 13:35:17.885425 4970 scope.go:117] "RemoveContainer" containerID="379b47c30853b0296bc939b005139ecd9474b3e47ab3078946e3ce2ac20c4237" Dec 09 13:35:17 crc kubenswrapper[4970]: E1209 13:35:17.885896 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"379b47c30853b0296bc939b005139ecd9474b3e47ab3078946e3ce2ac20c4237\": container with ID starting with 379b47c30853b0296bc939b005139ecd9474b3e47ab3078946e3ce2ac20c4237 not found: ID does not exist" containerID="379b47c30853b0296bc939b005139ecd9474b3e47ab3078946e3ce2ac20c4237" Dec 09 13:35:17 crc kubenswrapper[4970]: I1209 13:35:17.885921 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"379b47c30853b0296bc939b005139ecd9474b3e47ab3078946e3ce2ac20c4237"} err="failed to get container status \"379b47c30853b0296bc939b005139ecd9474b3e47ab3078946e3ce2ac20c4237\": rpc error: code = NotFound desc = could not find container \"379b47c30853b0296bc939b005139ecd9474b3e47ab3078946e3ce2ac20c4237\": container with ID starting with 379b47c30853b0296bc939b005139ecd9474b3e47ab3078946e3ce2ac20c4237 not found: ID does not exist" Dec 09 13:35:17 crc kubenswrapper[4970]: I1209 13:35:17.920324 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_e57cb0dc-fe18-46df-8d56-61ac26bed69d/kube-state-metrics/0.log" Dec 09 13:35:18 crc kubenswrapper[4970]: I1209 13:35:18.102117 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_2e49c009-8457-419f-aca0-a0288e55ec6d/mysqld-exporter/0.log" Dec 09 13:35:18 crc kubenswrapper[4970]: I1209 13:35:18.464666 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-dd4c8c86f-7zktb_03bc9e94-3198-4707-b3b4-19ee20b49d4d/neutron-api/0.log" Dec 09 13:35:18 crc kubenswrapper[4970]: I1209 13:35:18.597634 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-dd4c8c86f-7zktb_03bc9e94-3198-4707-b3b4-19ee20b49d4d/neutron-httpd/0.log" Dec 09 13:35:18 crc kubenswrapper[4970]: I1209 13:35:18.920394 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_7313fcc0-6c4b-4008-a232-d1d8a351fa13/nova-api-log/0.log" Dec 09 13:35:18 crc 
kubenswrapper[4970]: I1209 13:35:18.969804 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_424dcec5-0989-4b2e-8435-a2767dba2505/nova-cell0-conductor-conductor/0.log" Dec 09 13:35:19 crc kubenswrapper[4970]: I1209 13:35:19.201132 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_7313fcc0-6c4b-4008-a232-d1d8a351fa13/nova-api-api/0.log" Dec 09 13:35:19 crc kubenswrapper[4970]: I1209 13:35:19.255208 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_98c2dbed-449e-4db4-9f2b-5191b03c8a80/nova-cell1-conductor-conductor/0.log" Dec 09 13:35:19 crc kubenswrapper[4970]: I1209 13:35:19.443406 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_3b94a74d-8219-4cba-b0a1-511a0086c0ad/nova-cell1-novncproxy-novncproxy/0.log" Dec 09 13:35:19 crc kubenswrapper[4970]: I1209 13:35:19.586188 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_84a18921-52f6-4481-b8ea-cb0f41219e9e/nova-metadata-log/0.log" Dec 09 13:35:19 crc kubenswrapper[4970]: I1209 13:35:19.897328 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_29b2ee8d-3020-4b93-80f5-43070a0d4384/nova-scheduler-scheduler/0.log" Dec 09 13:35:19 crc kubenswrapper[4970]: I1209 13:35:19.950091 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d67f963f-36c2-4056-8b35-5a08e547ba33/mysql-bootstrap/0.log" Dec 09 13:35:20 crc kubenswrapper[4970]: I1209 13:35:20.138306 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d67f963f-36c2-4056-8b35-5a08e547ba33/mysql-bootstrap/0.log" Dec 09 13:35:20 crc kubenswrapper[4970]: I1209 13:35:20.241368 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_d67f963f-36c2-4056-8b35-5a08e547ba33/galera/0.log" Dec 09 13:35:20 crc kubenswrapper[4970]: I1209 13:35:20.383202 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_efb35edd-0684-4604-87bb-66e26970a864/mysql-bootstrap/0.log" Dec 09 13:35:20 crc kubenswrapper[4970]: I1209 13:35:20.589772 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_efb35edd-0684-4604-87bb-66e26970a864/galera/0.log" Dec 09 13:35:20 crc kubenswrapper[4970]: I1209 13:35:20.594474 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_efb35edd-0684-4604-87bb-66e26970a864/mysql-bootstrap/0.log" Dec 09 13:35:20 crc kubenswrapper[4970]: I1209 13:35:20.790387 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_26afaabd-9309-47db-a9fd-282425d0c44e/openstackclient/0.log" Dec 09 13:35:21 crc kubenswrapper[4970]: I1209 13:35:21.439512 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_84a18921-52f6-4481-b8ea-cb0f41219e9e/nova-metadata-metadata/0.log" Dec 09 13:35:21 crc kubenswrapper[4970]: I1209 13:35:21.503275 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-bcbw5_337b2685-e8d8-4124-b1f6-d952d8939fb2/openstack-network-exporter/0.log" Dec 09 13:35:21 crc kubenswrapper[4970]: I1209 13:35:21.512001 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-hqfv8_c06ee73b-4168-4ef3-b268-db5e976febbf/ovn-controller/0.log" Dec 09 13:35:21 crc kubenswrapper[4970]: 
I1209 13:35:21.812757 4970 scope.go:117] "RemoveContainer" containerID="1c4bed9ff93fb37d4e1875bb5924dcc9902f12c57dd22616a64ab19553b2dcbd" Dec 09 13:35:21 crc kubenswrapper[4970]: I1209 13:35:21.951735 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-d9vtq_30b05330-faf4-44e1-afee-1c750e234a37/ovsdb-server-init/0.log" Dec 09 13:35:22 crc kubenswrapper[4970]: I1209 13:35:22.154594 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-d9vtq_30b05330-faf4-44e1-afee-1c750e234a37/ovsdb-server/0.log" Dec 09 13:35:22 crc kubenswrapper[4970]: I1209 13:35:22.176623 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-d9vtq_30b05330-faf4-44e1-afee-1c750e234a37/ovsdb-server-init/0.log" Dec 09 13:35:22 crc kubenswrapper[4970]: I1209 13:35:22.204036 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-d9vtq_30b05330-faf4-44e1-afee-1c750e234a37/ovs-vswitchd/0.log" Dec 09 13:35:22 crc kubenswrapper[4970]: I1209 13:35:22.410077 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_2076cff1-9fb3-45f4-99db-d2aa56cafc96/openstack-network-exporter/0.log" Dec 09 13:35:22 crc kubenswrapper[4970]: I1209 13:35:22.492847 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_2076cff1-9fb3-45f4-99db-d2aa56cafc96/ovn-northd/0.log" Dec 09 13:35:22 crc kubenswrapper[4970]: I1209 13:35:22.558458 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_310f94d5-9c85-470d-a381-a34ea67ba43b/openstack-network-exporter/0.log" Dec 09 13:35:22 crc kubenswrapper[4970]: I1209 13:35:22.615343 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_310f94d5-9c85-470d-a381-a34ea67ba43b/ovsdbserver-nb/0.log" Dec 09 13:35:22 crc kubenswrapper[4970]: I1209 13:35:22.763063 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_84ec21d9-6227-439f-984f-1d48a7fdd5b9/openstack-network-exporter/0.log" Dec 09 13:35:22 crc kubenswrapper[4970]: I1209 13:35:22.793283 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerStarted","Data":"cabf3221a15b6dc1a60a9e742837fab6972bbbc7f9f958e52f374513daac7197"} Dec 09 13:35:22 crc kubenswrapper[4970]: I1209 13:35:22.794820 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_84ec21d9-6227-439f-984f-1d48a7fdd5b9/ovsdbserver-sb/0.log" Dec 09 13:35:22 crc kubenswrapper[4970]: E1209 13:35:22.815031 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:35:23 crc kubenswrapper[4970]: I1209 13:35:23.470121 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-78687fd956-j7ckc_fa1854f6-8032-4a50-8808-cbd83782deb5/placement-api/0.log" Dec 09 13:35:23 crc kubenswrapper[4970]: I1209 13:35:23.704114 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-78687fd956-j7ckc_fa1854f6-8032-4a50-8808-cbd83782deb5/placement-log/0.log" Dec 09 13:35:23 crc 
kubenswrapper[4970]: I1209 13:35:23.724510 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3e993a7f-0aee-41c5-adb3-a3becd49066f/init-config-reloader/0.log" Dec 09 13:35:23 crc kubenswrapper[4970]: I1209 13:35:23.861313 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3e993a7f-0aee-41c5-adb3-a3becd49066f/init-config-reloader/0.log" Dec 09 13:35:23 crc kubenswrapper[4970]: I1209 13:35:23.927105 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3e993a7f-0aee-41c5-adb3-a3becd49066f/config-reloader/0.log" Dec 09 13:35:24 crc kubenswrapper[4970]: I1209 13:35:24.011343 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3e993a7f-0aee-41c5-adb3-a3becd49066f/thanos-sidecar/0.log" Dec 09 13:35:24 crc kubenswrapper[4970]: I1209 13:35:24.039209 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3e993a7f-0aee-41c5-adb3-a3becd49066f/prometheus/0.log" Dec 09 13:35:24 crc kubenswrapper[4970]: I1209 13:35:24.179153 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_026b54d0-03a5-4346-b137-1d297204d22b/setup-container/0.log" Dec 09 13:35:24 crc kubenswrapper[4970]: I1209 13:35:24.359813 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_026b54d0-03a5-4346-b137-1d297204d22b/setup-container/0.log" Dec 09 13:35:24 crc kubenswrapper[4970]: I1209 13:35:24.410127 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_026b54d0-03a5-4346-b137-1d297204d22b/rabbitmq/0.log" Dec 09 13:35:24 crc kubenswrapper[4970]: I1209 13:35:24.479779 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_2547ef6a-6a22-4564-8db0-8c7ed5b166fd/setup-container/0.log" Dec 09 13:35:24 crc kubenswrapper[4970]: I1209 13:35:24.733429 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-9vdrf_bf06b570-9bab-4378-9c0c-64d8faabea85/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 13:35:24 crc kubenswrapper[4970]: I1209 13:35:24.751604 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_2547ef6a-6a22-4564-8db0-8c7ed5b166fd/setup-container/0.log" Dec 09 13:35:24 crc kubenswrapper[4970]: I1209 13:35:24.772337 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_2547ef6a-6a22-4564-8db0-8c7ed5b166fd/rabbitmq/0.log" Dec 09 13:35:25 crc kubenswrapper[4970]: I1209 13:35:25.002031 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-4p2n7_dbc6cc54-344d-47db-aae7-ce10a0b4ea3a/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 13:35:25 crc kubenswrapper[4970]: I1209 13:35:25.220053 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-85bf8b6f7-tq5wr_ac21407c-a381-4cbb-b26e-9556d92ae621/proxy-httpd/0.log" Dec 09 13:35:25 crc kubenswrapper[4970]: I1209 13:35:25.405676 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-85bf8b6f7-tq5wr_ac21407c-a381-4cbb-b26e-9556d92ae621/proxy-server/0.log" Dec 09 13:35:25 crc kubenswrapper[4970]: I1209 13:35:25.488109 4970 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-ring-rebalance-kvgg9_5849599a-f7e9-4ea2-982c-5388be3d7e8d/swift-ring-rebalance/0.log" Dec 09 13:35:25 crc kubenswrapper[4970]: I1209 13:35:25.632617 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_bee29a58-7867-4543-bb4e-c19528625b1a/account-auditor/0.log" Dec 09 13:35:25 crc kubenswrapper[4970]: I1209 13:35:25.730385 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_bee29a58-7867-4543-bb4e-c19528625b1a/account-reaper/0.log" Dec 09 13:35:25 crc kubenswrapper[4970]: I1209 13:35:25.772760 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_bee29a58-7867-4543-bb4e-c19528625b1a/account-replicator/0.log" Dec 09 13:35:25 crc kubenswrapper[4970]: I1209 13:35:25.774525 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_bee29a58-7867-4543-bb4e-c19528625b1a/account-server/0.log" Dec 09 13:35:25 crc kubenswrapper[4970]: I1209 13:35:25.891868 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_bee29a58-7867-4543-bb4e-c19528625b1a/container-auditor/0.log" Dec 09 13:35:25 crc kubenswrapper[4970]: I1209 13:35:25.981934 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_bee29a58-7867-4543-bb4e-c19528625b1a/container-replicator/0.log" Dec 09 13:35:26 crc kubenswrapper[4970]: I1209 13:35:26.032647 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_bee29a58-7867-4543-bb4e-c19528625b1a/container-server/0.log" Dec 09 13:35:26 crc kubenswrapper[4970]: I1209 13:35:26.043696 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_bee29a58-7867-4543-bb4e-c19528625b1a/container-updater/0.log" Dec 09 13:35:26 crc kubenswrapper[4970]: I1209 13:35:26.132126 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_bee29a58-7867-4543-bb4e-c19528625b1a/object-auditor/0.log" Dec 09 13:35:26 crc kubenswrapper[4970]: I1209 13:35:26.235085 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_bee29a58-7867-4543-bb4e-c19528625b1a/object-replicator/0.log" Dec 09 13:35:26 crc kubenswrapper[4970]: I1209 13:35:26.237368 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_bee29a58-7867-4543-bb4e-c19528625b1a/object-expirer/0.log" Dec 09 13:35:26 crc kubenswrapper[4970]: I1209 13:35:26.255875 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_bee29a58-7867-4543-bb4e-c19528625b1a/object-server/0.log" Dec 09 13:35:26 crc kubenswrapper[4970]: I1209 13:35:26.356188 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_bee29a58-7867-4543-bb4e-c19528625b1a/object-updater/0.log" Dec 09 13:35:26 crc kubenswrapper[4970]: I1209 13:35:26.450490 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_bee29a58-7867-4543-bb4e-c19528625b1a/swift-recon-cron/0.log" Dec 09 13:35:26 crc kubenswrapper[4970]: I1209 13:35:26.474606 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_bee29a58-7867-4543-bb4e-c19528625b1a/rsync/0.log" Dec 09 13:35:28 crc kubenswrapper[4970]: E1209 13:35:28.814552 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:35:29 crc kubenswrapper[4970]: I1209 13:35:29.823814 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_bbfa0031-fd11-45da-a991-36ef550cf64c/memcached/0.log" Dec 09 13:35:36 crc kubenswrapper[4970]: E1209 13:35:36.816600 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:35:40 crc kubenswrapper[4970]: E1209 13:35:40.750760 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:35:43 crc kubenswrapper[4970]: I1209 13:35:43.550931 4970 scope.go:117] "RemoveContainer" containerID="3fe4f59446d944ec5f3d2907a85b7c95d6cf0fed4c0ae1c3c9fec3f1df03c342" Dec 09 13:35:43 crc kubenswrapper[4970]: I1209 13:35:43.589230 4970 scope.go:117] "RemoveContainer" containerID="92599d0ed314702437c9148095a562142522742df76b1f6b774e2377368faba6" Dec 09 13:35:49 crc kubenswrapper[4970]: E1209 13:35:49.819838 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:35:54 crc kubenswrapper[4970]: E1209 13:35:54.815735 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:35:56 crc kubenswrapper[4970]: I1209 13:35:56.628083 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38cs8h2_3ea53e6e-c8b4-430b-a285-fe671716853a/util/0.log" Dec 09 13:35:56 crc kubenswrapper[4970]: I1209 13:35:56.857218 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38cs8h2_3ea53e6e-c8b4-430b-a285-fe671716853a/util/0.log" Dec 09 13:35:56 crc kubenswrapper[4970]: I1209 13:35:56.861605 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38cs8h2_3ea53e6e-c8b4-430b-a285-fe671716853a/pull/0.log" Dec 09 13:35:56 crc kubenswrapper[4970]: I1209 13:35:56.866218 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38cs8h2_3ea53e6e-c8b4-430b-a285-fe671716853a/pull/0.log" Dec 09 13:35:56 crc kubenswrapper[4970]: I1209 13:35:56.995421 4970 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38cs8h2_3ea53e6e-c8b4-430b-a285-fe671716853a/util/0.log" Dec 09 13:35:57 crc kubenswrapper[4970]: I1209 13:35:57.060200 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38cs8h2_3ea53e6e-c8b4-430b-a285-fe671716853a/pull/0.log" Dec 09 13:35:57 crc kubenswrapper[4970]: I1209 13:35:57.074699 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9c3d187de28e81e2aa863f5c385ad60315fa95c68b744528ef728d9f38cs8h2_3ea53e6e-c8b4-430b-a285-fe671716853a/extract/0.log" Dec 09 13:35:57 crc kubenswrapper[4970]: I1209 13:35:57.183863 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-hgtz4_9a6da1be-c547-49c2-839c-aa549a5bb32b/kube-rbac-proxy/0.log" Dec 09 13:35:57 crc kubenswrapper[4970]: I1209 13:35:57.270376 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-hgtz4_9a6da1be-c547-49c2-839c-aa549a5bb32b/manager/0.log" Dec 09 13:35:57 crc kubenswrapper[4970]: I1209 13:35:57.320994 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-jxd72_4190e9a5-5bac-4645-a59d-5b4d6308f751/kube-rbac-proxy/0.log" Dec 09 13:35:57 crc kubenswrapper[4970]: I1209 13:35:57.455589 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-jxd72_4190e9a5-5bac-4645-a59d-5b4d6308f751/manager/0.log" Dec 09 13:35:57 crc kubenswrapper[4970]: I1209 13:35:57.478304 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-t9b58_774ea159-6ac5-4997-8630-db954e22ac28/kube-rbac-proxy/0.log" Dec 09 13:35:57 crc kubenswrapper[4970]: I1209 13:35:57.538214 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-t9b58_774ea159-6ac5-4997-8630-db954e22ac28/manager/0.log" Dec 09 13:35:57 crc kubenswrapper[4970]: I1209 13:35:57.669608 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-rcr7g_c60ee2ea-0489-422a-b829-e20040144965/kube-rbac-proxy/0.log" Dec 09 13:35:57 crc kubenswrapper[4970]: I1209 13:35:57.749920 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-rcr7g_c60ee2ea-0489-422a-b829-e20040144965/manager/0.log" Dec 09 13:35:57 crc kubenswrapper[4970]: I1209 13:35:57.922719 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-9q7rj_0b0a360a-e011-471c-abfa-6b72d7bf3074/kube-rbac-proxy/0.log" Dec 09 13:35:58 crc kubenswrapper[4970]: I1209 13:35:58.053678 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-vf6rm_ccc49957-0ec2-4fa0-b2c0-fd86af0aa27e/kube-rbac-proxy/0.log" Dec 09 13:35:58 crc kubenswrapper[4970]: I1209 13:35:58.072456 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-9q7rj_0b0a360a-e011-471c-abfa-6b72d7bf3074/manager/0.log" Dec 09 13:35:58 crc kubenswrapper[4970]: I1209 13:35:58.131921 
4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-vf6rm_ccc49957-0ec2-4fa0-b2c0-fd86af0aa27e/manager/0.log" Dec 09 13:35:58 crc kubenswrapper[4970]: I1209 13:35:58.252689 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-78d48bff9d-smd4b_58aea9ad-c500-4d8b-ae24-72d3b76e2c93/kube-rbac-proxy/0.log" Dec 09 13:35:58 crc kubenswrapper[4970]: I1209 13:35:58.436753 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-967d97867-dh6nb_c0d580f7-424c-482e-a3c5-47ef1b9a6b79/kube-rbac-proxy/0.log" Dec 09 13:35:58 crc kubenswrapper[4970]: I1209 13:35:58.483719 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-967d97867-dh6nb_c0d580f7-424c-482e-a3c5-47ef1b9a6b79/manager/0.log" Dec 09 13:35:58 crc kubenswrapper[4970]: I1209 13:35:58.524440 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-78d48bff9d-smd4b_58aea9ad-c500-4d8b-ae24-72d3b76e2c93/manager/0.log" Dec 09 13:35:58 crc kubenswrapper[4970]: I1209 13:35:58.617141 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-rcmhf_e0b99715-61ea-4c11-b1df-814886d310a2/kube-rbac-proxy/0.log" Dec 09 13:35:58 crc kubenswrapper[4970]: I1209 13:35:58.718524 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-rcmhf_e0b99715-61ea-4c11-b1df-814886d310a2/manager/0.log" Dec 09 13:35:58 crc kubenswrapper[4970]: I1209 13:35:58.747029 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-5sz7k_3c4587c1-58af-46c3-b886-59f5e44220fb/kube-rbac-proxy/0.log" Dec 09 13:35:58 crc kubenswrapper[4970]: I1209 13:35:58.823500 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-5sz7k_3c4587c1-58af-46c3-b886-59f5e44220fb/manager/0.log" Dec 09 13:35:58 crc kubenswrapper[4970]: I1209 13:35:58.913539 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-rt8fj_ddb7e093-4817-4ec2-9f81-9779ea2dddc9/kube-rbac-proxy/0.log" Dec 09 13:35:59 crc kubenswrapper[4970]: I1209 13:35:59.008538 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-rt8fj_ddb7e093-4817-4ec2-9f81-9779ea2dddc9/manager/0.log" Dec 09 13:35:59 crc kubenswrapper[4970]: I1209 13:35:59.090885 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-vb4rz_13a1643a-f17b-435a-8ce6-60f253571bf2/kube-rbac-proxy/0.log" Dec 09 13:35:59 crc kubenswrapper[4970]: I1209 13:35:59.137932 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-vb4rz_13a1643a-f17b-435a-8ce6-60f253571bf2/manager/0.log" Dec 09 13:35:59 crc kubenswrapper[4970]: I1209 13:35:59.187554 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-4w8vj_9b238f80-d845-4791-a567-08f03974f612/kube-rbac-proxy/0.log" Dec 09 13:35:59 crc kubenswrapper[4970]: 
I1209 13:35:59.358447 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-4w8vj_9b238f80-d845-4791-a567-08f03974f612/manager/0.log" Dec 09 13:35:59 crc kubenswrapper[4970]: I1209 13:35:59.384740 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-pvc2v_d0573bd9-628b-42e9-a46c-5c8b47bd977f/kube-rbac-proxy/0.log" Dec 09 13:35:59 crc kubenswrapper[4970]: I1209 13:35:59.437349 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-pvc2v_d0573bd9-628b-42e9-a46c-5c8b47bd977f/manager/0.log" Dec 09 13:35:59 crc kubenswrapper[4970]: I1209 13:35:59.555567 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-84b575879f8r88z_64b64284-dc99-424f-959f-2ed95a4ff4be/kube-rbac-proxy/0.log" Dec 09 13:35:59 crc kubenswrapper[4970]: I1209 13:35:59.590154 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-84b575879f8r88z_64b64284-dc99-424f-959f-2ed95a4ff4be/manager/0.log" Dec 09 13:36:00 crc kubenswrapper[4970]: I1209 13:36:00.006820 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-6979fbd8bc-8wfhf_b972f2ed-23f7-46f1-85c1-ddc586fee0a6/operator/0.log" Dec 09 13:36:00 crc kubenswrapper[4970]: I1209 13:36:00.030562 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-pjlbc_7013b411-c4c4-4706-b5f3-da18ffccb4e5/registry-server/0.log" Dec 09 13:36:00 crc kubenswrapper[4970]: I1209 13:36:00.846508 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-j6znq_661d70f4-459d-44fe-874b-e24f33654af6/kube-rbac-proxy/0.log" Dec 09 13:36:00 crc kubenswrapper[4970]: I1209 13:36:00.918564 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-zhvtf_c0d05cf0-6d0f-423b-84c7-6ec1ef1cacc7/kube-rbac-proxy/0.log" Dec 09 13:36:00 crc kubenswrapper[4970]: I1209 13:36:00.949870 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-j6znq_661d70f4-459d-44fe-874b-e24f33654af6/manager/0.log" Dec 09 13:36:00 crc kubenswrapper[4970]: I1209 13:36:00.954442 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-586c894b5-qkxjg_2b6b980c-c0f6-4a0f-a484-63e90086ba35/manager/0.log" Dec 09 13:36:01 crc kubenswrapper[4970]: I1209 13:36:01.039129 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-zhvtf_c0d05cf0-6d0f-423b-84c7-6ec1ef1cacc7/manager/0.log" Dec 09 13:36:01 crc kubenswrapper[4970]: I1209 13:36:01.125433 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-kg572_5a6d30e8-9d6b-47ef-9c17-351179967d04/kube-rbac-proxy/0.log" Dec 09 13:36:01 crc kubenswrapper[4970]: I1209 13:36:01.147294 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-2pdw2_40c6e40b-d51a-482f-b1b7-585a064c9d00/operator/0.log" Dec 09 13:36:01 crc 
kubenswrapper[4970]: I1209 13:36:01.241355 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-kg572_5a6d30e8-9d6b-47ef-9c17-351179967d04/manager/0.log" Dec 09 13:36:01 crc kubenswrapper[4970]: I1209 13:36:01.304083 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-797ff5dd46-9dm7h_8c8f9bbb-5933-4c81-a7ff-db3f5f74835e/kube-rbac-proxy/0.log" Dec 09 13:36:01 crc kubenswrapper[4970]: I1209 13:36:01.510367 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-mcq2h_a8455557-ebef-4da8-af18-aff995d6c3c3/kube-rbac-proxy/0.log" Dec 09 13:36:01 crc kubenswrapper[4970]: I1209 13:36:01.559593 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-797ff5dd46-9dm7h_8c8f9bbb-5933-4c81-a7ff-db3f5f74835e/manager/0.log" Dec 09 13:36:01 crc kubenswrapper[4970]: I1209 13:36:01.571724 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-mcq2h_a8455557-ebef-4da8-af18-aff995d6c3c3/manager/0.log" Dec 09 13:36:01 crc kubenswrapper[4970]: I1209 13:36:01.602689 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-667bd8d554-mcv95_567ca4c8-3a7b-40c8-98a9-fe162fd7a7b4/kube-rbac-proxy/0.log" Dec 09 13:36:01 crc kubenswrapper[4970]: I1209 13:36:01.679373 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-667bd8d554-mcv95_567ca4c8-3a7b-40c8-98a9-fe162fd7a7b4/manager/0.log" Dec 09 13:36:04 crc kubenswrapper[4970]: E1209 13:36:04.814184 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:36:08 crc kubenswrapper[4970]: E1209 13:36:08.815320 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:36:15 crc kubenswrapper[4970]: E1209 13:36:15.816638 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:36:20 crc kubenswrapper[4970]: E1209 13:36:20.814567 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:36:22 crc kubenswrapper[4970]: I1209 13:36:22.856521 4970 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-8lbhv_8709261f-420d-4fee-908c-a7e1074959cb/control-plane-machine-set-operator/0.log" Dec 09 13:36:23 crc kubenswrapper[4970]: I1209 13:36:23.289409 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hhcfv_a51c80aa-c8f8-4ddf-89e7-fae8e5a7ecc8/kube-rbac-proxy/0.log" Dec 09 13:36:23 crc kubenswrapper[4970]: I1209 13:36:23.326370 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hhcfv_a51c80aa-c8f8-4ddf-89e7-fae8e5a7ecc8/machine-api-operator/0.log" Dec 09 13:36:27 crc kubenswrapper[4970]: E1209 13:36:27.828560 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:36:33 crc kubenswrapper[4970]: E1209 13:36:33.814290 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:36:39 crc kubenswrapper[4970]: I1209 13:36:39.422387 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-8rjw8_afafc1e2-ad95-4e4b-878d-62163ffa77cf/cert-manager-controller/0.log" Dec 09 13:36:39 crc kubenswrapper[4970]: I1209 13:36:39.521509 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-rtzqq_fc26ad95-72b7-4cfa-9e37-e9aa0a80bc55/cert-manager-cainjector/0.log" Dec 09 13:36:39 crc kubenswrapper[4970]: I1209 13:36:39.610114 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-2jdph_ceaa531e-a688-48ed-a405-74b9cb483ae2/cert-manager-webhook/0.log" Dec 09 13:36:40 crc kubenswrapper[4970]: E1209 13:36:40.816809 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:36:43 crc kubenswrapper[4970]: I1209 13:36:43.706049 4970 scope.go:117] "RemoveContainer" containerID="cd2b1a1e0409a1efe10425f247e0c843045c609ee69455544b0360a3fe31af2c" Dec 09 13:36:47 crc kubenswrapper[4970]: E1209 13:36:47.822589 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:36:50 crc kubenswrapper[4970]: I1209 13:36:50.062941 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lmhp4"] Dec 09 13:36:50 crc kubenswrapper[4970]: E1209 13:36:50.064168 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="625c3635-ba7c-4b92-bd1e-9d2f293dc14c" 
containerName="extract-content" Dec 09 13:36:50 crc kubenswrapper[4970]: I1209 13:36:50.064187 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="625c3635-ba7c-4b92-bd1e-9d2f293dc14c" containerName="extract-content" Dec 09 13:36:50 crc kubenswrapper[4970]: E1209 13:36:50.064206 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="625c3635-ba7c-4b92-bd1e-9d2f293dc14c" containerName="registry-server" Dec 09 13:36:50 crc kubenswrapper[4970]: I1209 13:36:50.064216 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="625c3635-ba7c-4b92-bd1e-9d2f293dc14c" containerName="registry-server" Dec 09 13:36:50 crc kubenswrapper[4970]: E1209 13:36:50.066377 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="625c3635-ba7c-4b92-bd1e-9d2f293dc14c" containerName="extract-utilities" Dec 09 13:36:50 crc kubenswrapper[4970]: I1209 13:36:50.066403 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="625c3635-ba7c-4b92-bd1e-9d2f293dc14c" containerName="extract-utilities" Dec 09 13:36:50 crc kubenswrapper[4970]: I1209 13:36:50.066774 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="625c3635-ba7c-4b92-bd1e-9d2f293dc14c" containerName="registry-server" Dec 09 13:36:50 crc kubenswrapper[4970]: I1209 13:36:50.069915 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lmhp4" Dec 09 13:36:50 crc kubenswrapper[4970]: I1209 13:36:50.091471 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lmhp4"] Dec 09 13:36:50 crc kubenswrapper[4970]: I1209 13:36:50.100719 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcbc7a48-488c-48e3-aab3-94f8e44ea7ec-catalog-content\") pod \"certified-operators-lmhp4\" (UID: \"dcbc7a48-488c-48e3-aab3-94f8e44ea7ec\") " pod="openshift-marketplace/certified-operators-lmhp4" Dec 09 13:36:50 crc kubenswrapper[4970]: I1209 13:36:50.101146 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89phs\" (UniqueName: \"kubernetes.io/projected/dcbc7a48-488c-48e3-aab3-94f8e44ea7ec-kube-api-access-89phs\") pod \"certified-operators-lmhp4\" (UID: \"dcbc7a48-488c-48e3-aab3-94f8e44ea7ec\") " pod="openshift-marketplace/certified-operators-lmhp4" Dec 09 13:36:50 crc kubenswrapper[4970]: I1209 13:36:50.101323 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcbc7a48-488c-48e3-aab3-94f8e44ea7ec-utilities\") pod \"certified-operators-lmhp4\" (UID: \"dcbc7a48-488c-48e3-aab3-94f8e44ea7ec\") " pod="openshift-marketplace/certified-operators-lmhp4" Dec 09 13:36:50 crc kubenswrapper[4970]: I1209 13:36:50.204304 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89phs\" (UniqueName: \"kubernetes.io/projected/dcbc7a48-488c-48e3-aab3-94f8e44ea7ec-kube-api-access-89phs\") pod \"certified-operators-lmhp4\" (UID: \"dcbc7a48-488c-48e3-aab3-94f8e44ea7ec\") " pod="openshift-marketplace/certified-operators-lmhp4" Dec 09 13:36:50 crc kubenswrapper[4970]: I1209 13:36:50.204779 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcbc7a48-488c-48e3-aab3-94f8e44ea7ec-utilities\") pod \"certified-operators-lmhp4\" (UID: 
\"dcbc7a48-488c-48e3-aab3-94f8e44ea7ec\") " pod="openshift-marketplace/certified-operators-lmhp4" Dec 09 13:36:50 crc kubenswrapper[4970]: I1209 13:36:50.204970 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcbc7a48-488c-48e3-aab3-94f8e44ea7ec-catalog-content\") pod \"certified-operators-lmhp4\" (UID: \"dcbc7a48-488c-48e3-aab3-94f8e44ea7ec\") " pod="openshift-marketplace/certified-operators-lmhp4" Dec 09 13:36:50 crc kubenswrapper[4970]: I1209 13:36:50.205346 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dcbc7a48-488c-48e3-aab3-94f8e44ea7ec-utilities\") pod \"certified-operators-lmhp4\" (UID: \"dcbc7a48-488c-48e3-aab3-94f8e44ea7ec\") " pod="openshift-marketplace/certified-operators-lmhp4" Dec 09 13:36:50 crc kubenswrapper[4970]: I1209 13:36:50.205491 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dcbc7a48-488c-48e3-aab3-94f8e44ea7ec-catalog-content\") pod \"certified-operators-lmhp4\" (UID: \"dcbc7a48-488c-48e3-aab3-94f8e44ea7ec\") " pod="openshift-marketplace/certified-operators-lmhp4" Dec 09 13:36:50 crc kubenswrapper[4970]: I1209 13:36:50.234080 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89phs\" (UniqueName: \"kubernetes.io/projected/dcbc7a48-488c-48e3-aab3-94f8e44ea7ec-kube-api-access-89phs\") pod \"certified-operators-lmhp4\" (UID: \"dcbc7a48-488c-48e3-aab3-94f8e44ea7ec\") " pod="openshift-marketplace/certified-operators-lmhp4" Dec 09 13:36:50 crc kubenswrapper[4970]: I1209 13:36:50.409311 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lmhp4" Dec 09 13:36:50 crc kubenswrapper[4970]: W1209 13:36:50.980777 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddcbc7a48_488c_48e3_aab3_94f8e44ea7ec.slice/crio-a6efa5d2b3893ae0c178e1d780408ebfe5fb72503726f8a463b7e4b5ecce86e5 WatchSource:0}: Error finding container a6efa5d2b3893ae0c178e1d780408ebfe5fb72503726f8a463b7e4b5ecce86e5: Status 404 returned error can't find the container with id a6efa5d2b3893ae0c178e1d780408ebfe5fb72503726f8a463b7e4b5ecce86e5 Dec 09 13:36:50 crc kubenswrapper[4970]: I1209 13:36:50.990120 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lmhp4"] Dec 09 13:36:51 crc kubenswrapper[4970]: I1209 13:36:51.623049 4970 generic.go:334] "Generic (PLEG): container finished" podID="dcbc7a48-488c-48e3-aab3-94f8e44ea7ec" containerID="0a542cbd743e08455d27e40cd23e5209a852afeb5d0cbf619f9ece94bae4baca" exitCode=0 Dec 09 13:36:51 crc kubenswrapper[4970]: I1209 13:36:51.623090 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmhp4" event={"ID":"dcbc7a48-488c-48e3-aab3-94f8e44ea7ec","Type":"ContainerDied","Data":"0a542cbd743e08455d27e40cd23e5209a852afeb5d0cbf619f9ece94bae4baca"} Dec 09 13:36:51 crc kubenswrapper[4970]: I1209 13:36:51.625178 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmhp4" event={"ID":"dcbc7a48-488c-48e3-aab3-94f8e44ea7ec","Type":"ContainerStarted","Data":"a6efa5d2b3893ae0c178e1d780408ebfe5fb72503726f8a463b7e4b5ecce86e5"} Dec 09 13:36:55 crc kubenswrapper[4970]: E1209 13:36:55.819750 4970 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:36:57 crc kubenswrapper[4970]: I1209 13:36:57.624781 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-zflr5_b85a505e-070c-425e-a78b-af7d0aed981f/nmstate-console-plugin/0.log" Dec 09 13:36:57 crc kubenswrapper[4970]: I1209 13:36:57.807982 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-zddxq_364b73a8-4e46-496e-a629-2a9ec738c9ba/nmstate-handler/0.log" Dec 09 13:36:57 crc kubenswrapper[4970]: I1209 13:36:57.871332 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-jwgml_4cbc766b-5faa-43b1-ab55-5d25e23ee20e/kube-rbac-proxy/0.log" Dec 09 13:36:57 crc kubenswrapper[4970]: I1209 13:36:57.883831 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-jwgml_4cbc766b-5faa-43b1-ab55-5d25e23ee20e/nmstate-metrics/0.log" Dec 09 13:36:58 crc kubenswrapper[4970]: I1209 13:36:58.100590 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-xjkb9_9ed7884d-3f8f-4cc8-b2fa-26b40bbdfdc2/nmstate-operator/0.log" Dec 09 13:36:58 crc kubenswrapper[4970]: I1209 13:36:58.123740 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-pscqd_74bf2616-c17d-4a12-89d4-416af30ff01a/nmstate-webhook/0.log" Dec 09 13:36:58 crc kubenswrapper[4970]: I1209 13:36:58.705159 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmhp4" event={"ID":"dcbc7a48-488c-48e3-aab3-94f8e44ea7ec","Type":"ContainerStarted","Data":"21c40c32b8bf1cd2bd520c2dee4b660c15dfd5186f80e38766a88797ca870dd8"} Dec 09 13:36:58 crc kubenswrapper[4970]: E1209 13:36:58.816775 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:36:59 crc kubenswrapper[4970]: I1209 13:36:59.717647 4970 generic.go:334] "Generic (PLEG): container finished" podID="dcbc7a48-488c-48e3-aab3-94f8e44ea7ec" containerID="21c40c32b8bf1cd2bd520c2dee4b660c15dfd5186f80e38766a88797ca870dd8" exitCode=0 Dec 09 13:36:59 crc kubenswrapper[4970]: I1209 13:36:59.717685 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmhp4" event={"ID":"dcbc7a48-488c-48e3-aab3-94f8e44ea7ec","Type":"ContainerDied","Data":"21c40c32b8bf1cd2bd520c2dee4b660c15dfd5186f80e38766a88797ca870dd8"} Dec 09 13:37:01 crc kubenswrapper[4970]: I1209 13:37:01.738195 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lmhp4" event={"ID":"dcbc7a48-488c-48e3-aab3-94f8e44ea7ec","Type":"ContainerStarted","Data":"c5835c59f3284105345873e0aea8e4a538e48d06a68a52f88b75d573eb50f9ad"} Dec 09 13:37:01 crc kubenswrapper[4970]: I1209 13:37:01.762949 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-lmhp4" podStartSLOduration=2.7010901499999997 podStartE2EDuration="11.762888107s" podCreationTimestamp="2025-12-09 13:36:50 +0000 UTC" firstStartedPulling="2025-12-09 13:36:51.625103427 +0000 UTC m=+5424.185584488" lastFinishedPulling="2025-12-09 13:37:00.686901394 +0000 UTC m=+5433.247382445" observedRunningTime="2025-12-09 13:37:01.751329708 +0000 UTC m=+5434.311810759" watchObservedRunningTime="2025-12-09 13:37:01.762888107 +0000 UTC m=+5434.323369158" Dec 09 13:37:06 crc kubenswrapper[4970]: E1209 13:37:06.816114 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:37:10 crc kubenswrapper[4970]: I1209 13:37:10.409700 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lmhp4" Dec 09 13:37:10 crc kubenswrapper[4970]: I1209 13:37:10.410432 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lmhp4" Dec 09 13:37:11 crc kubenswrapper[4970]: I1209 13:37:11.016201 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lmhp4" Dec 09 13:37:11 crc kubenswrapper[4970]: I1209 13:37:11.071780 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lmhp4" Dec 09 13:37:11 crc kubenswrapper[4970]: I1209 13:37:11.159490 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lmhp4"] Dec 09 13:37:11 crc kubenswrapper[4970]: I1209 13:37:11.264635 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wsmtp"] Dec 09 13:37:11 crc kubenswrapper[4970]: I1209 13:37:11.264885 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wsmtp" podUID="eb741795-9a51-4f64-9519-b857c46d3c1d" containerName="registry-server" containerID="cri-o://3229f7bf2767eae958a492a3edd8bf8824a8881c5c232716195699725b04c5c4" gracePeriod=2 Dec 09 13:37:11 crc kubenswrapper[4970]: I1209 13:37:11.805047 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wsmtp" Dec 09 13:37:11 crc kubenswrapper[4970]: I1209 13:37:11.844298 4970 generic.go:334] "Generic (PLEG): container finished" podID="eb741795-9a51-4f64-9519-b857c46d3c1d" containerID="3229f7bf2767eae958a492a3edd8bf8824a8881c5c232716195699725b04c5c4" exitCode=0 Dec 09 13:37:11 crc kubenswrapper[4970]: I1209 13:37:11.845192 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wsmtp" Dec 09 13:37:11 crc kubenswrapper[4970]: I1209 13:37:11.845727 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wsmtp" event={"ID":"eb741795-9a51-4f64-9519-b857c46d3c1d","Type":"ContainerDied","Data":"3229f7bf2767eae958a492a3edd8bf8824a8881c5c232716195699725b04c5c4"} Dec 09 13:37:11 crc kubenswrapper[4970]: I1209 13:37:11.845776 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wsmtp" event={"ID":"eb741795-9a51-4f64-9519-b857c46d3c1d","Type":"ContainerDied","Data":"f1bc8f42714f951d4c527f90a5d217d6791358344651ec82fe8393c6204a5187"} Dec 09 13:37:11 crc kubenswrapper[4970]: I1209 13:37:11.845794 4970 scope.go:117] "RemoveContainer" containerID="3229f7bf2767eae958a492a3edd8bf8824a8881c5c232716195699725b04c5c4" Dec 09 13:37:11 crc kubenswrapper[4970]: I1209 13:37:11.890128 4970 scope.go:117] "RemoveContainer" containerID="621ad669847615f5a7402363a65f85a8aa733674b66921244d6869db9e9bb01b" Dec 09 13:37:11 crc kubenswrapper[4970]: I1209 13:37:11.919792 4970 scope.go:117] "RemoveContainer" containerID="b39fdabdca4326243b59d6b38e25d35c619e3cfbc6a19125d7c196ff9d4fbd68" Dec 09 13:37:11 crc kubenswrapper[4970]: I1209 13:37:11.935535 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzp5w\" (UniqueName: \"kubernetes.io/projected/eb741795-9a51-4f64-9519-b857c46d3c1d-kube-api-access-vzp5w\") pod \"eb741795-9a51-4f64-9519-b857c46d3c1d\" (UID: \"eb741795-9a51-4f64-9519-b857c46d3c1d\") " Dec 09 13:37:11 crc kubenswrapper[4970]: I1209 13:37:11.935612 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb741795-9a51-4f64-9519-b857c46d3c1d-catalog-content\") pod \"eb741795-9a51-4f64-9519-b857c46d3c1d\" (UID: \"eb741795-9a51-4f64-9519-b857c46d3c1d\") " Dec 09 13:37:11 crc kubenswrapper[4970]: I1209 13:37:11.935763 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb741795-9a51-4f64-9519-b857c46d3c1d-utilities\") pod \"eb741795-9a51-4f64-9519-b857c46d3c1d\" (UID: \"eb741795-9a51-4f64-9519-b857c46d3c1d\") " Dec 09 13:37:11 crc kubenswrapper[4970]: I1209 13:37:11.938495 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb741795-9a51-4f64-9519-b857c46d3c1d-utilities" (OuterVolumeSpecName: "utilities") pod "eb741795-9a51-4f64-9519-b857c46d3c1d" (UID: "eb741795-9a51-4f64-9519-b857c46d3c1d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:37:11 crc kubenswrapper[4970]: I1209 13:37:11.945362 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb741795-9a51-4f64-9519-b857c46d3c1d-kube-api-access-vzp5w" (OuterVolumeSpecName: "kube-api-access-vzp5w") pod "eb741795-9a51-4f64-9519-b857c46d3c1d" (UID: "eb741795-9a51-4f64-9519-b857c46d3c1d"). InnerVolumeSpecName "kube-api-access-vzp5w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:37:12 crc kubenswrapper[4970]: I1209 13:37:12.019476 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb741795-9a51-4f64-9519-b857c46d3c1d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb741795-9a51-4f64-9519-b857c46d3c1d" (UID: "eb741795-9a51-4f64-9519-b857c46d3c1d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:37:12 crc kubenswrapper[4970]: I1209 13:37:12.026605 4970 scope.go:117] "RemoveContainer" containerID="3229f7bf2767eae958a492a3edd8bf8824a8881c5c232716195699725b04c5c4" Dec 09 13:37:12 crc kubenswrapper[4970]: E1209 13:37:12.027563 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3229f7bf2767eae958a492a3edd8bf8824a8881c5c232716195699725b04c5c4\": container with ID starting with 3229f7bf2767eae958a492a3edd8bf8824a8881c5c232716195699725b04c5c4 not found: ID does not exist" containerID="3229f7bf2767eae958a492a3edd8bf8824a8881c5c232716195699725b04c5c4" Dec 09 13:37:12 crc kubenswrapper[4970]: I1209 13:37:12.027600 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3229f7bf2767eae958a492a3edd8bf8824a8881c5c232716195699725b04c5c4"} err="failed to get container status \"3229f7bf2767eae958a492a3edd8bf8824a8881c5c232716195699725b04c5c4\": rpc error: code = NotFound desc = could not find container \"3229f7bf2767eae958a492a3edd8bf8824a8881c5c232716195699725b04c5c4\": container with ID starting with 3229f7bf2767eae958a492a3edd8bf8824a8881c5c232716195699725b04c5c4 not found: ID does not exist" Dec 09 13:37:12 crc kubenswrapper[4970]: I1209 13:37:12.027620 4970 scope.go:117] "RemoveContainer" containerID="621ad669847615f5a7402363a65f85a8aa733674b66921244d6869db9e9bb01b" Dec 09 13:37:12 crc kubenswrapper[4970]: E1209 13:37:12.027973 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"621ad669847615f5a7402363a65f85a8aa733674b66921244d6869db9e9bb01b\": container with ID starting with 621ad669847615f5a7402363a65f85a8aa733674b66921244d6869db9e9bb01b not found: ID does not exist" containerID="621ad669847615f5a7402363a65f85a8aa733674b66921244d6869db9e9bb01b" Dec 09 13:37:12 crc kubenswrapper[4970]: I1209 13:37:12.028007 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"621ad669847615f5a7402363a65f85a8aa733674b66921244d6869db9e9bb01b"} err="failed to get container status \"621ad669847615f5a7402363a65f85a8aa733674b66921244d6869db9e9bb01b\": rpc error: code = NotFound desc = could not find container \"621ad669847615f5a7402363a65f85a8aa733674b66921244d6869db9e9bb01b\": container with ID starting with 621ad669847615f5a7402363a65f85a8aa733674b66921244d6869db9e9bb01b not found: ID does not exist" Dec 09 13:37:12 crc kubenswrapper[4970]: I1209 13:37:12.028021 4970 scope.go:117] "RemoveContainer" containerID="b39fdabdca4326243b59d6b38e25d35c619e3cfbc6a19125d7c196ff9d4fbd68" Dec 09 13:37:12 crc kubenswrapper[4970]: E1209 13:37:12.028296 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b39fdabdca4326243b59d6b38e25d35c619e3cfbc6a19125d7c196ff9d4fbd68\": container with ID starting with b39fdabdca4326243b59d6b38e25d35c619e3cfbc6a19125d7c196ff9d4fbd68 not found: ID does not exist" 
containerID="b39fdabdca4326243b59d6b38e25d35c619e3cfbc6a19125d7c196ff9d4fbd68" Dec 09 13:37:12 crc kubenswrapper[4970]: I1209 13:37:12.028413 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b39fdabdca4326243b59d6b38e25d35c619e3cfbc6a19125d7c196ff9d4fbd68"} err="failed to get container status \"b39fdabdca4326243b59d6b38e25d35c619e3cfbc6a19125d7c196ff9d4fbd68\": rpc error: code = NotFound desc = could not find container \"b39fdabdca4326243b59d6b38e25d35c619e3cfbc6a19125d7c196ff9d4fbd68\": container with ID starting with b39fdabdca4326243b59d6b38e25d35c619e3cfbc6a19125d7c196ff9d4fbd68 not found: ID does not exist" Dec 09 13:37:12 crc kubenswrapper[4970]: I1209 13:37:12.045048 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzp5w\" (UniqueName: \"kubernetes.io/projected/eb741795-9a51-4f64-9519-b857c46d3c1d-kube-api-access-vzp5w\") on node \"crc\" DevicePath \"\"" Dec 09 13:37:12 crc kubenswrapper[4970]: I1209 13:37:12.045260 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb741795-9a51-4f64-9519-b857c46d3c1d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 13:37:12 crc kubenswrapper[4970]: I1209 13:37:12.045323 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb741795-9a51-4f64-9519-b857c46d3c1d-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 13:37:12 crc kubenswrapper[4970]: I1209 13:37:12.187075 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wsmtp"] Dec 09 13:37:12 crc kubenswrapper[4970]: I1209 13:37:12.198170 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wsmtp"] Dec 09 13:37:12 crc kubenswrapper[4970]: E1209 13:37:12.814754 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:37:13 crc kubenswrapper[4970]: I1209 13:37:13.849626 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb741795-9a51-4f64-9519-b857c46d3c1d" path="/var/lib/kubelet/pods/eb741795-9a51-4f64-9519-b857c46d3c1d/volumes" Dec 09 13:37:14 crc kubenswrapper[4970]: I1209 13:37:14.336610 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-695fd4cd57-pk8wf_b71fdf7a-ee27-4791-9b67-326b387accae/manager/0.log" Dec 09 13:37:14 crc kubenswrapper[4970]: I1209 13:37:14.341392 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-695fd4cd57-pk8wf_b71fdf7a-ee27-4791-9b67-326b387accae/kube-rbac-proxy/0.log" Dec 09 13:37:17 crc kubenswrapper[4970]: E1209 13:37:17.830603 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:37:23 crc kubenswrapper[4970]: E1209 13:37:23.815955 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:37:31 crc kubenswrapper[4970]: I1209 13:37:31.764709 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-ff9846bd-4tr6l_f4784e64-4938-4478-808e-f17b945fcd60/cluster-logging-operator/0.log" Dec 09 13:37:31 crc kubenswrapper[4970]: E1209 13:37:31.819015 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:37:31 crc kubenswrapper[4970]: I1209 13:37:31.903189 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-42tlr_f7587e06-2e8b-4f39-a00d-edcda46d5cdf/collector/0.log" Dec 09 13:37:32 crc kubenswrapper[4970]: I1209 13:37:32.065355 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_509f9ecc-ffea-4205-b9af-3fca1ca6f58d/loki-compactor/0.log" Dec 09 13:37:32 crc kubenswrapper[4970]: I1209 13:37:32.115788 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-76cc67bf56-kh68l_8f98b088-8ae1-4d5a-9917-36d3c95bf08f/loki-distributor/0.log" Dec 09 13:37:32 crc kubenswrapper[4970]: I1209 13:37:32.412150 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-7c9c75f6cc-rxvv4_570b42ae-35db-456f-933f-728031536759/opa/0.log" Dec 09 13:37:32 crc kubenswrapper[4970]: I1209 13:37:32.444940 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-7c9c75f6cc-rxvv4_570b42ae-35db-456f-933f-728031536759/gateway/0.log" Dec 09 13:37:32 crc kubenswrapper[4970]: I1209 13:37:32.910728 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-7c9c75f6cc-v4gvv_e564b02f-7c51-411f-9c17-6a8e9aa357d0/opa/0.log" Dec 09 13:37:32 crc kubenswrapper[4970]: I1209 13:37:32.922003 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-7c9c75f6cc-v4gvv_e564b02f-7c51-411f-9c17-6a8e9aa357d0/gateway/0.log" Dec 09 13:37:33 crc kubenswrapper[4970]: I1209 13:37:33.005206 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_c95ea220-561f-4069-8a68-cda2d45c834b/loki-index-gateway/0.log" Dec 09 13:37:33 crc kubenswrapper[4970]: I1209 13:37:33.224577 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_773f5e49-3672-4662-a3e7-cfddb3f3ded6/loki-ingester/0.log" Dec 09 13:37:33 crc kubenswrapper[4970]: I1209 13:37:33.233491 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-5895d59bb8-pj2ck_5a4976c0-636d-4915-97ae-f8fe8cfebb95/loki-querier/0.log" Dec 09 13:37:33 crc kubenswrapper[4970]: I1209 13:37:33.399592 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-84558f7c9f-zmggt_6c30caa0-938a-4ffc-b8e2-0c418d04e6f7/loki-query-frontend/0.log" Dec 09 13:37:35 crc kubenswrapper[4970]: I1209 
13:37:35.759325 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="efb35edd-0684-4604-87bb-66e26970a864" containerName="galera" probeResult="failure" output="command timed out" Dec 09 13:37:38 crc kubenswrapper[4970]: E1209 13:37:38.814911 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:37:42 crc kubenswrapper[4970]: E1209 13:37:42.814723 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:37:46 crc kubenswrapper[4970]: I1209 13:37:46.010565 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 13:37:46 crc kubenswrapper[4970]: I1209 13:37:46.011073 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 13:37:49 crc kubenswrapper[4970]: E1209 13:37:49.814182 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:37:50 crc kubenswrapper[4970]: I1209 13:37:50.905738 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-jz7r5_f892cb08-861f-422e-a771-d6f55d6d5756/kube-rbac-proxy/0.log" Dec 09 13:37:51 crc kubenswrapper[4970]: I1209 13:37:51.052651 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-jz7r5_f892cb08-861f-422e-a771-d6f55d6d5756/controller/0.log" Dec 09 13:37:51 crc kubenswrapper[4970]: I1209 13:37:51.218934 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l4f9b_34bec0ed-386a-4792-b315-133688468971/cp-frr-files/0.log" Dec 09 13:37:51 crc kubenswrapper[4970]: I1209 13:37:51.475647 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l4f9b_34bec0ed-386a-4792-b315-133688468971/cp-reloader/0.log" Dec 09 13:37:51 crc kubenswrapper[4970]: I1209 13:37:51.479511 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l4f9b_34bec0ed-386a-4792-b315-133688468971/cp-reloader/0.log" Dec 09 13:37:51 crc kubenswrapper[4970]: I1209 13:37:51.486139 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l4f9b_34bec0ed-386a-4792-b315-133688468971/cp-frr-files/0.log" Dec 09 13:37:51 crc kubenswrapper[4970]: I1209 13:37:51.492941 4970 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l4f9b_34bec0ed-386a-4792-b315-133688468971/cp-metrics/0.log" Dec 09 13:37:51 crc kubenswrapper[4970]: I1209 13:37:51.664966 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l4f9b_34bec0ed-386a-4792-b315-133688468971/cp-metrics/0.log" Dec 09 13:37:51 crc kubenswrapper[4970]: I1209 13:37:51.705269 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l4f9b_34bec0ed-386a-4792-b315-133688468971/cp-frr-files/0.log" Dec 09 13:37:51 crc kubenswrapper[4970]: I1209 13:37:51.710332 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l4f9b_34bec0ed-386a-4792-b315-133688468971/cp-reloader/0.log" Dec 09 13:37:51 crc kubenswrapper[4970]: I1209 13:37:51.716205 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l4f9b_34bec0ed-386a-4792-b315-133688468971/cp-metrics/0.log" Dec 09 13:37:51 crc kubenswrapper[4970]: I1209 13:37:51.948325 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l4f9b_34bec0ed-386a-4792-b315-133688468971/cp-reloader/0.log" Dec 09 13:37:51 crc kubenswrapper[4970]: I1209 13:37:51.975813 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l4f9b_34bec0ed-386a-4792-b315-133688468971/cp-frr-files/0.log" Dec 09 13:37:51 crc kubenswrapper[4970]: I1209 13:37:51.993422 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l4f9b_34bec0ed-386a-4792-b315-133688468971/cp-metrics/0.log" Dec 09 13:37:52 crc kubenswrapper[4970]: I1209 13:37:52.032704 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l4f9b_34bec0ed-386a-4792-b315-133688468971/controller/0.log" Dec 09 13:37:52 crc kubenswrapper[4970]: I1209 13:37:52.206838 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l4f9b_34bec0ed-386a-4792-b315-133688468971/frr-metrics/0.log" Dec 09 13:37:52 crc kubenswrapper[4970]: I1209 13:37:52.226693 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l4f9b_34bec0ed-386a-4792-b315-133688468971/kube-rbac-proxy-frr/0.log" Dec 09 13:37:52 crc kubenswrapper[4970]: I1209 13:37:52.272838 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l4f9b_34bec0ed-386a-4792-b315-133688468971/kube-rbac-proxy/0.log" Dec 09 13:37:52 crc kubenswrapper[4970]: I1209 13:37:52.472731 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-m4p9q_9321dbf0-1bca-4851-a191-4b1edaa50a77/frr-k8s-webhook-server/0.log" Dec 09 13:37:52 crc kubenswrapper[4970]: I1209 13:37:52.474539 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l4f9b_34bec0ed-386a-4792-b315-133688468971/reloader/0.log" Dec 09 13:37:52 crc kubenswrapper[4970]: I1209 13:37:52.770205 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5cc866bf98-fwgtr_2f6777ac-ebe1-4757-9d6f-4bdced219b19/manager/0.log" Dec 09 13:37:52 crc kubenswrapper[4970]: I1209 13:37:52.949877 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jhtq7_70a13e7f-528a-42d6-aca8-c2b7ef94a8f7/kube-rbac-proxy/0.log" Dec 09 13:37:52 crc kubenswrapper[4970]: I1209 13:37:52.998407 4970 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7845f8588c-zdsl7_5e09bbff-e2ce-423d-98e0-ee5305d47cce/webhook-server/0.log" Dec 09 13:37:53 crc kubenswrapper[4970]: I1209 13:37:53.738903 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jhtq7_70a13e7f-528a-42d6-aca8-c2b7ef94a8f7/speaker/0.log" Dec 09 13:37:53 crc kubenswrapper[4970]: I1209 13:37:53.739073 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-l4f9b_34bec0ed-386a-4792-b315-133688468971/frr/0.log" Dec 09 13:37:57 crc kubenswrapper[4970]: E1209 13:37:57.824517 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:38:04 crc kubenswrapper[4970]: E1209 13:38:04.815761 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:38:07 crc kubenswrapper[4970]: I1209 13:38:07.267428 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8n45x4_064f1a68-bece-4a7c-b759-f73831fe100b/util/0.log" Dec 09 13:38:07 crc kubenswrapper[4970]: I1209 13:38:07.413675 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8n45x4_064f1a68-bece-4a7c-b759-f73831fe100b/util/0.log" Dec 09 13:38:07 crc kubenswrapper[4970]: I1209 13:38:07.456846 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8n45x4_064f1a68-bece-4a7c-b759-f73831fe100b/pull/0.log" Dec 09 13:38:07 crc kubenswrapper[4970]: I1209 13:38:07.464131 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8n45x4_064f1a68-bece-4a7c-b759-f73831fe100b/pull/0.log" Dec 09 13:38:07 crc kubenswrapper[4970]: I1209 13:38:07.603644 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8n45x4_064f1a68-bece-4a7c-b759-f73831fe100b/util/0.log" Dec 09 13:38:07 crc kubenswrapper[4970]: I1209 13:38:07.621561 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8n45x4_064f1a68-bece-4a7c-b759-f73831fe100b/pull/0.log" Dec 09 13:38:07 crc kubenswrapper[4970]: I1209 13:38:07.858714 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmjjns_a193be7a-f0ec-4f93-a8f8-37dc88376a5e/util/0.log" Dec 09 13:38:07 crc kubenswrapper[4970]: I1209 13:38:07.858976 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb8n45x4_064f1a68-bece-4a7c-b759-f73831fe100b/extract/0.log" Dec 09 13:38:07 crc kubenswrapper[4970]: I1209 13:38:07.946056 4970 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmjjns_a193be7a-f0ec-4f93-a8f8-37dc88376a5e/util/0.log" Dec 09 13:38:07 crc kubenswrapper[4970]: I1209 13:38:07.983079 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmjjns_a193be7a-f0ec-4f93-a8f8-37dc88376a5e/pull/0.log" Dec 09 13:38:08 crc kubenswrapper[4970]: I1209 13:38:08.022227 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmjjns_a193be7a-f0ec-4f93-a8f8-37dc88376a5e/pull/0.log" Dec 09 13:38:08 crc kubenswrapper[4970]: I1209 13:38:08.176006 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmjjns_a193be7a-f0ec-4f93-a8f8-37dc88376a5e/pull/0.log" Dec 09 13:38:08 crc kubenswrapper[4970]: I1209 13:38:08.194389 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmjjns_a193be7a-f0ec-4f93-a8f8-37dc88376a5e/util/0.log" Dec 09 13:38:08 crc kubenswrapper[4970]: I1209 13:38:08.203633 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fmjjns_a193be7a-f0ec-4f93-a8f8-37dc88376a5e/extract/0.log" Dec 09 13:38:08 crc kubenswrapper[4970]: I1209 13:38:08.345423 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gg4rh_ce40a378-1c98-4f0e-9f2d-065ea5718fb6/util/0.log" Dec 09 13:38:08 crc kubenswrapper[4970]: I1209 13:38:08.506890 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gg4rh_ce40a378-1c98-4f0e-9f2d-065ea5718fb6/util/0.log" Dec 09 13:38:08 crc kubenswrapper[4970]: I1209 13:38:08.542215 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gg4rh_ce40a378-1c98-4f0e-9f2d-065ea5718fb6/pull/0.log" Dec 09 13:38:08 crc kubenswrapper[4970]: I1209 13:38:08.601744 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gg4rh_ce40a378-1c98-4f0e-9f2d-065ea5718fb6/pull/0.log" Dec 09 13:38:08 crc kubenswrapper[4970]: I1209 13:38:08.773032 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gg4rh_ce40a378-1c98-4f0e-9f2d-065ea5718fb6/pull/0.log" Dec 09 13:38:08 crc kubenswrapper[4970]: I1209 13:38:08.774389 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gg4rh_ce40a378-1c98-4f0e-9f2d-065ea5718fb6/util/0.log" Dec 09 13:38:08 crc kubenswrapper[4970]: E1209 13:38:08.814864 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:38:08 crc kubenswrapper[4970]: I1209 13:38:08.826978 4970 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gg4rh_ce40a378-1c98-4f0e-9f2d-065ea5718fb6/extract/0.log" Dec 09 13:38:08 crc kubenswrapper[4970]: I1209 13:38:08.971062 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fcz8fp_7137ec19-d7d4-44d9-b9ac-6c30c1ec095c/util/0.log" Dec 09 13:38:09 crc kubenswrapper[4970]: I1209 13:38:09.155846 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fcz8fp_7137ec19-d7d4-44d9-b9ac-6c30c1ec095c/util/0.log" Dec 09 13:38:09 crc kubenswrapper[4970]: I1209 13:38:09.189482 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fcz8fp_7137ec19-d7d4-44d9-b9ac-6c30c1ec095c/pull/0.log" Dec 09 13:38:09 crc kubenswrapper[4970]: I1209 13:38:09.193210 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fcz8fp_7137ec19-d7d4-44d9-b9ac-6c30c1ec095c/pull/0.log" Dec 09 13:38:09 crc kubenswrapper[4970]: I1209 13:38:09.335911 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fcz8fp_7137ec19-d7d4-44d9-b9ac-6c30c1ec095c/pull/0.log" Dec 09 13:38:09 crc kubenswrapper[4970]: I1209 13:38:09.339143 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fcz8fp_7137ec19-d7d4-44d9-b9ac-6c30c1ec095c/util/0.log" Dec 09 13:38:09 crc kubenswrapper[4970]: I1209 13:38:09.409225 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463fcz8fp_7137ec19-d7d4-44d9-b9ac-6c30c1ec095c/extract/0.log" Dec 09 13:38:09 crc kubenswrapper[4970]: I1209 13:38:09.523150 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83v8kt5_264cbeb9-2733-4026-9d3a-f348b5af2ba2/util/0.log" Dec 09 13:38:09 crc kubenswrapper[4970]: I1209 13:38:09.698796 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83v8kt5_264cbeb9-2733-4026-9d3a-f348b5af2ba2/pull/0.log" Dec 09 13:38:09 crc kubenswrapper[4970]: I1209 13:38:09.703612 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83v8kt5_264cbeb9-2733-4026-9d3a-f348b5af2ba2/pull/0.log" Dec 09 13:38:09 crc kubenswrapper[4970]: I1209 13:38:09.752110 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83v8kt5_264cbeb9-2733-4026-9d3a-f348b5af2ba2/util/0.log" Dec 09 13:38:09 crc kubenswrapper[4970]: I1209 13:38:09.929713 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83v8kt5_264cbeb9-2733-4026-9d3a-f348b5af2ba2/pull/0.log" Dec 09 13:38:09 crc kubenswrapper[4970]: I1209 13:38:09.934727 4970 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83v8kt5_264cbeb9-2733-4026-9d3a-f348b5af2ba2/util/0.log" Dec 09 13:38:09 crc kubenswrapper[4970]: I1209 13:38:09.950607 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83v8kt5_264cbeb9-2733-4026-9d3a-f348b5af2ba2/extract/0.log" Dec 09 13:38:10 crc kubenswrapper[4970]: I1209 13:38:10.112079 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lmhp4_dcbc7a48-488c-48e3-aab3-94f8e44ea7ec/extract-utilities/0.log" Dec 09 13:38:10 crc kubenswrapper[4970]: I1209 13:38:10.322441 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lmhp4_dcbc7a48-488c-48e3-aab3-94f8e44ea7ec/extract-content/0.log" Dec 09 13:38:10 crc kubenswrapper[4970]: I1209 13:38:10.350226 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lmhp4_dcbc7a48-488c-48e3-aab3-94f8e44ea7ec/extract-utilities/0.log" Dec 09 13:38:10 crc kubenswrapper[4970]: I1209 13:38:10.364597 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lmhp4_dcbc7a48-488c-48e3-aab3-94f8e44ea7ec/extract-content/0.log" Dec 09 13:38:10 crc kubenswrapper[4970]: I1209 13:38:10.481003 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lmhp4_dcbc7a48-488c-48e3-aab3-94f8e44ea7ec/extract-utilities/0.log" Dec 09 13:38:10 crc kubenswrapper[4970]: I1209 13:38:10.503885 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lmhp4_dcbc7a48-488c-48e3-aab3-94f8e44ea7ec/extract-content/0.log" Dec 09 13:38:10 crc kubenswrapper[4970]: I1209 13:38:10.687350 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lmhp4_dcbc7a48-488c-48e3-aab3-94f8e44ea7ec/registry-server/0.log" Dec 09 13:38:10 crc kubenswrapper[4970]: I1209 13:38:10.714453 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bmvnx_304240af-a6fc-4b3b-b99c-b494bfcf0c3e/extract-utilities/0.log" Dec 09 13:38:10 crc kubenswrapper[4970]: I1209 13:38:10.853647 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bmvnx_304240af-a6fc-4b3b-b99c-b494bfcf0c3e/extract-content/0.log" Dec 09 13:38:10 crc kubenswrapper[4970]: I1209 13:38:10.854709 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bmvnx_304240af-a6fc-4b3b-b99c-b494bfcf0c3e/extract-utilities/0.log" Dec 09 13:38:10 crc kubenswrapper[4970]: I1209 13:38:10.869034 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bmvnx_304240af-a6fc-4b3b-b99c-b494bfcf0c3e/extract-content/0.log" Dec 09 13:38:11 crc kubenswrapper[4970]: I1209 13:38:11.009427 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bmvnx_304240af-a6fc-4b3b-b99c-b494bfcf0c3e/extract-content/0.log" Dec 09 13:38:11 crc kubenswrapper[4970]: I1209 13:38:11.034722 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bmvnx_304240af-a6fc-4b3b-b99c-b494bfcf0c3e/extract-utilities/0.log" Dec 09 13:38:11 crc kubenswrapper[4970]: I1209 13:38:11.080213 4970 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-nhq2r_026bb6df-f015-4abc-92e2-81d9452dd101/marketplace-operator/0.log" Dec 09 13:38:11 crc kubenswrapper[4970]: I1209 13:38:11.260406 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xp4tp_79ace905-6970-463b-b79a-1412d6a23635/extract-utilities/0.log" Dec 09 13:38:11 crc kubenswrapper[4970]: I1209 13:38:11.519416 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xp4tp_79ace905-6970-463b-b79a-1412d6a23635/extract-content/0.log" Dec 09 13:38:11 crc kubenswrapper[4970]: I1209 13:38:11.583633 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xp4tp_79ace905-6970-463b-b79a-1412d6a23635/extract-content/0.log" Dec 09 13:38:11 crc kubenswrapper[4970]: I1209 13:38:11.588984 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xp4tp_79ace905-6970-463b-b79a-1412d6a23635/extract-utilities/0.log" Dec 09 13:38:11 crc kubenswrapper[4970]: I1209 13:38:11.842432 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xp4tp_79ace905-6970-463b-b79a-1412d6a23635/extract-content/0.log" Dec 09 13:38:11 crc kubenswrapper[4970]: I1209 13:38:11.855922 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xp4tp_79ace905-6970-463b-b79a-1412d6a23635/extract-utilities/0.log" Dec 09 13:38:11 crc kubenswrapper[4970]: I1209 13:38:11.867293 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-bmvnx_304240af-a6fc-4b3b-b99c-b494bfcf0c3e/registry-server/0.log" Dec 09 13:38:12 crc kubenswrapper[4970]: I1209 13:38:12.081976 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-xp4tp_79ace905-6970-463b-b79a-1412d6a23635/registry-server/0.log" Dec 09 13:38:12 crc kubenswrapper[4970]: I1209 13:38:12.102021 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f45gl_b97b6381-d7f1-4fcc-8930-34598b257999/extract-utilities/0.log" Dec 09 13:38:12 crc kubenswrapper[4970]: I1209 13:38:12.209279 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f45gl_b97b6381-d7f1-4fcc-8930-34598b257999/extract-utilities/0.log" Dec 09 13:38:12 crc kubenswrapper[4970]: I1209 13:38:12.228881 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f45gl_b97b6381-d7f1-4fcc-8930-34598b257999/extract-content/0.log" Dec 09 13:38:12 crc kubenswrapper[4970]: I1209 13:38:12.237610 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f45gl_b97b6381-d7f1-4fcc-8930-34598b257999/extract-content/0.log" Dec 09 13:38:12 crc kubenswrapper[4970]: I1209 13:38:12.393617 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f45gl_b97b6381-d7f1-4fcc-8930-34598b257999/extract-content/0.log" Dec 09 13:38:12 crc kubenswrapper[4970]: I1209 13:38:12.395428 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f45gl_b97b6381-d7f1-4fcc-8930-34598b257999/extract-utilities/0.log" Dec 09 13:38:12 crc kubenswrapper[4970]: I1209 13:38:12.594891 4970 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-f45gl_b97b6381-d7f1-4fcc-8930-34598b257999/registry-server/0.log" Dec 09 13:38:16 crc kubenswrapper[4970]: I1209 13:38:16.011279 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 13:38:16 crc kubenswrapper[4970]: I1209 13:38:16.011824 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 13:38:18 crc kubenswrapper[4970]: E1209 13:38:18.816689 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:38:23 crc kubenswrapper[4970]: E1209 13:38:23.816433 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:38:25 crc kubenswrapper[4970]: I1209 13:38:25.723954 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-p56cg_0ccba78f-ff44-4497-85b4-9c66a9289dc8/prometheus-operator/0.log" Dec 09 13:38:25 crc kubenswrapper[4970]: I1209 13:38:25.904640 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6b6dddd4fd-9zq9x_d19808d9-8361-4275-a8ec-9ca8e6d7e806/prometheus-operator-admission-webhook/0.log" Dec 09 13:38:26 crc kubenswrapper[4970]: I1209 13:38:26.010433 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6b6dddd4fd-ds949_afd71d3b-2f43-4061-951e-5fe3f5480a0d/prometheus-operator-admission-webhook/0.log" Dec 09 13:38:26 crc kubenswrapper[4970]: I1209 13:38:26.142886 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-xmr67_7eb47195-0ebb-47e8-8685-0523cff07cc4/operator/0.log" Dec 09 13:38:26 crc kubenswrapper[4970]: I1209 13:38:26.187392 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-7d5fb4cbfb-826l7_582b6a7e-cfed-498e-af7e-f93ffe3ad4bd/observability-ui-dashboards/0.log" Dec 09 13:38:26 crc kubenswrapper[4970]: I1209 13:38:26.334877 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-f6dfs_203ce897-84d1-41da-8b0e-7b6ef66698a6/perses-operator/0.log" Dec 09 13:38:29 crc kubenswrapper[4970]: E1209 13:38:29.814508 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:38:36 crc kubenswrapper[4970]: E1209 13:38:36.816750 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:38:43 crc kubenswrapper[4970]: I1209 13:38:43.141952 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-695fd4cd57-pk8wf_b71fdf7a-ee27-4791-9b67-326b387accae/kube-rbac-proxy/0.log" Dec 09 13:38:43 crc kubenswrapper[4970]: I1209 13:38:43.214972 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-695fd4cd57-pk8wf_b71fdf7a-ee27-4791-9b67-326b387accae/manager/0.log" Dec 09 13:38:43 crc kubenswrapper[4970]: E1209 13:38:43.827723 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:38:46 crc kubenswrapper[4970]: I1209 13:38:46.010759 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 13:38:46 crc kubenswrapper[4970]: I1209 13:38:46.011102 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 13:38:46 crc kubenswrapper[4970]: I1209 13:38:46.011148 4970 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" Dec 09 13:38:46 crc kubenswrapper[4970]: I1209 13:38:46.012029 4970 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cabf3221a15b6dc1a60a9e742837fab6972bbbc7f9f958e52f374513daac7197"} pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 13:38:46 crc kubenswrapper[4970]: I1209 13:38:46.012080 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" containerID="cri-o://cabf3221a15b6dc1a60a9e742837fab6972bbbc7f9f958e52f374513daac7197" gracePeriod=600 Dec 09 13:38:46 crc kubenswrapper[4970]: I1209 13:38:46.994601 4970 generic.go:334] "Generic (PLEG): container finished" podID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerID="cabf3221a15b6dc1a60a9e742837fab6972bbbc7f9f958e52f374513daac7197" exitCode=0 Dec 09 13:38:46 
crc kubenswrapper[4970]: I1209 13:38:46.995128 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerDied","Data":"cabf3221a15b6dc1a60a9e742837fab6972bbbc7f9f958e52f374513daac7197"} Dec 09 13:38:46 crc kubenswrapper[4970]: I1209 13:38:46.995161 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerStarted","Data":"f9298302e04f8b32c827b5739d34a3e136886eeebee325733064ada8665b89c4"} Dec 09 13:38:46 crc kubenswrapper[4970]: I1209 13:38:46.995184 4970 scope.go:117] "RemoveContainer" containerID="1c4bed9ff93fb37d4e1875bb5924dcc9902f12c57dd22616a64ab19553b2dcbd" Dec 09 13:38:47 crc kubenswrapper[4970]: E1209 13:38:47.823234 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:38:57 crc kubenswrapper[4970]: E1209 13:38:57.822442 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:38:59 crc kubenswrapper[4970]: E1209 13:38:59.817538 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:39:06 crc kubenswrapper[4970]: I1209 13:39:06.755437 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="d67f963f-36c2-4056-8b35-5a08e547ba33" containerName="galera" probeResult="failure" output="command timed out" Dec 09 13:39:12 crc kubenswrapper[4970]: E1209 13:39:12.815594 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:39:13 crc kubenswrapper[4970]: E1209 13:39:13.824076 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:39:25 crc kubenswrapper[4970]: E1209 13:39:25.818988 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 
13:39:25 crc kubenswrapper[4970]: I1209 13:39:25.819233 4970 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 13:39:25 crc kubenswrapper[4970]: E1209 13:39:25.968310 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 13:39:25 crc kubenswrapper[4970]: E1209 13:39:25.968391 4970 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 13:39:25 crc kubenswrapper[4970]: E1209 13:39:25.968574 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w5gl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-snjvx_openstack(cda5b8c0-51ad-4a63-bf1d-e3546f098ad3): ErrImagePull: initializing source 
docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 13:39:25 crc kubenswrapper[4970]: E1209 13:39:25.970724 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:39:37 crc kubenswrapper[4970]: E1209 13:39:37.836164 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:39:38 crc kubenswrapper[4970]: E1209 13:39:38.814398 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:39:51 crc kubenswrapper[4970]: E1209 13:39:51.816409 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:39:52 crc kubenswrapper[4970]: E1209 13:39:52.958722 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 13:39:52 crc kubenswrapper[4970]: E1209 13:39:52.959183 4970 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 13:39:52 crc kubenswrapper[4970]: E1209 13:39:52.959410 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n679h5b7h5f8h695h656h57bh5c7h7bh5c5hc4h5hbch654h5h5b7h54dh5fh688h548h57bh6fh54fh5d4hf7h5ffh54ch566h565hdchb4h65dh5fbq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqqs9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ea52f6b9-599e-4ac5-94c6-79949c705be8): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Dec 09 13:39:52 crc kubenswrapper[4970]: E1209 13:39:52.961073 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:40:04 crc kubenswrapper[4970]: E1209 13:40:04.814839 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:40:06 crc kubenswrapper[4970]: E1209 13:40:06.817981 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:40:18 crc kubenswrapper[4970]: E1209 13:40:18.822053 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:40:19 crc kubenswrapper[4970]: I1209 13:40:19.126337 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z8tcx"] Dec 09 13:40:19 crc kubenswrapper[4970]: E1209 13:40:19.128092 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb741795-9a51-4f64-9519-b857c46d3c1d" containerName="registry-server" Dec 09 13:40:19 crc kubenswrapper[4970]: I1209 13:40:19.128156 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb741795-9a51-4f64-9519-b857c46d3c1d" containerName="registry-server" Dec 09 13:40:19 crc kubenswrapper[4970]: E1209 13:40:19.128290 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb741795-9a51-4f64-9519-b857c46d3c1d" containerName="extract-utilities" Dec 09 13:40:19 crc kubenswrapper[4970]: I1209 13:40:19.128330 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb741795-9a51-4f64-9519-b857c46d3c1d" containerName="extract-utilities" Dec 09 13:40:19 crc kubenswrapper[4970]: E1209 13:40:19.128378 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb741795-9a51-4f64-9519-b857c46d3c1d" containerName="extract-content" Dec 09 13:40:19 crc kubenswrapper[4970]: I1209 13:40:19.128396 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb741795-9a51-4f64-9519-b857c46d3c1d" containerName="extract-content" Dec 09 13:40:19 crc kubenswrapper[4970]: I1209 13:40:19.129031 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb741795-9a51-4f64-9519-b857c46d3c1d" containerName="registry-server" Dec 09 13:40:19 crc kubenswrapper[4970]: I1209 13:40:19.137988 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z8tcx"] Dec 09 13:40:19 crc kubenswrapper[4970]: I1209 13:40:19.138135 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z8tcx" Dec 09 13:40:19 crc kubenswrapper[4970]: I1209 13:40:19.211356 4970 generic.go:334] "Generic (PLEG): container finished" podID="10b3359c-b6fa-40ba-bdbf-e374edd3a96a" containerID="c8879283eab36ba7525e2fd2c4bc4fc3f517ca27c1765a360a9ad1c1c3565996" exitCode=0 Dec 09 13:40:19 crc kubenswrapper[4970]: I1209 13:40:19.211419 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cfjzl/must-gather-mts4w" event={"ID":"10b3359c-b6fa-40ba-bdbf-e374edd3a96a","Type":"ContainerDied","Data":"c8879283eab36ba7525e2fd2c4bc4fc3f517ca27c1765a360a9ad1c1c3565996"} Dec 09 13:40:19 crc kubenswrapper[4970]: I1209 13:40:19.213016 4970 scope.go:117] "RemoveContainer" containerID="c8879283eab36ba7525e2fd2c4bc4fc3f517ca27c1765a360a9ad1c1c3565996" Dec 09 13:40:19 crc kubenswrapper[4970]: I1209 13:40:19.295049 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac762181-383d-4f5a-8414-b71be1e60415-catalog-content\") pod \"community-operators-z8tcx\" (UID: \"ac762181-383d-4f5a-8414-b71be1e60415\") " pod="openshift-marketplace/community-operators-z8tcx" Dec 09 13:40:19 crc kubenswrapper[4970]: I1209 13:40:19.295432 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpbfg\" (UniqueName: \"kubernetes.io/projected/ac762181-383d-4f5a-8414-b71be1e60415-kube-api-access-cpbfg\") pod \"community-operators-z8tcx\" (UID: \"ac762181-383d-4f5a-8414-b71be1e60415\") " pod="openshift-marketplace/community-operators-z8tcx" Dec 09 13:40:19 crc kubenswrapper[4970]: I1209 13:40:19.295539 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac762181-383d-4f5a-8414-b71be1e60415-utilities\") pod \"community-operators-z8tcx\" (UID: \"ac762181-383d-4f5a-8414-b71be1e60415\") " pod="openshift-marketplace/community-operators-z8tcx" Dec 09 13:40:19 crc kubenswrapper[4970]: I1209 13:40:19.397373 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpbfg\" (UniqueName: \"kubernetes.io/projected/ac762181-383d-4f5a-8414-b71be1e60415-kube-api-access-cpbfg\") pod \"community-operators-z8tcx\" (UID: \"ac762181-383d-4f5a-8414-b71be1e60415\") " pod="openshift-marketplace/community-operators-z8tcx" Dec 09 13:40:19 crc kubenswrapper[4970]: I1209 13:40:19.397479 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac762181-383d-4f5a-8414-b71be1e60415-utilities\") pod \"community-operators-z8tcx\" (UID: \"ac762181-383d-4f5a-8414-b71be1e60415\") " pod="openshift-marketplace/community-operators-z8tcx" Dec 09 13:40:19 crc kubenswrapper[4970]: I1209 13:40:19.397642 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac762181-383d-4f5a-8414-b71be1e60415-catalog-content\") pod \"community-operators-z8tcx\" (UID: \"ac762181-383d-4f5a-8414-b71be1e60415\") " pod="openshift-marketplace/community-operators-z8tcx" Dec 09 13:40:19 crc kubenswrapper[4970]: I1209 13:40:19.398070 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac762181-383d-4f5a-8414-b71be1e60415-catalog-content\") pod 
\"community-operators-z8tcx\" (UID: \"ac762181-383d-4f5a-8414-b71be1e60415\") " pod="openshift-marketplace/community-operators-z8tcx" Dec 09 13:40:19 crc kubenswrapper[4970]: I1209 13:40:19.398406 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac762181-383d-4f5a-8414-b71be1e60415-utilities\") pod \"community-operators-z8tcx\" (UID: \"ac762181-383d-4f5a-8414-b71be1e60415\") " pod="openshift-marketplace/community-operators-z8tcx" Dec 09 13:40:19 crc kubenswrapper[4970]: I1209 13:40:19.423935 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpbfg\" (UniqueName: \"kubernetes.io/projected/ac762181-383d-4f5a-8414-b71be1e60415-kube-api-access-cpbfg\") pod \"community-operators-z8tcx\" (UID: \"ac762181-383d-4f5a-8414-b71be1e60415\") " pod="openshift-marketplace/community-operators-z8tcx" Dec 09 13:40:19 crc kubenswrapper[4970]: I1209 13:40:19.488889 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z8tcx" Dec 09 13:40:19 crc kubenswrapper[4970]: I1209 13:40:19.697765 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cfjzl_must-gather-mts4w_10b3359c-b6fa-40ba-bdbf-e374edd3a96a/gather/0.log" Dec 09 13:40:19 crc kubenswrapper[4970]: E1209 13:40:19.814890 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:40:20 crc kubenswrapper[4970]: I1209 13:40:20.053749 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z8tcx"] Dec 09 13:40:20 crc kubenswrapper[4970]: I1209 13:40:20.222995 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z8tcx" event={"ID":"ac762181-383d-4f5a-8414-b71be1e60415","Type":"ContainerStarted","Data":"8f1ea97e10b68d669027508ef026d6bef31ee3a10bb718734e938db4ab977fbd"} Dec 09 13:40:21 crc kubenswrapper[4970]: I1209 13:40:21.233528 4970 generic.go:334] "Generic (PLEG): container finished" podID="ac762181-383d-4f5a-8414-b71be1e60415" containerID="e64f5640139b80932f671d7e956bdaa54056e54a3372e5d9fb9acdbf4cd90fc4" exitCode=0 Dec 09 13:40:21 crc kubenswrapper[4970]: I1209 13:40:21.233599 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z8tcx" event={"ID":"ac762181-383d-4f5a-8414-b71be1e60415","Type":"ContainerDied","Data":"e64f5640139b80932f671d7e956bdaa54056e54a3372e5d9fb9acdbf4cd90fc4"} Dec 09 13:40:23 crc kubenswrapper[4970]: I1209 13:40:23.264128 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z8tcx" event={"ID":"ac762181-383d-4f5a-8414-b71be1e60415","Type":"ContainerStarted","Data":"35c5937fba355ca14dd5aa829ad6411189deafcbf85081617b4f1312f54d8256"} Dec 09 13:40:24 crc kubenswrapper[4970]: I1209 13:40:24.281549 4970 generic.go:334] "Generic (PLEG): container finished" podID="ac762181-383d-4f5a-8414-b71be1e60415" containerID="35c5937fba355ca14dd5aa829ad6411189deafcbf85081617b4f1312f54d8256" exitCode=0 Dec 09 13:40:24 crc kubenswrapper[4970]: I1209 13:40:24.281649 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-z8tcx" event={"ID":"ac762181-383d-4f5a-8414-b71be1e60415","Type":"ContainerDied","Data":"35c5937fba355ca14dd5aa829ad6411189deafcbf85081617b4f1312f54d8256"} Dec 09 13:40:25 crc kubenswrapper[4970]: I1209 13:40:25.299343 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z8tcx" event={"ID":"ac762181-383d-4f5a-8414-b71be1e60415","Type":"ContainerStarted","Data":"171ac571380f5e0ad67a76b55f82be92f95880261cd86271ffe5552c2cda74ac"} Dec 09 13:40:25 crc kubenswrapper[4970]: I1209 13:40:25.329854 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z8tcx" podStartSLOduration=2.8575645769999998 podStartE2EDuration="6.329827921s" podCreationTimestamp="2025-12-09 13:40:19 +0000 UTC" firstStartedPulling="2025-12-09 13:40:21.235922913 +0000 UTC m=+5633.796403964" lastFinishedPulling="2025-12-09 13:40:24.708186217 +0000 UTC m=+5637.268667308" observedRunningTime="2025-12-09 13:40:25.326659926 +0000 UTC m=+5637.887141017" watchObservedRunningTime="2025-12-09 13:40:25.329827921 +0000 UTC m=+5637.890309012" Dec 09 13:40:27 crc kubenswrapper[4970]: I1209 13:40:27.937742 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cfjzl/must-gather-mts4w"] Dec 09 13:40:27 crc kubenswrapper[4970]: I1209 13:40:27.938660 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-cfjzl/must-gather-mts4w" podUID="10b3359c-b6fa-40ba-bdbf-e374edd3a96a" containerName="copy" containerID="cri-o://b19bd12dd370f6889331f7d56bd7698339c463496b7802a061314e42bcc79c46" gracePeriod=2 Dec 09 13:40:27 crc kubenswrapper[4970]: I1209 13:40:27.951235 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cfjzl/must-gather-mts4w"] Dec 09 13:40:28 crc kubenswrapper[4970]: I1209 13:40:28.331978 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cfjzl_must-gather-mts4w_10b3359c-b6fa-40ba-bdbf-e374edd3a96a/copy/0.log" Dec 09 13:40:28 crc kubenswrapper[4970]: I1209 13:40:28.332352 4970 generic.go:334] "Generic (PLEG): container finished" podID="10b3359c-b6fa-40ba-bdbf-e374edd3a96a" containerID="b19bd12dd370f6889331f7d56bd7698339c463496b7802a061314e42bcc79c46" exitCode=143 Dec 09 13:40:28 crc kubenswrapper[4970]: I1209 13:40:28.332407 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a366df8e9f9589cc31e9b8b931d2de7ca6b600ec06d25dbdc5dba3bc04f2a208" Dec 09 13:40:28 crc kubenswrapper[4970]: I1209 13:40:28.387638 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cfjzl_must-gather-mts4w_10b3359c-b6fa-40ba-bdbf-e374edd3a96a/copy/0.log" Dec 09 13:40:28 crc kubenswrapper[4970]: I1209 13:40:28.388854 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cfjzl/must-gather-mts4w" Dec 09 13:40:28 crc kubenswrapper[4970]: I1209 13:40:28.556435 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/10b3359c-b6fa-40ba-bdbf-e374edd3a96a-must-gather-output\") pod \"10b3359c-b6fa-40ba-bdbf-e374edd3a96a\" (UID: \"10b3359c-b6fa-40ba-bdbf-e374edd3a96a\") " Dec 09 13:40:28 crc kubenswrapper[4970]: I1209 13:40:28.557281 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cb7n\" (UniqueName: \"kubernetes.io/projected/10b3359c-b6fa-40ba-bdbf-e374edd3a96a-kube-api-access-7cb7n\") pod \"10b3359c-b6fa-40ba-bdbf-e374edd3a96a\" (UID: \"10b3359c-b6fa-40ba-bdbf-e374edd3a96a\") " Dec 09 13:40:28 crc kubenswrapper[4970]: I1209 13:40:28.570616 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10b3359c-b6fa-40ba-bdbf-e374edd3a96a-kube-api-access-7cb7n" (OuterVolumeSpecName: "kube-api-access-7cb7n") pod "10b3359c-b6fa-40ba-bdbf-e374edd3a96a" (UID: "10b3359c-b6fa-40ba-bdbf-e374edd3a96a"). InnerVolumeSpecName "kube-api-access-7cb7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:40:28 crc kubenswrapper[4970]: I1209 13:40:28.663415 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cb7n\" (UniqueName: \"kubernetes.io/projected/10b3359c-b6fa-40ba-bdbf-e374edd3a96a-kube-api-access-7cb7n\") on node \"crc\" DevicePath \"\"" Dec 09 13:40:28 crc kubenswrapper[4970]: I1209 13:40:28.781528 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10b3359c-b6fa-40ba-bdbf-e374edd3a96a-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "10b3359c-b6fa-40ba-bdbf-e374edd3a96a" (UID: "10b3359c-b6fa-40ba-bdbf-e374edd3a96a"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:40:28 crc kubenswrapper[4970]: I1209 13:40:28.872837 4970 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/10b3359c-b6fa-40ba-bdbf-e374edd3a96a-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 09 13:40:29 crc kubenswrapper[4970]: I1209 13:40:29.341984 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cfjzl/must-gather-mts4w" Dec 09 13:40:29 crc kubenswrapper[4970]: I1209 13:40:29.489550 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z8tcx" Dec 09 13:40:29 crc kubenswrapper[4970]: I1209 13:40:29.489598 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z8tcx" Dec 09 13:40:29 crc kubenswrapper[4970]: I1209 13:40:29.559434 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z8tcx" Dec 09 13:40:29 crc kubenswrapper[4970]: I1209 13:40:29.824925 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10b3359c-b6fa-40ba-bdbf-e374edd3a96a" path="/var/lib/kubelet/pods/10b3359c-b6fa-40ba-bdbf-e374edd3a96a/volumes" Dec 09 13:40:30 crc kubenswrapper[4970]: I1209 13:40:30.416709 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z8tcx" Dec 09 13:40:30 crc kubenswrapper[4970]: I1209 13:40:30.487078 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z8tcx"] Dec 09 13:40:30 crc kubenswrapper[4970]: E1209 13:40:30.816564 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:40:32 crc kubenswrapper[4970]: I1209 13:40:32.401499 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z8tcx" podUID="ac762181-383d-4f5a-8414-b71be1e60415" containerName="registry-server" containerID="cri-o://171ac571380f5e0ad67a76b55f82be92f95880261cd86271ffe5552c2cda74ac" gracePeriod=2 Dec 09 13:40:32 crc kubenswrapper[4970]: E1209 13:40:32.817452 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:40:32 crc kubenswrapper[4970]: I1209 13:40:32.908616 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z8tcx" Dec 09 13:40:32 crc kubenswrapper[4970]: I1209 13:40:32.970354 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpbfg\" (UniqueName: \"kubernetes.io/projected/ac762181-383d-4f5a-8414-b71be1e60415-kube-api-access-cpbfg\") pod \"ac762181-383d-4f5a-8414-b71be1e60415\" (UID: \"ac762181-383d-4f5a-8414-b71be1e60415\") " Dec 09 13:40:32 crc kubenswrapper[4970]: I1209 13:40:32.970663 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac762181-383d-4f5a-8414-b71be1e60415-utilities\") pod \"ac762181-383d-4f5a-8414-b71be1e60415\" (UID: \"ac762181-383d-4f5a-8414-b71be1e60415\") " Dec 09 13:40:32 crc kubenswrapper[4970]: I1209 13:40:32.970750 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac762181-383d-4f5a-8414-b71be1e60415-catalog-content\") pod \"ac762181-383d-4f5a-8414-b71be1e60415\" (UID: \"ac762181-383d-4f5a-8414-b71be1e60415\") " Dec 09 13:40:32 crc kubenswrapper[4970]: I1209 13:40:32.974142 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac762181-383d-4f5a-8414-b71be1e60415-utilities" (OuterVolumeSpecName: "utilities") pod "ac762181-383d-4f5a-8414-b71be1e60415" (UID: "ac762181-383d-4f5a-8414-b71be1e60415"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:40:33 crc kubenswrapper[4970]: I1209 13:40:33.011935 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac762181-383d-4f5a-8414-b71be1e60415-kube-api-access-cpbfg" (OuterVolumeSpecName: "kube-api-access-cpbfg") pod "ac762181-383d-4f5a-8414-b71be1e60415" (UID: "ac762181-383d-4f5a-8414-b71be1e60415"). InnerVolumeSpecName "kube-api-access-cpbfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:40:33 crc kubenswrapper[4970]: I1209 13:40:33.079219 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpbfg\" (UniqueName: \"kubernetes.io/projected/ac762181-383d-4f5a-8414-b71be1e60415-kube-api-access-cpbfg\") on node \"crc\" DevicePath \"\"" Dec 09 13:40:33 crc kubenswrapper[4970]: I1209 13:40:33.079283 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac762181-383d-4f5a-8414-b71be1e60415-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 13:40:33 crc kubenswrapper[4970]: I1209 13:40:33.080697 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac762181-383d-4f5a-8414-b71be1e60415-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac762181-383d-4f5a-8414-b71be1e60415" (UID: "ac762181-383d-4f5a-8414-b71be1e60415"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:40:33 crc kubenswrapper[4970]: I1209 13:40:33.182073 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac762181-383d-4f5a-8414-b71be1e60415-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 13:40:33 crc kubenswrapper[4970]: I1209 13:40:33.415058 4970 generic.go:334] "Generic (PLEG): container finished" podID="ac762181-383d-4f5a-8414-b71be1e60415" containerID="171ac571380f5e0ad67a76b55f82be92f95880261cd86271ffe5552c2cda74ac" exitCode=0 Dec 09 13:40:33 crc kubenswrapper[4970]: I1209 13:40:33.415100 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z8tcx" event={"ID":"ac762181-383d-4f5a-8414-b71be1e60415","Type":"ContainerDied","Data":"171ac571380f5e0ad67a76b55f82be92f95880261cd86271ffe5552c2cda74ac"} Dec 09 13:40:33 crc kubenswrapper[4970]: I1209 13:40:33.415131 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z8tcx" event={"ID":"ac762181-383d-4f5a-8414-b71be1e60415","Type":"ContainerDied","Data":"8f1ea97e10b68d669027508ef026d6bef31ee3a10bb718734e938db4ab977fbd"} Dec 09 13:40:33 crc kubenswrapper[4970]: I1209 13:40:33.415150 4970 scope.go:117] "RemoveContainer" containerID="171ac571380f5e0ad67a76b55f82be92f95880261cd86271ffe5552c2cda74ac" Dec 09 13:40:33 crc kubenswrapper[4970]: I1209 13:40:33.415328 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z8tcx" Dec 09 13:40:33 crc kubenswrapper[4970]: I1209 13:40:33.450897 4970 scope.go:117] "RemoveContainer" containerID="35c5937fba355ca14dd5aa829ad6411189deafcbf85081617b4f1312f54d8256" Dec 09 13:40:33 crc kubenswrapper[4970]: I1209 13:40:33.455955 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z8tcx"] Dec 09 13:40:33 crc kubenswrapper[4970]: I1209 13:40:33.469635 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z8tcx"] Dec 09 13:40:33 crc kubenswrapper[4970]: I1209 13:40:33.492616 4970 scope.go:117] "RemoveContainer" containerID="e64f5640139b80932f671d7e956bdaa54056e54a3372e5d9fb9acdbf4cd90fc4" Dec 09 13:40:33 crc kubenswrapper[4970]: I1209 13:40:33.531854 4970 scope.go:117] "RemoveContainer" containerID="171ac571380f5e0ad67a76b55f82be92f95880261cd86271ffe5552c2cda74ac" Dec 09 13:40:33 crc kubenswrapper[4970]: E1209 13:40:33.532621 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"171ac571380f5e0ad67a76b55f82be92f95880261cd86271ffe5552c2cda74ac\": container with ID starting with 171ac571380f5e0ad67a76b55f82be92f95880261cd86271ffe5552c2cda74ac not found: ID does not exist" containerID="171ac571380f5e0ad67a76b55f82be92f95880261cd86271ffe5552c2cda74ac" Dec 09 13:40:33 crc kubenswrapper[4970]: I1209 13:40:33.532674 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"171ac571380f5e0ad67a76b55f82be92f95880261cd86271ffe5552c2cda74ac"} err="failed to get container status \"171ac571380f5e0ad67a76b55f82be92f95880261cd86271ffe5552c2cda74ac\": rpc error: code = NotFound desc = could not find container \"171ac571380f5e0ad67a76b55f82be92f95880261cd86271ffe5552c2cda74ac\": container with ID starting with 171ac571380f5e0ad67a76b55f82be92f95880261cd86271ffe5552c2cda74ac not found: ID does not exist" Dec 09 
13:40:33 crc kubenswrapper[4970]: I1209 13:40:33.532706 4970 scope.go:117] "RemoveContainer" containerID="35c5937fba355ca14dd5aa829ad6411189deafcbf85081617b4f1312f54d8256" Dec 09 13:40:33 crc kubenswrapper[4970]: E1209 13:40:33.533389 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35c5937fba355ca14dd5aa829ad6411189deafcbf85081617b4f1312f54d8256\": container with ID starting with 35c5937fba355ca14dd5aa829ad6411189deafcbf85081617b4f1312f54d8256 not found: ID does not exist" containerID="35c5937fba355ca14dd5aa829ad6411189deafcbf85081617b4f1312f54d8256" Dec 09 13:40:33 crc kubenswrapper[4970]: I1209 13:40:33.533420 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35c5937fba355ca14dd5aa829ad6411189deafcbf85081617b4f1312f54d8256"} err="failed to get container status \"35c5937fba355ca14dd5aa829ad6411189deafcbf85081617b4f1312f54d8256\": rpc error: code = NotFound desc = could not find container \"35c5937fba355ca14dd5aa829ad6411189deafcbf85081617b4f1312f54d8256\": container with ID starting with 35c5937fba355ca14dd5aa829ad6411189deafcbf85081617b4f1312f54d8256 not found: ID does not exist" Dec 09 13:40:33 crc kubenswrapper[4970]: I1209 13:40:33.533442 4970 scope.go:117] "RemoveContainer" containerID="e64f5640139b80932f671d7e956bdaa54056e54a3372e5d9fb9acdbf4cd90fc4" Dec 09 13:40:33 crc kubenswrapper[4970]: E1209 13:40:33.533666 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e64f5640139b80932f671d7e956bdaa54056e54a3372e5d9fb9acdbf4cd90fc4\": container with ID starting with e64f5640139b80932f671d7e956bdaa54056e54a3372e5d9fb9acdbf4cd90fc4 not found: ID does not exist" containerID="e64f5640139b80932f671d7e956bdaa54056e54a3372e5d9fb9acdbf4cd90fc4" Dec 09 13:40:33 crc kubenswrapper[4970]: I1209 13:40:33.533685 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e64f5640139b80932f671d7e956bdaa54056e54a3372e5d9fb9acdbf4cd90fc4"} err="failed to get container status \"e64f5640139b80932f671d7e956bdaa54056e54a3372e5d9fb9acdbf4cd90fc4\": rpc error: code = NotFound desc = could not find container \"e64f5640139b80932f671d7e956bdaa54056e54a3372e5d9fb9acdbf4cd90fc4\": container with ID starting with e64f5640139b80932f671d7e956bdaa54056e54a3372e5d9fb9acdbf4cd90fc4 not found: ID does not exist" Dec 09 13:40:33 crc kubenswrapper[4970]: I1209 13:40:33.828288 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac762181-383d-4f5a-8414-b71be1e60415" path="/var/lib/kubelet/pods/ac762181-383d-4f5a-8414-b71be1e60415/volumes" Dec 09 13:40:42 crc kubenswrapper[4970]: E1209 13:40:42.816704 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:40:43 crc kubenswrapper[4970]: I1209 13:40:43.873976 4970 scope.go:117] "RemoveContainer" containerID="b19bd12dd370f6889331f7d56bd7698339c463496b7802a061314e42bcc79c46" Dec 09 13:40:43 crc kubenswrapper[4970]: I1209 13:40:43.902227 4970 scope.go:117] "RemoveContainer" containerID="c8879283eab36ba7525e2fd2c4bc4fc3f517ca27c1765a360a9ad1c1c3565996" Dec 09 13:40:45 crc kubenswrapper[4970]: E1209 13:40:45.816135 4970 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:40:46 crc kubenswrapper[4970]: I1209 13:40:46.011368 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 13:40:46 crc kubenswrapper[4970]: I1209 13:40:46.011444 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 13:40:56 crc kubenswrapper[4970]: E1209 13:40:56.817362 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:40:58 crc kubenswrapper[4970]: E1209 13:40:58.816512 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:41:09 crc kubenswrapper[4970]: E1209 13:41:09.816264 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:41:12 crc kubenswrapper[4970]: E1209 13:41:12.814744 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:41:16 crc kubenswrapper[4970]: I1209 13:41:16.011168 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 13:41:16 crc kubenswrapper[4970]: I1209 13:41:16.011595 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 13:41:21 crc kubenswrapper[4970]: E1209 13:41:21.815301 4970 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:41:23 crc kubenswrapper[4970]: E1209 13:41:23.813933 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:41:32 crc kubenswrapper[4970]: E1209 13:41:32.817738 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:41:32 crc kubenswrapper[4970]: I1209 13:41:32.959824 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b7lpd"] Dec 09 13:41:32 crc kubenswrapper[4970]: E1209 13:41:32.960381 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac762181-383d-4f5a-8414-b71be1e60415" containerName="registry-server" Dec 09 13:41:32 crc kubenswrapper[4970]: I1209 13:41:32.960401 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac762181-383d-4f5a-8414-b71be1e60415" containerName="registry-server" Dec 09 13:41:32 crc kubenswrapper[4970]: E1209 13:41:32.960445 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10b3359c-b6fa-40ba-bdbf-e374edd3a96a" containerName="gather" Dec 09 13:41:32 crc kubenswrapper[4970]: I1209 13:41:32.960455 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="10b3359c-b6fa-40ba-bdbf-e374edd3a96a" containerName="gather" Dec 09 13:41:32 crc kubenswrapper[4970]: E1209 13:41:32.960482 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10b3359c-b6fa-40ba-bdbf-e374edd3a96a" containerName="copy" Dec 09 13:41:32 crc kubenswrapper[4970]: I1209 13:41:32.960490 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="10b3359c-b6fa-40ba-bdbf-e374edd3a96a" containerName="copy" Dec 09 13:41:32 crc kubenswrapper[4970]: E1209 13:41:32.960506 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac762181-383d-4f5a-8414-b71be1e60415" containerName="extract-content" Dec 09 13:41:32 crc kubenswrapper[4970]: I1209 13:41:32.960516 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac762181-383d-4f5a-8414-b71be1e60415" containerName="extract-content" Dec 09 13:41:32 crc kubenswrapper[4970]: E1209 13:41:32.960551 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac762181-383d-4f5a-8414-b71be1e60415" containerName="extract-utilities" Dec 09 13:41:32 crc kubenswrapper[4970]: I1209 13:41:32.960558 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac762181-383d-4f5a-8414-b71be1e60415" containerName="extract-utilities" Dec 09 13:41:32 crc kubenswrapper[4970]: I1209 13:41:32.960816 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="10b3359c-b6fa-40ba-bdbf-e374edd3a96a" containerName="gather" Dec 09 13:41:32 crc kubenswrapper[4970]: I1209 13:41:32.960839 4970 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ac762181-383d-4f5a-8414-b71be1e60415" containerName="registry-server" Dec 09 13:41:32 crc kubenswrapper[4970]: I1209 13:41:32.960873 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="10b3359c-b6fa-40ba-bdbf-e374edd3a96a" containerName="copy" Dec 09 13:41:32 crc kubenswrapper[4970]: I1209 13:41:32.963051 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b7lpd" Dec 09 13:41:32 crc kubenswrapper[4970]: I1209 13:41:32.978746 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b7lpd"] Dec 09 13:41:32 crc kubenswrapper[4970]: I1209 13:41:32.980891 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpb55\" (UniqueName: \"kubernetes.io/projected/96f24ead-d1fc-413c-b58e-ee72c72e94ee-kube-api-access-zpb55\") pod \"redhat-marketplace-b7lpd\" (UID: \"96f24ead-d1fc-413c-b58e-ee72c72e94ee\") " pod="openshift-marketplace/redhat-marketplace-b7lpd" Dec 09 13:41:32 crc kubenswrapper[4970]: I1209 13:41:32.980988 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96f24ead-d1fc-413c-b58e-ee72c72e94ee-utilities\") pod \"redhat-marketplace-b7lpd\" (UID: \"96f24ead-d1fc-413c-b58e-ee72c72e94ee\") " pod="openshift-marketplace/redhat-marketplace-b7lpd" Dec 09 13:41:32 crc kubenswrapper[4970]: I1209 13:41:32.981238 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96f24ead-d1fc-413c-b58e-ee72c72e94ee-catalog-content\") pod \"redhat-marketplace-b7lpd\" (UID: \"96f24ead-d1fc-413c-b58e-ee72c72e94ee\") " pod="openshift-marketplace/redhat-marketplace-b7lpd" Dec 09 13:41:33 crc kubenswrapper[4970]: I1209 13:41:33.083425 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96f24ead-d1fc-413c-b58e-ee72c72e94ee-utilities\") pod \"redhat-marketplace-b7lpd\" (UID: \"96f24ead-d1fc-413c-b58e-ee72c72e94ee\") " pod="openshift-marketplace/redhat-marketplace-b7lpd" Dec 09 13:41:33 crc kubenswrapper[4970]: I1209 13:41:33.083626 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96f24ead-d1fc-413c-b58e-ee72c72e94ee-catalog-content\") pod \"redhat-marketplace-b7lpd\" (UID: \"96f24ead-d1fc-413c-b58e-ee72c72e94ee\") " pod="openshift-marketplace/redhat-marketplace-b7lpd" Dec 09 13:41:33 crc kubenswrapper[4970]: I1209 13:41:33.083781 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpb55\" (UniqueName: \"kubernetes.io/projected/96f24ead-d1fc-413c-b58e-ee72c72e94ee-kube-api-access-zpb55\") pod \"redhat-marketplace-b7lpd\" (UID: \"96f24ead-d1fc-413c-b58e-ee72c72e94ee\") " pod="openshift-marketplace/redhat-marketplace-b7lpd" Dec 09 13:41:33 crc kubenswrapper[4970]: I1209 13:41:33.084011 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96f24ead-d1fc-413c-b58e-ee72c72e94ee-utilities\") pod \"redhat-marketplace-b7lpd\" (UID: \"96f24ead-d1fc-413c-b58e-ee72c72e94ee\") " pod="openshift-marketplace/redhat-marketplace-b7lpd" Dec 09 13:41:33 crc kubenswrapper[4970]: I1209 13:41:33.084185 4970 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96f24ead-d1fc-413c-b58e-ee72c72e94ee-catalog-content\") pod \"redhat-marketplace-b7lpd\" (UID: \"96f24ead-d1fc-413c-b58e-ee72c72e94ee\") " pod="openshift-marketplace/redhat-marketplace-b7lpd" Dec 09 13:41:33 crc kubenswrapper[4970]: I1209 13:41:33.109430 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpb55\" (UniqueName: \"kubernetes.io/projected/96f24ead-d1fc-413c-b58e-ee72c72e94ee-kube-api-access-zpb55\") pod \"redhat-marketplace-b7lpd\" (UID: \"96f24ead-d1fc-413c-b58e-ee72c72e94ee\") " pod="openshift-marketplace/redhat-marketplace-b7lpd" Dec 09 13:41:33 crc kubenswrapper[4970]: I1209 13:41:33.284950 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b7lpd" Dec 09 13:41:33 crc kubenswrapper[4970]: I1209 13:41:33.793969 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b7lpd"] Dec 09 13:41:34 crc kubenswrapper[4970]: I1209 13:41:34.136077 4970 generic.go:334] "Generic (PLEG): container finished" podID="96f24ead-d1fc-413c-b58e-ee72c72e94ee" containerID="ee8e6b5c295e4d79060909ccc131b364367796ac0b3f976b84b34a565bc4ad61" exitCode=0 Dec 09 13:41:34 crc kubenswrapper[4970]: I1209 13:41:34.136146 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7lpd" event={"ID":"96f24ead-d1fc-413c-b58e-ee72c72e94ee","Type":"ContainerDied","Data":"ee8e6b5c295e4d79060909ccc131b364367796ac0b3f976b84b34a565bc4ad61"} Dec 09 13:41:34 crc kubenswrapper[4970]: I1209 13:41:34.136417 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7lpd" event={"ID":"96f24ead-d1fc-413c-b58e-ee72c72e94ee","Type":"ContainerStarted","Data":"4d248175c57f2c0321dccd431de8ef2b728e861b0d7e554ebc32fbdd733ac8d2"} Dec 09 13:41:35 crc kubenswrapper[4970]: I1209 13:41:35.150891 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7lpd" event={"ID":"96f24ead-d1fc-413c-b58e-ee72c72e94ee","Type":"ContainerStarted","Data":"90e15288c0e0d8aea099df2fe052aed6c39db500a00056346a1d8d2f3145fce1"} Dec 09 13:41:36 crc kubenswrapper[4970]: I1209 13:41:36.163918 4970 generic.go:334] "Generic (PLEG): container finished" podID="96f24ead-d1fc-413c-b58e-ee72c72e94ee" containerID="90e15288c0e0d8aea099df2fe052aed6c39db500a00056346a1d8d2f3145fce1" exitCode=0 Dec 09 13:41:36 crc kubenswrapper[4970]: I1209 13:41:36.164009 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7lpd" event={"ID":"96f24ead-d1fc-413c-b58e-ee72c72e94ee","Type":"ContainerDied","Data":"90e15288c0e0d8aea099df2fe052aed6c39db500a00056346a1d8d2f3145fce1"} Dec 09 13:41:37 crc kubenswrapper[4970]: I1209 13:41:37.177571 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7lpd" event={"ID":"96f24ead-d1fc-413c-b58e-ee72c72e94ee","Type":"ContainerStarted","Data":"8eead0a21cf70e0c08d2f7c412c9e1ec1449490bd121c162ee15419aee3575bd"} Dec 09 13:41:37 crc kubenswrapper[4970]: I1209 13:41:37.200817 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b7lpd" podStartSLOduration=2.766138714 podStartE2EDuration="5.200797971s" podCreationTimestamp="2025-12-09 13:41:32 +0000 UTC" firstStartedPulling="2025-12-09 13:41:34.138438028 +0000 UTC m=+5706.698919089" 
lastFinishedPulling="2025-12-09 13:41:36.573097275 +0000 UTC m=+5709.133578346" observedRunningTime="2025-12-09 13:41:37.19853019 +0000 UTC m=+5709.759011241" watchObservedRunningTime="2025-12-09 13:41:37.200797971 +0000 UTC m=+5709.761279022" Dec 09 13:41:37 crc kubenswrapper[4970]: E1209 13:41:37.822647 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:41:43 crc kubenswrapper[4970]: I1209 13:41:43.285914 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b7lpd" Dec 09 13:41:43 crc kubenswrapper[4970]: I1209 13:41:43.288049 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b7lpd" Dec 09 13:41:43 crc kubenswrapper[4970]: I1209 13:41:43.359219 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b7lpd" Dec 09 13:41:43 crc kubenswrapper[4970]: I1209 13:41:43.666160 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b7lpd" Dec 09 13:41:43 crc kubenswrapper[4970]: I1209 13:41:43.722921 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b7lpd"] Dec 09 13:41:44 crc kubenswrapper[4970]: E1209 13:41:44.816544 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:41:45 crc kubenswrapper[4970]: I1209 13:41:45.642498 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b7lpd" podUID="96f24ead-d1fc-413c-b58e-ee72c72e94ee" containerName="registry-server" containerID="cri-o://8eead0a21cf70e0c08d2f7c412c9e1ec1449490bd121c162ee15419aee3575bd" gracePeriod=2 Dec 09 13:41:46 crc kubenswrapper[4970]: I1209 13:41:46.011119 4970 patch_prober.go:28] interesting pod/machine-config-daemon-rtdjh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 13:41:46 crc kubenswrapper[4970]: I1209 13:41:46.011185 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 13:41:46 crc kubenswrapper[4970]: I1209 13:41:46.011348 4970 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" Dec 09 13:41:46 crc kubenswrapper[4970]: I1209 13:41:46.012451 4970 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"f9298302e04f8b32c827b5739d34a3e136886eeebee325733064ada8665b89c4"} pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 13:41:46 crc kubenswrapper[4970]: I1209 13:41:46.012536 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerName="machine-config-daemon" containerID="cri-o://f9298302e04f8b32c827b5739d34a3e136886eeebee325733064ada8665b89c4" gracePeriod=600 Dec 09 13:41:46 crc kubenswrapper[4970]: I1209 13:41:46.259145 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b7lpd" Dec 09 13:41:46 crc kubenswrapper[4970]: I1209 13:41:46.339310 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpb55\" (UniqueName: \"kubernetes.io/projected/96f24ead-d1fc-413c-b58e-ee72c72e94ee-kube-api-access-zpb55\") pod \"96f24ead-d1fc-413c-b58e-ee72c72e94ee\" (UID: \"96f24ead-d1fc-413c-b58e-ee72c72e94ee\") " Dec 09 13:41:46 crc kubenswrapper[4970]: I1209 13:41:46.339744 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96f24ead-d1fc-413c-b58e-ee72c72e94ee-utilities\") pod \"96f24ead-d1fc-413c-b58e-ee72c72e94ee\" (UID: \"96f24ead-d1fc-413c-b58e-ee72c72e94ee\") " Dec 09 13:41:46 crc kubenswrapper[4970]: I1209 13:41:46.339924 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96f24ead-d1fc-413c-b58e-ee72c72e94ee-catalog-content\") pod \"96f24ead-d1fc-413c-b58e-ee72c72e94ee\" (UID: \"96f24ead-d1fc-413c-b58e-ee72c72e94ee\") " Dec 09 13:41:46 crc kubenswrapper[4970]: I1209 13:41:46.340516 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96f24ead-d1fc-413c-b58e-ee72c72e94ee-utilities" (OuterVolumeSpecName: "utilities") pod "96f24ead-d1fc-413c-b58e-ee72c72e94ee" (UID: "96f24ead-d1fc-413c-b58e-ee72c72e94ee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:41:46 crc kubenswrapper[4970]: I1209 13:41:46.356538 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96f24ead-d1fc-413c-b58e-ee72c72e94ee-kube-api-access-zpb55" (OuterVolumeSpecName: "kube-api-access-zpb55") pod "96f24ead-d1fc-413c-b58e-ee72c72e94ee" (UID: "96f24ead-d1fc-413c-b58e-ee72c72e94ee"). InnerVolumeSpecName "kube-api-access-zpb55". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:41:46 crc kubenswrapper[4970]: I1209 13:41:46.375731 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96f24ead-d1fc-413c-b58e-ee72c72e94ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "96f24ead-d1fc-413c-b58e-ee72c72e94ee" (UID: "96f24ead-d1fc-413c-b58e-ee72c72e94ee"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 13:41:46 crc kubenswrapper[4970]: I1209 13:41:46.442087 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96f24ead-d1fc-413c-b58e-ee72c72e94ee-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 13:41:46 crc kubenswrapper[4970]: I1209 13:41:46.442114 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96f24ead-d1fc-413c-b58e-ee72c72e94ee-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 13:41:46 crc kubenswrapper[4970]: I1209 13:41:46.442124 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpb55\" (UniqueName: \"kubernetes.io/projected/96f24ead-d1fc-413c-b58e-ee72c72e94ee-kube-api-access-zpb55\") on node \"crc\" DevicePath \"\"" Dec 09 13:41:46 crc kubenswrapper[4970]: E1209 13:41:46.648947 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:41:46 crc kubenswrapper[4970]: I1209 13:41:46.657739 4970 generic.go:334] "Generic (PLEG): container finished" podID="a283668d-a884-4d62-95e2-1f0ae672f61c" containerID="f9298302e04f8b32c827b5739d34a3e136886eeebee325733064ada8665b89c4" exitCode=0 Dec 09 13:41:46 crc kubenswrapper[4970]: I1209 13:41:46.657808 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" event={"ID":"a283668d-a884-4d62-95e2-1f0ae672f61c","Type":"ContainerDied","Data":"f9298302e04f8b32c827b5739d34a3e136886eeebee325733064ada8665b89c4"} Dec 09 13:41:46 crc kubenswrapper[4970]: I1209 13:41:46.657846 4970 scope.go:117] "RemoveContainer" containerID="cabf3221a15b6dc1a60a9e742837fab6972bbbc7f9f958e52f374513daac7197" Dec 09 13:41:46 crc kubenswrapper[4970]: I1209 13:41:46.658775 4970 scope.go:117] "RemoveContainer" containerID="f9298302e04f8b32c827b5739d34a3e136886eeebee325733064ada8665b89c4" Dec 09 13:41:46 crc kubenswrapper[4970]: E1209 13:41:46.659337 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:41:46 crc kubenswrapper[4970]: I1209 13:41:46.664533 4970 generic.go:334] "Generic (PLEG): container finished" podID="96f24ead-d1fc-413c-b58e-ee72c72e94ee" containerID="8eead0a21cf70e0c08d2f7c412c9e1ec1449490bd121c162ee15419aee3575bd" exitCode=0 Dec 09 13:41:46 crc kubenswrapper[4970]: I1209 13:41:46.664573 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7lpd" event={"ID":"96f24ead-d1fc-413c-b58e-ee72c72e94ee","Type":"ContainerDied","Data":"8eead0a21cf70e0c08d2f7c412c9e1ec1449490bd121c162ee15419aee3575bd"} Dec 09 13:41:46 crc kubenswrapper[4970]: I1209 13:41:46.664599 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b7lpd" 
event={"ID":"96f24ead-d1fc-413c-b58e-ee72c72e94ee","Type":"ContainerDied","Data":"4d248175c57f2c0321dccd431de8ef2b728e861b0d7e554ebc32fbdd733ac8d2"} Dec 09 13:41:46 crc kubenswrapper[4970]: I1209 13:41:46.664697 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b7lpd" Dec 09 13:41:46 crc kubenswrapper[4970]: I1209 13:41:46.692482 4970 scope.go:117] "RemoveContainer" containerID="8eead0a21cf70e0c08d2f7c412c9e1ec1449490bd121c162ee15419aee3575bd" Dec 09 13:41:46 crc kubenswrapper[4970]: I1209 13:41:46.723829 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b7lpd"] Dec 09 13:41:46 crc kubenswrapper[4970]: I1209 13:41:46.737533 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b7lpd"] Dec 09 13:41:46 crc kubenswrapper[4970]: I1209 13:41:46.752934 4970 scope.go:117] "RemoveContainer" containerID="90e15288c0e0d8aea099df2fe052aed6c39db500a00056346a1d8d2f3145fce1" Dec 09 13:41:46 crc kubenswrapper[4970]: I1209 13:41:46.791499 4970 scope.go:117] "RemoveContainer" containerID="ee8e6b5c295e4d79060909ccc131b364367796ac0b3f976b84b34a565bc4ad61" Dec 09 13:41:46 crc kubenswrapper[4970]: I1209 13:41:46.844852 4970 scope.go:117] "RemoveContainer" containerID="8eead0a21cf70e0c08d2f7c412c9e1ec1449490bd121c162ee15419aee3575bd" Dec 09 13:41:46 crc kubenswrapper[4970]: E1209 13:41:46.845488 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8eead0a21cf70e0c08d2f7c412c9e1ec1449490bd121c162ee15419aee3575bd\": container with ID starting with 8eead0a21cf70e0c08d2f7c412c9e1ec1449490bd121c162ee15419aee3575bd not found: ID does not exist" containerID="8eead0a21cf70e0c08d2f7c412c9e1ec1449490bd121c162ee15419aee3575bd" Dec 09 13:41:46 crc kubenswrapper[4970]: I1209 13:41:46.845541 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8eead0a21cf70e0c08d2f7c412c9e1ec1449490bd121c162ee15419aee3575bd"} err="failed to get container status \"8eead0a21cf70e0c08d2f7c412c9e1ec1449490bd121c162ee15419aee3575bd\": rpc error: code = NotFound desc = could not find container \"8eead0a21cf70e0c08d2f7c412c9e1ec1449490bd121c162ee15419aee3575bd\": container with ID starting with 8eead0a21cf70e0c08d2f7c412c9e1ec1449490bd121c162ee15419aee3575bd not found: ID does not exist" Dec 09 13:41:46 crc kubenswrapper[4970]: I1209 13:41:46.845568 4970 scope.go:117] "RemoveContainer" containerID="90e15288c0e0d8aea099df2fe052aed6c39db500a00056346a1d8d2f3145fce1" Dec 09 13:41:46 crc kubenswrapper[4970]: E1209 13:41:46.846044 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90e15288c0e0d8aea099df2fe052aed6c39db500a00056346a1d8d2f3145fce1\": container with ID starting with 90e15288c0e0d8aea099df2fe052aed6c39db500a00056346a1d8d2f3145fce1 not found: ID does not exist" containerID="90e15288c0e0d8aea099df2fe052aed6c39db500a00056346a1d8d2f3145fce1" Dec 09 13:41:46 crc kubenswrapper[4970]: I1209 13:41:46.846091 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90e15288c0e0d8aea099df2fe052aed6c39db500a00056346a1d8d2f3145fce1"} err="failed to get container status \"90e15288c0e0d8aea099df2fe052aed6c39db500a00056346a1d8d2f3145fce1\": rpc error: code = NotFound desc = could not find container 
\"90e15288c0e0d8aea099df2fe052aed6c39db500a00056346a1d8d2f3145fce1\": container with ID starting with 90e15288c0e0d8aea099df2fe052aed6c39db500a00056346a1d8d2f3145fce1 not found: ID does not exist" Dec 09 13:41:46 crc kubenswrapper[4970]: I1209 13:41:46.846123 4970 scope.go:117] "RemoveContainer" containerID="ee8e6b5c295e4d79060909ccc131b364367796ac0b3f976b84b34a565bc4ad61" Dec 09 13:41:46 crc kubenswrapper[4970]: E1209 13:41:46.846496 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee8e6b5c295e4d79060909ccc131b364367796ac0b3f976b84b34a565bc4ad61\": container with ID starting with ee8e6b5c295e4d79060909ccc131b364367796ac0b3f976b84b34a565bc4ad61 not found: ID does not exist" containerID="ee8e6b5c295e4d79060909ccc131b364367796ac0b3f976b84b34a565bc4ad61" Dec 09 13:41:46 crc kubenswrapper[4970]: I1209 13:41:46.846518 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee8e6b5c295e4d79060909ccc131b364367796ac0b3f976b84b34a565bc4ad61"} err="failed to get container status \"ee8e6b5c295e4d79060909ccc131b364367796ac0b3f976b84b34a565bc4ad61\": rpc error: code = NotFound desc = could not find container \"ee8e6b5c295e4d79060909ccc131b364367796ac0b3f976b84b34a565bc4ad61\": container with ID starting with ee8e6b5c295e4d79060909ccc131b364367796ac0b3f976b84b34a565bc4ad61 not found: ID does not exist" Dec 09 13:41:47 crc kubenswrapper[4970]: I1209 13:41:47.831350 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96f24ead-d1fc-413c-b58e-ee72c72e94ee" path="/var/lib/kubelet/pods/96f24ead-d1fc-413c-b58e-ee72c72e94ee/volumes" Dec 09 13:41:48 crc kubenswrapper[4970]: E1209 13:41:48.814997 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:41:55 crc kubenswrapper[4970]: E1209 13:41:55.815197 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:41:58 crc kubenswrapper[4970]: I1209 13:41:58.813278 4970 scope.go:117] "RemoveContainer" containerID="f9298302e04f8b32c827b5739d34a3e136886eeebee325733064ada8665b89c4" Dec 09 13:41:58 crc kubenswrapper[4970]: E1209 13:41:58.814261 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:42:03 crc kubenswrapper[4970]: E1209 13:42:03.816508 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" 
podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:42:10 crc kubenswrapper[4970]: E1209 13:42:10.819108 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:42:13 crc kubenswrapper[4970]: I1209 13:42:13.813006 4970 scope.go:117] "RemoveContainer" containerID="f9298302e04f8b32c827b5739d34a3e136886eeebee325733064ada8665b89c4" Dec 09 13:42:13 crc kubenswrapper[4970]: E1209 13:42:13.814001 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:42:14 crc kubenswrapper[4970]: E1209 13:42:14.815225 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:42:21 crc kubenswrapper[4970]: E1209 13:42:21.816564 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:42:25 crc kubenswrapper[4970]: E1209 13:42:25.815189 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:42:27 crc kubenswrapper[4970]: I1209 13:42:27.822783 4970 scope.go:117] "RemoveContainer" containerID="f9298302e04f8b32c827b5739d34a3e136886eeebee325733064ada8665b89c4" Dec 09 13:42:27 crc kubenswrapper[4970]: E1209 13:42:27.823519 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:42:32 crc kubenswrapper[4970]: E1209 13:42:32.817626 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:42:40 crc kubenswrapper[4970]: E1209 13:42:40.815007 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:42:41 crc kubenswrapper[4970]: I1209 13:42:41.813637 4970 scope.go:117] "RemoveContainer" containerID="f9298302e04f8b32c827b5739d34a3e136886eeebee325733064ada8665b89c4" Dec 09 13:42:41 crc kubenswrapper[4970]: E1209 13:42:41.814272 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:42:44 crc kubenswrapper[4970]: E1209 13:42:44.815190 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:42:52 crc kubenswrapper[4970]: E1209 13:42:52.816463 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:42:54 crc kubenswrapper[4970]: I1209 13:42:54.814336 4970 scope.go:117] "RemoveContainer" containerID="f9298302e04f8b32c827b5739d34a3e136886eeebee325733064ada8665b89c4" Dec 09 13:42:54 crc kubenswrapper[4970]: E1209 13:42:54.815237 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:42:58 crc kubenswrapper[4970]: E1209 13:42:58.815045 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:43:05 crc kubenswrapper[4970]: I1209 13:43:05.813231 4970 scope.go:117] "RemoveContainer" containerID="f9298302e04f8b32c827b5739d34a3e136886eeebee325733064ada8665b89c4" Dec 09 13:43:05 crc kubenswrapper[4970]: E1209 13:43:05.814994 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:43:06 crc kubenswrapper[4970]: E1209 13:43:06.816356 4970 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:43:09 crc kubenswrapper[4970]: E1209 13:43:09.817320 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:43:18 crc kubenswrapper[4970]: I1209 13:43:18.813855 4970 scope.go:117] "RemoveContainer" containerID="f9298302e04f8b32c827b5739d34a3e136886eeebee325733064ada8665b89c4" Dec 09 13:43:18 crc kubenswrapper[4970]: E1209 13:43:18.817654 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:43:20 crc kubenswrapper[4970]: E1209 13:43:20.816754 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:43:20 crc kubenswrapper[4970]: E1209 13:43:20.816822 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:43:31 crc kubenswrapper[4970]: E1209 13:43:31.816638 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:43:32 crc kubenswrapper[4970]: I1209 13:43:32.812763 4970 scope.go:117] "RemoveContainer" containerID="f9298302e04f8b32c827b5739d34a3e136886eeebee325733064ada8665b89c4" Dec 09 13:43:32 crc kubenswrapper[4970]: E1209 13:43:32.813443 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:43:33 crc kubenswrapper[4970]: E1209 13:43:33.815130 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:43:42 crc kubenswrapper[4970]: E1209 13:43:42.840766 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:43:45 crc kubenswrapper[4970]: E1209 13:43:45.815473 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:43:47 crc kubenswrapper[4970]: I1209 13:43:47.827035 4970 scope.go:117] "RemoveContainer" containerID="f9298302e04f8b32c827b5739d34a3e136886eeebee325733064ada8665b89c4" Dec 09 13:43:47 crc kubenswrapper[4970]: E1209 13:43:47.828119 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:43:54 crc kubenswrapper[4970]: E1209 13:43:54.819466 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:43:59 crc kubenswrapper[4970]: E1209 13:43:59.834791 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:44:02 crc kubenswrapper[4970]: I1209 13:44:02.812703 4970 scope.go:117] "RemoveContainer" containerID="f9298302e04f8b32c827b5739d34a3e136886eeebee325733064ada8665b89c4" Dec 09 13:44:02 crc kubenswrapper[4970]: E1209 13:44:02.813300 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:44:08 crc kubenswrapper[4970]: E1209 13:44:08.815838 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:44:11 crc 
kubenswrapper[4970]: E1209 13:44:11.820291 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:44:13 crc kubenswrapper[4970]: I1209 13:44:13.813211 4970 scope.go:117] "RemoveContainer" containerID="f9298302e04f8b32c827b5739d34a3e136886eeebee325733064ada8665b89c4" Dec 09 13:44:13 crc kubenswrapper[4970]: E1209 13:44:13.814029 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:44:22 crc kubenswrapper[4970]: E1209 13:44:22.816450 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:44:23 crc kubenswrapper[4970]: E1209 13:44:23.816046 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:44:25 crc kubenswrapper[4970]: I1209 13:44:25.812872 4970 scope.go:117] "RemoveContainer" containerID="f9298302e04f8b32c827b5739d34a3e136886eeebee325733064ada8665b89c4" Dec 09 13:44:25 crc kubenswrapper[4970]: E1209 13:44:25.814114 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:44:34 crc kubenswrapper[4970]: E1209 13:44:34.816166 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:44:35 crc kubenswrapper[4970]: I1209 13:44:35.815116 4970 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 13:44:35 crc kubenswrapper[4970]: E1209 13:44:35.942139 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 13:44:35 crc kubenswrapper[4970]: E1209 13:44:35.942507 4970 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Dec 09 13:44:35 crc kubenswrapper[4970]: E1209 13:44:35.942658 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w5gl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-snjvx_openstack(cda5b8c0-51ad-4a63-bf1d-e3546f098ad3): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Dec 09 13:44:35 crc kubenswrapper[4970]: E1209 13:44:35.943836 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-heat-engine: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:44:40 crc kubenswrapper[4970]: I1209 13:44:40.812643 4970 scope.go:117] "RemoveContainer" containerID="f9298302e04f8b32c827b5739d34a3e136886eeebee325733064ada8665b89c4" Dec 09 13:44:40 crc kubenswrapper[4970]: E1209 13:44:40.813592 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:44:46 crc kubenswrapper[4970]: E1209 13:44:46.827214 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:44:47 crc kubenswrapper[4970]: E1209 13:44:47.815333 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:44:51 crc kubenswrapper[4970]: I1209 13:44:51.817110 4970 scope.go:117] "RemoveContainer" containerID="f9298302e04f8b32c827b5739d34a3e136886eeebee325733064ada8665b89c4" Dec 09 13:44:51 crc kubenswrapper[4970]: E1209 13:44:51.817865 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:44:58 crc kubenswrapper[4970]: E1209 13:44:58.940504 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 13:44:58 crc kubenswrapper[4970]: E1209 13:44:58.941071 4970 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Dec 09 13:44:58 crc kubenswrapper[4970]: E1209 13:44:58.941218 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n679h5b7h5f8h695h656h57bh5c7h7bh5c5hc4h5hbch654h5h5b7h54dh5fh688h548h57bh6fh54fh5d4hf7h5ffh54ch566h565hdchb4h65dh5fbq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqqs9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(ea52f6b9-599e-4ac5-94c6-79949c705be8): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. 
Dec 09 13:44:58 crc kubenswrapper[4970]: E1209 13:44:58.942457 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested: reading manifest current-tested in quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central: unknown: Tag current-tested was deleted or has expired. To pull, revive via time machine\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8"
Dec 09 13:45:00 crc kubenswrapper[4970]: I1209 13:45:00.163803 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421465-5f8pt"]
Dec 09 13:45:00 crc kubenswrapper[4970]: E1209 13:45:00.164438 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96f24ead-d1fc-413c-b58e-ee72c72e94ee" containerName="registry-server"
Dec 09 13:45:00 crc kubenswrapper[4970]: I1209 13:45:00.164454 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="96f24ead-d1fc-413c-b58e-ee72c72e94ee" containerName="registry-server"
Dec 09 13:45:00 crc kubenswrapper[4970]: E1209 13:45:00.164469 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96f24ead-d1fc-413c-b58e-ee72c72e94ee" containerName="extract-utilities"
Dec 09 13:45:00 crc kubenswrapper[4970]: I1209 13:45:00.164478 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="96f24ead-d1fc-413c-b58e-ee72c72e94ee" containerName="extract-utilities"
Dec 09 13:45:00 crc kubenswrapper[4970]: E1209 13:45:00.164523 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96f24ead-d1fc-413c-b58e-ee72c72e94ee" containerName="extract-content"
Dec 09 13:45:00 crc kubenswrapper[4970]: I1209 13:45:00.164531 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="96f24ead-d1fc-413c-b58e-ee72c72e94ee" containerName="extract-content"
Dec 09 13:45:00 crc kubenswrapper[4970]: I1209 13:45:00.164871 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="96f24ead-d1fc-413c-b58e-ee72c72e94ee" containerName="registry-server"
Dec 09 13:45:00 crc kubenswrapper[4970]: I1209 13:45:00.166022 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421465-5f8pt"
Dec 09 13:45:00 crc kubenswrapper[4970]: I1209 13:45:00.169590 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 09 13:45:00 crc kubenswrapper[4970]: I1209 13:45:00.170345 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 09 13:45:00 crc kubenswrapper[4970]: I1209 13:45:00.194993 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421465-5f8pt"]
Dec 09 13:45:00 crc kubenswrapper[4970]: I1209 13:45:00.292037 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24e55c59-9ef3-45b3-bd7d-931fb421bd82-secret-volume\") pod \"collect-profiles-29421465-5f8pt\" (UID: \"24e55c59-9ef3-45b3-bd7d-931fb421bd82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421465-5f8pt"
Dec 09 13:45:00 crc kubenswrapper[4970]: I1209 13:45:00.292118 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qxql\" (UniqueName: \"kubernetes.io/projected/24e55c59-9ef3-45b3-bd7d-931fb421bd82-kube-api-access-8qxql\") pod \"collect-profiles-29421465-5f8pt\" (UID: \"24e55c59-9ef3-45b3-bd7d-931fb421bd82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421465-5f8pt"
Dec 09 13:45:00 crc kubenswrapper[4970]: I1209 13:45:00.292767 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24e55c59-9ef3-45b3-bd7d-931fb421bd82-config-volume\") pod \"collect-profiles-29421465-5f8pt\" (UID: \"24e55c59-9ef3-45b3-bd7d-931fb421bd82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421465-5f8pt"
Dec 09 13:45:00 crc kubenswrapper[4970]: I1209 13:45:00.395574 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24e55c59-9ef3-45b3-bd7d-931fb421bd82-secret-volume\") pod \"collect-profiles-29421465-5f8pt\" (UID: \"24e55c59-9ef3-45b3-bd7d-931fb421bd82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421465-5f8pt"
Dec 09 13:45:00 crc kubenswrapper[4970]: I1209 13:45:00.395644 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qxql\" (UniqueName: \"kubernetes.io/projected/24e55c59-9ef3-45b3-bd7d-931fb421bd82-kube-api-access-8qxql\") pod \"collect-profiles-29421465-5f8pt\" (UID: \"24e55c59-9ef3-45b3-bd7d-931fb421bd82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421465-5f8pt"
Dec 09 13:45:00 crc kubenswrapper[4970]: I1209 13:45:00.395954 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24e55c59-9ef3-45b3-bd7d-931fb421bd82-config-volume\") pod \"collect-profiles-29421465-5f8pt\" (UID: \"24e55c59-9ef3-45b3-bd7d-931fb421bd82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421465-5f8pt"
Dec 09 13:45:00 crc kubenswrapper[4970]: I1209 13:45:00.397654 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24e55c59-9ef3-45b3-bd7d-931fb421bd82-config-volume\") pod \"collect-profiles-29421465-5f8pt\" (UID: \"24e55c59-9ef3-45b3-bd7d-931fb421bd82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421465-5f8pt"
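The 13:45:00 entries trace normal volume setup for the new collect-profiles pod: VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded for each of secret-volume, kube-api-access-8qxql, and config-volume. A small sketch, assuming this excerpt is piped in on stdin, that pairs those phases per volume to spot any mount that never completes (the regexes are keyed to the escaped quoting visible in the entries above):

    import re
    import sys
    from collections import defaultdict

    # Volume name follows 'for volume \"...\"' in the reconciler entries.
    started = re.compile(r'MountVolume started for volume \\"([^"\\]+)\\"')
    mounted = re.compile(r'MountVolume\.SetUp succeeded for volume \\"([^"\\]+)\\"')

    state = defaultdict(str)
    for line in sys.stdin:  # e.g. journalctl -u kubelet | python3 mounts.py
        for vol in started.findall(line):
            state[vol] = state[vol] or "started"
        for vol in mounted.findall(line):
            state[vol] = "mounted"

    for vol, phase in sorted(state.items()):
        print(f"{phase:8} {vol}")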
\"collect-profiles-29421465-5f8pt\" (UID: \"24e55c59-9ef3-45b3-bd7d-931fb421bd82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421465-5f8pt" Dec 09 13:45:00 crc kubenswrapper[4970]: I1209 13:45:00.769067 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24e55c59-9ef3-45b3-bd7d-931fb421bd82-secret-volume\") pod \"collect-profiles-29421465-5f8pt\" (UID: \"24e55c59-9ef3-45b3-bd7d-931fb421bd82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421465-5f8pt" Dec 09 13:45:00 crc kubenswrapper[4970]: I1209 13:45:00.769326 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qxql\" (UniqueName: \"kubernetes.io/projected/24e55c59-9ef3-45b3-bd7d-931fb421bd82-kube-api-access-8qxql\") pod \"collect-profiles-29421465-5f8pt\" (UID: \"24e55c59-9ef3-45b3-bd7d-931fb421bd82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421465-5f8pt" Dec 09 13:45:00 crc kubenswrapper[4970]: I1209 13:45:00.794610 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421465-5f8pt" Dec 09 13:45:01 crc kubenswrapper[4970]: I1209 13:45:01.253544 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421465-5f8pt"] Dec 09 13:45:01 crc kubenswrapper[4970]: W1209 13:45:01.260696 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24e55c59_9ef3_45b3_bd7d_931fb421bd82.slice/crio-8bdc5f060daeb2ad06164624f0f7eb68a1833d176634a0c7da71576f0b10a94e WatchSource:0}: Error finding container 8bdc5f060daeb2ad06164624f0f7eb68a1833d176634a0c7da71576f0b10a94e: Status 404 returned error can't find the container with id 8bdc5f060daeb2ad06164624f0f7eb68a1833d176634a0c7da71576f0b10a94e Dec 09 13:45:01 crc kubenswrapper[4970]: E1209 13:45:01.815619 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:45:02 crc kubenswrapper[4970]: I1209 13:45:02.130297 4970 generic.go:334] "Generic (PLEG): container finished" podID="24e55c59-9ef3-45b3-bd7d-931fb421bd82" containerID="67e19b72b221175f153a6fb602297133bbcae97c9a3c61d73925d1578c3ec473" exitCode=0 Dec 09 13:45:02 crc kubenswrapper[4970]: I1209 13:45:02.130647 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421465-5f8pt" event={"ID":"24e55c59-9ef3-45b3-bd7d-931fb421bd82","Type":"ContainerDied","Data":"67e19b72b221175f153a6fb602297133bbcae97c9a3c61d73925d1578c3ec473"} Dec 09 13:45:02 crc kubenswrapper[4970]: I1209 13:45:02.130736 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421465-5f8pt" event={"ID":"24e55c59-9ef3-45b3-bd7d-931fb421bd82","Type":"ContainerStarted","Data":"8bdc5f060daeb2ad06164624f0f7eb68a1833d176634a0c7da71576f0b10a94e"} Dec 09 13:45:03 crc kubenswrapper[4970]: I1209 13:45:03.609985 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421465-5f8pt" Dec 09 13:45:03 crc kubenswrapper[4970]: I1209 13:45:03.684111 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24e55c59-9ef3-45b3-bd7d-931fb421bd82-config-volume\") pod \"24e55c59-9ef3-45b3-bd7d-931fb421bd82\" (UID: \"24e55c59-9ef3-45b3-bd7d-931fb421bd82\") " Dec 09 13:45:03 crc kubenswrapper[4970]: I1209 13:45:03.684345 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qxql\" (UniqueName: \"kubernetes.io/projected/24e55c59-9ef3-45b3-bd7d-931fb421bd82-kube-api-access-8qxql\") pod \"24e55c59-9ef3-45b3-bd7d-931fb421bd82\" (UID: \"24e55c59-9ef3-45b3-bd7d-931fb421bd82\") " Dec 09 13:45:03 crc kubenswrapper[4970]: I1209 13:45:03.684497 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24e55c59-9ef3-45b3-bd7d-931fb421bd82-secret-volume\") pod \"24e55c59-9ef3-45b3-bd7d-931fb421bd82\" (UID: \"24e55c59-9ef3-45b3-bd7d-931fb421bd82\") " Dec 09 13:45:03 crc kubenswrapper[4970]: I1209 13:45:03.685120 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24e55c59-9ef3-45b3-bd7d-931fb421bd82-config-volume" (OuterVolumeSpecName: "config-volume") pod "24e55c59-9ef3-45b3-bd7d-931fb421bd82" (UID: "24e55c59-9ef3-45b3-bd7d-931fb421bd82"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 13:45:03 crc kubenswrapper[4970]: I1209 13:45:03.754372 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24e55c59-9ef3-45b3-bd7d-931fb421bd82-kube-api-access-8qxql" (OuterVolumeSpecName: "kube-api-access-8qxql") pod "24e55c59-9ef3-45b3-bd7d-931fb421bd82" (UID: "24e55c59-9ef3-45b3-bd7d-931fb421bd82"). InnerVolumeSpecName "kube-api-access-8qxql". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 13:45:03 crc kubenswrapper[4970]: I1209 13:45:03.757336 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24e55c59-9ef3-45b3-bd7d-931fb421bd82-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "24e55c59-9ef3-45b3-bd7d-931fb421bd82" (UID: "24e55c59-9ef3-45b3-bd7d-931fb421bd82"). InnerVolumeSpecName "secret-volume". 
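The collect-profiles pod then runs to completion: PLEG reports the container finishing with exitCode=0, and the reconciler unmounts and tears down all three volumes. A sketch, again assuming the excerpt on stdin, that tallies the "SyncLoop (PLEG)" events per pod so each ContainerStarted can be matched against a ContainerDied:

    import re
    import sys
    from collections import Counter

    # PLEG entries carry a JSON-ish event payload; the Type field
    # (ContainerStarted / ContainerDied) is enough for a summary.
    pleg = re.compile(r'event for pod" pod="([^"]+)" event=\{[^}]*"Type":"(\w+)"')

    counts = Counter()
    for line in sys.stdin:
        for pod, etype in pleg.findall(line):
            counts[(pod, etype)] += 1

    for (pod, etype), n in counts.most_common():
        print(f"{n:4d} {etype:16} {pod}")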
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 13:45:03 crc kubenswrapper[4970]: I1209 13:45:03.788428 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qxql\" (UniqueName: \"kubernetes.io/projected/24e55c59-9ef3-45b3-bd7d-931fb421bd82-kube-api-access-8qxql\") on node \"crc\" DevicePath \"\"" Dec 09 13:45:03 crc kubenswrapper[4970]: I1209 13:45:03.788466 4970 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24e55c59-9ef3-45b3-bd7d-931fb421bd82-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 13:45:03 crc kubenswrapper[4970]: I1209 13:45:03.788481 4970 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24e55c59-9ef3-45b3-bd7d-931fb421bd82-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 13:45:04 crc kubenswrapper[4970]: I1209 13:45:04.174387 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421465-5f8pt" event={"ID":"24e55c59-9ef3-45b3-bd7d-931fb421bd82","Type":"ContainerDied","Data":"8bdc5f060daeb2ad06164624f0f7eb68a1833d176634a0c7da71576f0b10a94e"} Dec 09 13:45:04 crc kubenswrapper[4970]: I1209 13:45:04.174441 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bdc5f060daeb2ad06164624f0f7eb68a1833d176634a0c7da71576f0b10a94e" Dec 09 13:45:04 crc kubenswrapper[4970]: I1209 13:45:04.174421 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421465-5f8pt" Dec 09 13:45:04 crc kubenswrapper[4970]: I1209 13:45:04.698668 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421420-jt5nm"] Dec 09 13:45:04 crc kubenswrapper[4970]: I1209 13:45:04.711826 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421420-jt5nm"] Dec 09 13:45:05 crc kubenswrapper[4970]: I1209 13:45:05.814054 4970 scope.go:117] "RemoveContainer" containerID="f9298302e04f8b32c827b5739d34a3e136886eeebee325733064ada8665b89c4" Dec 09 13:45:05 crc kubenswrapper[4970]: E1209 13:45:05.814828 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:45:05 crc kubenswrapper[4970]: I1209 13:45:05.830296 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0" path="/var/lib/kubelet/pods/58d90aed-bbd6-4bf3-8fc0-34d12dcc0bf0/volumes" Dec 09 13:45:11 crc kubenswrapper[4970]: E1209 13:45:11.814589 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:45:14 crc kubenswrapper[4970]: E1209 13:45:14.815157 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3" Dec 09 13:45:18 crc kubenswrapper[4970]: I1209 13:45:18.815227 4970 scope.go:117] "RemoveContainer" containerID="f9298302e04f8b32c827b5739d34a3e136886eeebee325733064ada8665b89c4" Dec 09 13:45:18 crc kubenswrapper[4970]: E1209 13:45:18.816968 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rtdjh_openshift-machine-config-operator(a283668d-a884-4d62-95e2-1f0ae672f61c)\"" pod="openshift-machine-config-operator/machine-config-daemon-rtdjh" podUID="a283668d-a884-4d62-95e2-1f0ae672f61c" Dec 09 13:45:23 crc kubenswrapper[4970]: E1209 13:45:23.815872 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="ea52f6b9-599e-4ac5-94c6-79949c705be8" Dec 09 13:45:29 crc kubenswrapper[4970]: E1209 13:45:29.815920 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-snjvx" podUID="cda5b8c0-51ad-4a63-bf1d-e3546f098ad3"
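Throughout this window machine-config-daemon stays in CrashLoopBackOff while ceilometer-0 and heat-db-sync-snjvx stay in ImagePullBackOff; "back-off 5m0s" means the restart backoff has reached its ceiling. An illustrative sketch of that schedule, assuming the kubelet's default container restart backoff (roughly 10s initial, doubling, capped at 5m; the constants are kubelet defaults, not taken from this log):

    # Doubling-with-cap backoff schedule implied by the
    # "back-off 5m0s restarting failed container" messages above.
    def backoff_schedule(initial=10, cap=300, steps=8):
        delay = initial
        for _ in range(steps):
            yield delay
            delay = min(delay * 2, cap)

    print(list(backoff_schedule()))  # [10, 20, 40, 80, 160, 300, 300, 300]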